I am Christos Papageorgiou, an Artificial Intelligence engineer with a strong academic foundation and experience across multiple AI domains, including machine learning, computer vision, cognitive modeling, and intelligent agents. My primary focus lies in large language models (LLMs) and generative AI, where I design and implement systems that reason, generate, and interact intelligently. I specialize in building retrieval-augmented generation (RAG) pipelines, agentic and multi-agent systems, LLM evaluation frameworks, and prompt optimization strategies that enhance reliability and contextual understanding. My work extends to developing custom chatbots and intelligent assistants capable of dynamic reasoning, tool use, and Firebase-integrated communication for real-world applications.
Beyond AI research, I have a solid background in SaaS development, having engineered CRM and data-driven web applications using technologies such as Python, Odoo, PostgreSQL, and Docker. This combination of AI expertise and software engineering enables me to bridge theory with production-ready systems, transforming complex ideas into intelligent, deployable solutions.
What I build
- LLM-powered assistants for real products
- Agentic / multi-agent workflows with human-in-the-loop
- RAG chatbots over product / website / Firestore data
- Admin + analytics layer for AI apps
- Backend endpoints that connect AI ↔ Firebase ↔ UI
AI & LLM toolkit
1. LLM Orchestration (LangChain / LangGraph)
I design multi-step LLM workflows that do more than answer a single prompt. Using LangChain and LangGraph (a minimal graph sketch follows this list), I handle:
- Definition of LLM pipelines with nodes, edges, and conditional branches
- Tool-enabled agents (search, DB access, API calls)
- Interruptions and human-in-the-loop approval steps
- Reusable prompt templates and structured outputs per step
- Separation of “reasoning tool” vs “action tool”
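For a concrete picture, here is a minimal LangGraph sketch of the pattern above: a router node decides whether the question needs a data lookup, and the graph branches accordingly. It assumes the langgraph and langchain-openai packages; the model name and node logic are illustrative placeholders, not a production pipeline.

```python
# Minimal LangGraph sketch: nodes, a conditional branch, and an "action" node.
# Assumes langgraph + langchain-openai; model name and node logic are placeholders.
from typing import TypedDict
from langgraph.graph import StateGraph, START, END
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")  # assumed model

class State(TypedDict):
    question: str
    needs_lookup: bool
    answer: str

def router(state: State) -> State:
    # "Reasoning tool": decide whether the question needs a data lookup.
    verdict = llm.invoke(f"Does this question need a database lookup? yes/no: {state['question']}")
    return {**state, "needs_lookup": "yes" in verdict.content.lower()}

def lookup(state: State) -> State:
    # "Action tool" placeholder: fetch context from a DB or API, then answer.
    return {**state, "answer": "Answer grounded in retrieved data."}

def respond(state: State) -> State:
    return {**state, "answer": llm.invoke(f"Answer directly: {state['question']}").content}

graph = StateGraph(State)
graph.add_node("router", router)
graph.add_node("lookup", lookup)
graph.add_node("respond", respond)
graph.add_edge(START, "router")
graph.add_conditional_edges("router", lambda s: "lookup" if s["needs_lookup"] else "respond")
graph.add_edge("lookup", END)
graph.add_edge("respond", END)
app = graph.compile()  # app.invoke({"question": "..."}) runs the workflow
```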

2. Agentic Systems & Memory
I build agent-based setups including:
- Single-agent and multi-agent setups (planner, grader, retriever, executor)
- Custom tool and node definitions for specialized tasks
- Memory strategies: standard, summarized, and stateful / conversation-level
- Structured output via Pydantic tool binding (see the sketch after this list)
- JSON schema enforcement with Kor for reliable downstream processing
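As an illustration of the Pydantic point above, a minimal structured-output sketch, assuming langchain-openai; the TicketTriage schema and model name are invented for the example.

```python
# Minimal structured-output sketch: the Pydantic schema is bound to the model,
# so the reply comes back as validated fields rather than free text.
# Assumes langchain-openai; schema and model name are illustrative.
from pydantic import BaseModel, Field
from langchain_openai import ChatOpenAI

class TicketTriage(BaseModel):
    category: str = Field(description="billing, technical, or general")
    urgent: bool = Field(description="True if the user needs an immediate reply")
    summary: str = Field(description="One-sentence summary for the admin log")

llm = ChatOpenAI(model="gpt-4o-mini")
structured_llm = llm.with_structured_output(TicketTriage)

result = structured_llm.invoke("My invoice was charged twice, please fix this today!")
print(result.category, result.urgent)  # validated fields, safe for downstream code
```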
3. Knowledge & RAG
I make assistants answer from real data (a minimal retrieval sketch follows this list) using:
- Content ingestion (websites, project docs, Firestore data)
- Embedding creation and storage in Firebase
- Similarity-based retrieval
- RAG pipelines that attach sources/citations to answers
- Hybrid setups: agentic graph + RAG node (when lookup is necessary)
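A minimal retrieval sketch for the flow above: embed the question, rank stored chunks by cosine similarity, and answer with inline source tags. The chunk format and the step that loads chunks (e.g. from Firestore) are assumptions and not shown.

```python
# Minimal RAG sketch: rank pre-embedded chunks by cosine similarity and
# answer with source tags. Loading chunks (e.g. from Firestore) is assumed.
import numpy as np
from langchain_openai import OpenAIEmbeddings, ChatOpenAI

embedder = OpenAIEmbeddings(model="text-embedding-3-small")
llm = ChatOpenAI(model="gpt-4o-mini")

def answer(question: str, chunks: list[dict]) -> str:
    # chunks: [{"text": ..., "source": ..., "embedding": [...]}, ...]
    q = np.array(embedder.embed_query(question))
    def score(c):
        e = np.array(c["embedding"])
        return float(np.dot(q, e) / (np.linalg.norm(q) * np.linalg.norm(e)))
    top = sorted(chunks, key=score, reverse=True)[:3]
    context = "\n\n".join(f"[{c['source']}] {c['text']}" for c in top)
    prompt = (f"Answer using only this context and cite the [source] tags.\n\n"
              f"{context}\n\nQuestion: {question}")
    return llm.invoke(prompt).content
```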
4. Model Adaptation
I tailor models to a domain and measure quality by (a DSPy sketch follows this list):
- Fine-tuning open-source models (e.g. with unsloth)
- Fine-tuning closed-source LLMs (OpenAI fine-tuning workflows)
- Prompt optimization and programmatic prompting with DSPy
- Evaluation loops to compare prompts, models, and tool strategies
- Integration of tuned models into agentic graphs and backend endpoints
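As a small example of programmatic prompting with DSPy, the sketch below declares a grading task as a typed signature instead of a hand-written prompt; the task and model name are illustrative assumptions.

```python
# Minimal DSPy sketch: the task is declared as a signature and DSPy manages
# the prompt, which makes it optimizable later. Model name is an assumption.
import dspy

dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

class GradeAnswer(dspy.Signature):
    """Judge whether an answer is supported by the given context."""
    context: str = dspy.InputField()
    answer: str = dspy.InputField()
    supported: bool = dspy.OutputField()

grader = dspy.ChainOfThought(GradeAnswer)
result = grader(context="The store opens at 9am.", answer="It opens at 9am.")
print(result.supported)
```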

Product-level integration
Most of my AI work is not in notebooks. It’s connected to data and UI.
Backend Services and Endpoints
Building the bridge between frontend and AI logic.
- Design and implementation of REST endpoints that trigger AI workflows.
- Endpoints can return AI results directly or trigger longer agentic flows that update the database.
- Endpoints organized per feature (auth, chat, FAQ, RAG, admin actions), so the frontend only calls clean, documented endpoints (see the endpoint sketch after this list).
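A sketch of such an endpoint. FastAPI is an assumed framework choice here (not named above), and run_chat_graph is a stand-in for the real compiled agent workflow.

```python
# Sketch of a REST endpoint that fronts an AI workflow. FastAPI is assumed
# as the framework; run_chat_graph is a stand-in for the real agent graph.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ChatRequest(BaseModel):
    session_id: str
    message: str

def run_chat_graph(session_id: str, message: str) -> dict:
    # Placeholder for the compiled workflow (e.g. a LangGraph app.invoke call).
    return {"answer": f"Echo for {session_id}: {message}", "sources": []}

@app.post("/chat")
async def chat(req: ChatRequest):
    # Return the AI result directly; longer agentic flows could instead enqueue
    # a job and write their output to the database for the UI to pick up.
    result = run_chat_graph(req.session_id, req.message)
    return {"answer": result["answer"], "sources": result["sources"]}
```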
Firebase Integration
Real-time backend foundation for state, analytics, and configuration.
- Data persistence for user sessions, chat histories, and app state.
- Per-user session tracking (for chatbots) so the assistant keeps context between turns (sketched after this list).
- User profile storage and updates (preferences, model selection, UI settings).
- Ability to design and maintain complex NoSQL Firestore structures for multi-layered apps.
- Storing embeddings or vector references in Firebase to support RAG / similarity lookup.
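A sketch of the per-user session tracking bullet, using the firebase-admin SDK; the service-account path and collection names are illustrative assumptions.

```python
# Per-session chat persistence sketch with the firebase-admin SDK.
# Key path and collection names are illustrative assumptions.
import firebase_admin
from firebase_admin import credentials, firestore

firebase_admin.initialize_app(credentials.Certificate("serviceAccount.json"))
db = firestore.client()

def append_turn(session_id: str, role: str, text: str) -> None:
    # Each chat turn becomes a document under sessions/{id}/messages.
    db.collection("sessions").document(session_id).collection("messages").add({
        "role": role,
        "text": text,
        "created_at": firestore.SERVER_TIMESTAMP,
    })

def load_history(session_id: str) -> list[dict]:
    # Read turns back in order so the assistant keeps context between turns.
    docs = (db.collection("sessions").document(session_id)
              .collection("messages").order_by("created_at").stream())
    return [d.to_dict() for d in docs]
```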
AI App ↔ Firestore ↔ Admin Page Collaboration
Coordinated loop among the three pillars of an AI product.
- The AI app performs reasoning and actions using Firestore as a knowledge/state layer.
- The Firestore database captures user interactions, logs, and embeddings.
- The admin page lets clients change settings, prompts, or app behavior without direct Firebase access, and also exposes app analytics.
- Result: admins control the AI, the AI reads and writes Firestore, and the UI reflects the updated behavior (a config-read sketch follows this list).
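One concrete slice of this loop, as a sketch: the assistant reads its system prompt and model choice from a Firestore config document that the admin page edits. Collection and field names are assumptions.

```python
# Admin → Firestore → AI sketch: the agent pulls its configuration from a
# document the admin page writes. Collection and field names are assumptions.
import firebase_admin
from firebase_admin import credentials, firestore

try:
    firebase_admin.get_app()  # reuse the app if it was initialized at startup
except ValueError:
    firebase_admin.initialize_app(credentials.Certificate("serviceAccount.json"))
db = firestore.client()

def load_assistant_config() -> dict:
    cfg = db.collection("config").document("assistant").get().to_dict() or {}
    return {
        "system_prompt": cfg.get("system_prompt", "You are a helpful assistant."),
        "model": cfg.get("model", "gpt-4o-mini"),
    }
```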
Database-Aware Tools (SQL / NoSQL)
Integrating live data access inside agent workflows.
- Custom agent tools that query SQL databases (e.g. PostgreSQL) or Firestore collections mid-workflow (see the tool sketch after this list).
- Query results flow back into the agent's reasoning, so answers reflect live application data rather than stale context.
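A sketch of one such tool, assuming LangChain's @tool decorator and a PostgreSQL connection via psycopg2; the table, query, and connection string are invented for illustration.

```python
# Database-aware tool sketch: a SQL query exposed to the agent as a tool.
# Table, query, and connection string are illustrative assumptions.
import psycopg2
from langchain_core.tools import tool

@tool
def count_open_orders(customer_id: str) -> int:
    """Return how many open orders a customer has in the CRM database."""
    conn = psycopg2.connect("dbname=crm user=app")  # assumed DSN
    with conn, conn.cursor() as cur:
        cur.execute(
            "SELECT COUNT(*) FROM orders WHERE customer_id = %s AND status = 'open'",
            (customer_id,),
        )
        return cur.fetchone()[0]

# Bound to the model alongside other tools, e.g. llm.bind_tools([count_open_orders]),
# so the agent can pull live data mid-workflow.
```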

Model Context Protocol (MCP) Access
Expanding what the agent can reach beyond its native environment.
- Implementation of MCP connectors so agents can call external APIs, private data endpoints, or third-party systems.
- Enables agents to combine knowledge from multiple contexts in real time (client sketch below).
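A minimal client sketch using the official MCP Python SDK's stdio pattern; the server command and tool name are illustrative assumptions.

```python
# Minimal MCP client sketch (official Python SDK, stdio transport).
# The server script and tool name are illustrative assumptions.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server = StdioServerParameters(command="python", args=["crm_mcp_server.py"])

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()  # discover what the server exposes
            print([t.name for t in tools.tools])
            result = await session.call_tool("lookup_customer", {"email": "user@example.com"})
            print(result.content)

asyncio.run(main())
```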
