
FastAPI + Ollama + LangChain + Chroma

Query

This demo shows a Retrieval-Augmented Generation (RAG) workflow built around a local LLM. Questions can be sent directly to the model, or routed through a retrieval step that searches a vector database built from the HR, Policy, and Sales document corpora. The most relevant document chunks are injected into the model prompt so answers are grounded in source material and returned with citations. The system also demonstrates simple automation logic by detecting when responses should be routed to Slack or Discord, with manual review safeguards for ambiguous cases.
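The flow described above (similarity search over document chunks, prompt injection with citations, and channel routing with a manual-review fallback) can be sketched in dependency-free Python. Everything here is illustrative: the toy corpus, the bag-of-words `embed` stand-in, and the `route` keyword rule are invented for the sketch; a real deployment would use a proper embedding model, a Chroma vector store, and an LLM call instead.

```python
# Minimal sketch of the demo's RAG + routing flow (assumptions, not the
# actual implementation): embed() stands in for an embedding model, and
# CORPUS stands in for the ingested HR / Policy / Sales chunks.
from collections import Counter
import math

CORPUS = [
    {"id": "hr-001", "source": "HR", "text": "Employees accrue vacation days monthly."},
    {"id": "pol-007", "source": "Policy", "text": "Expense reports require manager approval."},
    {"id": "sal-003", "source": "Sales", "text": "Quarterly sales targets are set in January."},
]

def embed(text: str) -> Counter:
    """Stand-in embedding: a bag-of-words vector. A real system would call
    an embedding model and store the vectors in Chroma."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 2):
    """Return the k most similar chunks, mimicking a vector-store search."""
    q = embed(query)
    ranked = sorted(CORPUS, key=lambda d: cosine(q, embed(d["text"])), reverse=True)
    return ranked[:k]

def build_prompt(query: str, chunks) -> str:
    """Inject retrieved chunks into the model prompt so the answer is
    grounded in source material and can cite chunk ids."""
    context = "\n".join(f"[{c['id']}] {c['text']}" for c in chunks)
    return (
        "Answer using only the context below. Cite chunk ids.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

def route(answer: str) -> str:
    """Toy automation rule (invented for illustration): route when exactly
    one channel is named, otherwise flag the case for manual review."""
    hits = [ch for ch in ("slack", "discord") if ch in answer.lower()]
    return hits[0] if len(hits) == 1 else "manual_review"

if __name__ == "__main__":
    chunks = retrieve("How do vacation days work?")
    print([c["id"] for c in chunks])          # cited chunk ids
    print(route("Post this summary to Slack"))
```

The manual-review fallback in `route` mirrors the safeguard mentioned above: an answer that names both channels (or neither) is ambiguous, so it is held for a human rather than auto-posted.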

Answer
RAG Query JSON
Integration JSON
Citations