Local RAG Agent
A full-stack RAG (Retrieval-Augmented Generation) chatbot that runs entirely locally

About the Project
local-rag-chatbot is a full-stack application for private document interaction using Retrieval-Augmented Generation (RAG). A FastAPI backend handles document ingestion (PDF, TXT, DOCX), generates embeddings with sentence-transformers, and stores them in PostgreSQL with the pgvector extension for fast similarity search. Answers are generated by a local LLM served through Ollama, so sensitive data never leaves your own infrastructure.

The frontend is a Next.js 14 application styled with TailwindCSS. It offers real-time chat, session history management, and source citations for each AI response. The entire stack follows a "local-first" philosophy and ships containerized with Docker, and it includes JWT-based authentication, role-based access control, and WCAG 2.1 accessibility compliance.
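To make the retrieval step concrete, here is a minimal, self-contained sketch of the similarity search that pgvector performs on stored embeddings. The tiny 3-dimensional vectors stand in for real sentence-transformers output (which is typically 384+ dimensions), and the function names are illustrative, not taken from the project's code.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k_chunks(query_vec, chunk_vecs, k=2):
    """Return the indices of the k chunks most similar to the query vector."""
    ranked = sorted(
        range(len(chunk_vecs)),
        key=lambda i: cosine_similarity(query_vec, chunk_vecs[i]),
        reverse=True,
    )
    return ranked[:k]

# Toy "embeddings" for three document chunks.
chunks = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.9, 0.1, 0.0]]
query = [1.0, 0.0, 0.1]
print(top_k_chunks(query, chunks))  # → [0, 2]
```

In production the same ranking is done inside PostgreSQL (e.g. with pgvector's distance operators and an index), which avoids pulling every embedding into application memory.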
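The generation step sends the retrieved chunks plus the user's question to Ollama. The sketch below builds a request payload for Ollama's `/api/generate` endpoint; the prompt template, model name, and helper are assumptions for illustration and are not the project's actual template.

```python
import json

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_rag_payload(question, chunks, model="llama3"):
    """Assemble an Ollama /api/generate payload from retrieved chunks.

    The prompt layout and model name here are illustrative assumptions.
    """
    context = "\n\n".join(f"[{i + 1}] {c}" for i, c in enumerate(chunks))
    prompt = (
        "Answer using only the context below and cite sources as [n].\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
    return {"model": model, "prompt": prompt, "stream": False}

payload = build_rag_payload(
    "What does pgvector do?",
    ["pgvector adds vector similarity search to PostgreSQL."],
)
print(json.dumps(payload, indent=2))
# With an Ollama server running, the call would be:
#   requests.post(OLLAMA_URL, json=payload).json()["response"]
```

Numbering the chunks in the prompt is what lets the model emit `[n]` markers, which the frontend can map back to the source citations shown in the chat UI.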
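For the authentication layer, a JWT is three base64url segments (header, claims, HMAC signature) joined by dots. The standard-library sketch below shows the HS256 signing and verification mechanics; a real backend would use a maintained library such as PyJWT, and the secret and claims here are placeholders.

```python
import base64
import hashlib
import hmac
import json
import time

def _b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs require."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_token(claims: dict, secret: bytes) -> str:
    """Build an HS256-signed JWT (illustrative; use PyJWT in production)."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = _b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify_token(token: str, secret: bytes) -> bool:
    """Check the signature only; expiry and role checks would layer on top."""
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = _b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)

token = sign_token(
    {"sub": "alice", "role": "admin", "exp": int(time.time()) + 3600},
    b"dev-secret",
)
print(verify_token(token, b"dev-secret"))  # → True
```

Role-based access control then reduces to reading the `role` claim from a verified token before allowing a request through; `hmac.compare_digest` is used to avoid timing side channels when comparing signatures.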