A minimal Agentic RAG built with LangGraph — learn Retrieval-Augmented Generation Agents in minutes.
Updated Dec 4, 2025 - Jupyter Notebook
Client-side retrieval firewall for RAG systems — blocks prompt injection and secret leaks, re-ranks stale or untrusted content, and keeps all data inside your environment.
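The core idea of a retrieval firewall (scanning retrieved chunks for injection phrases and leaked secrets before they reach the LLM) can be illustrated with a small, dependency-free sketch. The patterns below are illustrative assumptions, not that project's actual rule set:

```python
import re

# Hypothetical rule lists; a real firewall would ship a much larger,
# regularly updated set of patterns.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all|previous|above) instructions", re.I),
    re.compile(r"you are now", re.I),
]
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),   # OpenAI-style API key shape
    re.compile(r"AKIA[0-9A-Z]{16}"),      # AWS access key id shape
]

def filter_chunks(chunks):
    """Drop any retrieved chunk that contains an injection phrase or a secret."""
    safe = []
    for chunk in chunks:
        if any(p.search(chunk) for p in INJECTION_PATTERNS + SECRET_PATTERNS):
            continue
        safe.append(chunk)
    return safe
```

Because the filter runs client-side over already-retrieved text, no document content has to leave your environment for the check.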
AI-Rag-ChatBot is a complete example project built with RAGChat and Next.js 14. It uses Upstash Vector Database, Upstash QStash, and Upstash Redis, a dynamic webpage folder, middleware, and TypeScript, with the Vercel AI SDK for client-side hooks, Lucide-React for icons, and the Shadcn-UI and Next-UI libraries to extend TailwindCSS, deployed on Vercel.
RAGify is a modern chat application that provides accurate, hallucination-free answers by grounding responses in your documents. No more made-up information - if the answer isn't in your knowledge base, RAGify tells you so.
A powerful RAG tool that scrapes YouTube channel videos, extracts transcripts, and enables AI-powered chat interactions using Google's Gemini API.
An advanced, fully local, and GPU-accelerated RAG pipeline. Features a sophisticated LLM-based preprocessing engine, state-of-the-art Parent Document Retriever with RAG Fusion, and a modular, Hydra-configurable architecture. Built with LangChain, Ollama, and ChromaDB for 100% private, high-performance document Q&A.
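RAG Fusion, named in the entry above, typically issues several query variants and merges their ranked result lists with reciprocal rank fusion (RRF). A minimal, dependency-free sketch of just the fusion step (the document ids are made up for illustration; this is not that repo's API):

```python
def reciprocal_rank_fusion(ranked_lists, k=60):
    """Merge several ranked lists of document ids with RRF.

    Each document scores sum(1 / (k + rank)) over the lists it appears in,
    so documents ranked highly by many query variants float to the top.
    """
    scores = {}
    for ranking in ranked_lists:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Rankings returned for three hypothetical query variants:
results = [
    ["doc_a", "doc_b", "doc_c"],
    ["doc_b", "doc_a", "doc_d"],
    ["doc_b", "doc_c", "doc_a"],
]
print(reciprocal_rank_fusion(results))  # doc_b wins: near the top of all three lists
```

The constant `k` (60 is the value commonly used in the RRF literature) damps the gap between the top few ranks so a single first-place vote cannot dominate.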
🩺 RAGnosis — An AI-powered clinical reasoning assistant that retrieves real diagnostic notes (from MIMIC-IV-Ext-DiReCT) and generates explainable medical insights using Mistral-7B & FAISS, wrapped in a clean Gradio UI. ⚡ GPU-ready, explainable, and open-source.
LLMlight is a lightweight Python library for running local language models with built-in memory, retrieval, and prompt optimization, requiring minimal dependencies.
A RAG-based retrieval system for air pollution topics using LangChain and ChromaDB.
RAG Mini Project — Retrieval‑Augmented Generation chatbot with FastAPI backend (Docker on Hugging Face Spaces) and Streamlit frontend (Render), featuring document ingestion, vector search, and LLM‑powered answers
A comprehensive, hands-on tutorial repository for learning and mastering LangChain - the powerful framework for building applications with Large Language Models (LLMs). This codebase provides a structured learning path with practical examples covering everything from basic chat models to advanced AI agents, organized in a progressive curriculum.
🚀 Build a production-ready Agentic RAG system with LangGraph using minimal code and streamline your AI development process.
An AI-powered HR assistant that uses Retrieval-Augmented Generation (RAG) with FastAPI & Streamlit to answer employee queries, search profiles, and simplify HR resource management
Production-grade Retrieval-Augmented Generation (RAG) backend in TypeScript with Express.js, PostgreSQL, and Sequelize — featuring OpenAI-powered embeddings, LLM orchestration, and a complete data-to-answer pipeline.
Modular RAG framework with semantic chunking, hybrid retrieval, and reranking. Built to be simple (3-line setup) but production-ready.
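Hybrid retrieval, as mentioned above, usually blends a sparse lexical score with a dense vector similarity. A toy sketch of the score-combination step, assuming precomputed embeddings and using term overlap as a stand-in for a real BM25 scorer (all names here are illustrative, not the framework's API):

```python
import math

def keyword_score(query, doc):
    """Crude lexical score: fraction of query terms present in the doc."""
    q_terms = set(query.lower().split())
    d_terms = set(doc.lower().split())
    return len(q_terms & d_terms) / max(len(q_terms), 1)

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def hybrid_rank(query, docs, query_vec, doc_vecs, alpha=0.5):
    """Rank docs by alpha * dense similarity + (1 - alpha) * lexical score."""
    scored = []
    for doc, vec in zip(docs, doc_vecs):
        score = alpha * cosine(query_vec, vec) + (1 - alpha) * keyword_score(query, doc)
        scored.append((score, doc))
    return [doc for _, doc in sorted(scored, reverse=True)]
```

A reranking stage would then rescore only the top few results of `hybrid_rank` with a heavier cross-encoder model.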
Repository for the take-home midterm of CENG 543 Information Retrieval Course
Research & Education oriented LangChain RAG framework (5P Principles + EUQS quality metrics + Chrono Vision design). Provides structured output (TL;DR, reasons, sources, counterfactuals, next steps, JSON log).
📄 QuestRAG: AI-powered PDF Question Answering & Summarizer Bot using LangChain, Flan-T5, and Streamlit: A GenAI mini-project that allows users to upload research PDFs, ask questions, and get intelligent summaries using Retrieval-Augmented Generation (RAG) with locally hosted Hugging Face models.
The RAG Pipeline Utils library offers a production-ready, modular framework for building retrieval-augmented generation (RAG) pipelines — plug in custom loaders, embedders, retrievers and LLMs, and deploy secure, observable, high-performing workflows with ease.
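The plug-in design described above (swap loaders, embedders, retrievers, and LLMs behind stable interfaces) can be sketched with structural typing. The class and method names below are assumptions for illustration, not the library's actual API:

```python
from typing import List, Protocol

class Embedder(Protocol):
    def embed(self, text: str) -> List[float]: ...

class Retriever(Protocol):
    def retrieve(self, query_vec: List[float], top_k: int) -> List[str]: ...

class LLM(Protocol):
    def generate(self, prompt: str) -> str: ...

class RAGPipeline:
    """Wires interchangeable components into a single data-to-answer path."""

    def __init__(self, embedder: Embedder, retriever: Retriever, llm: LLM):
        self.embedder = embedder
        self.retriever = retriever
        self.llm = llm

    def answer(self, question: str, top_k: int = 3) -> str:
        vec = self.embedder.embed(question)                  # 1. embed the query
        context = "\n".join(self.retriever.retrieve(vec, top_k))  # 2. fetch chunks
        return self.llm.generate(                            # 3. grounded generation
            f"Context:\n{context}\n\nQuestion: {question}"
        )
```

Because each dependency is a `Protocol`, any object with the right method shape plugs in, which also makes the pipeline trivial to unit-test with stubs.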
Research FlowStream — multi‑agent research assistant with Streamlit frontend and FastAPI backend, leveraging LLMs and Qdrant for retrieval, deployed on Render (UI) and Hugging Face Spaces (API)