🤖 Note: This project was fully generated and developed using GitHub Copilot, demonstrating the power of AI-assisted development.
Are you looking for a simple way to ask questions about your codebase using the inference provider of your choice, without being locked into a specific service? This tool is a way to achieve exactly that!
- Production-ready RAG backend with per-project persistent storage
- PyCharm/IDE integration via REST API (see REST_API.md and the sketch after this list)
- Per-project databases: Each project gets its own isolated SQLite database
- Indexes files, computes embeddings using an OpenAI-compatible embedding endpoint
- Stores vector embeddings in SQLite using sqlite-vector for fast semantic search
- Analysis runs asynchronously (FastAPI BackgroundTasks) so the UI remains responsive
- Minimal web UI for starting analysis and asking questions (semantic search + coding model)
- Health check and monitoring endpoints for production deployment
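
If you want to call the backend directly (for example from a script or a custom IDE integration), the REST API works with any HTTP client. The sketch below uses Python's `requests` library; the endpoint names, payloads, and the default `localhost:8000` address are illustrative assumptions, so check REST_API.md for the actual routes and schemas.

```python
# Hypothetical routes and payloads for illustration; see REST_API.md for the real API.
import requests

BASE_URL = "http://localhost:8000"  # assumed default host/port

# Kick off asynchronous analysis of a project (runs as a FastAPI background task)
resp = requests.post(f"{BASE_URL}/analyze", json={"project_path": "/path/to/my/project"})
print(resp.json())

# Ask a question once indexing has finished (semantic search + coding model)
resp = requests.post(
    f"{BASE_URL}/ask",
    json={
        "project_path": "/path/to/my/project",
        "question": "Where is the database connection configured?",
    },
)
print(resp.json())
```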
A full-featured PyCharm/IntelliJ IDEA plugin is available:
- Download: Get the latest plugin from Releases
- Per-Project Indexing: Automatically indexes current project
- Secure API Keys: Stores credentials in IDE password safe
- Real-time Responses: Streams answers from your coding model
- File Navigation: Click retrieved files to open in editor
- Progress Indicators: Visual feedback during indexing
See ide-plugins/README.md for building and installation instructions.
- Python 3.8+ (3.11+ recommended for the built-in tomllib)
- Git (optional, if you clone the repo)
- If you use Astral uv, install/configure `uv` according to the official docs: https://docs.astral.sh/uv/
First step: create your .env file (copy .env.example -> .env and edit it)
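
As a rough illustration, a .env for an OpenAI-compatible provider might look like the sketch below. The variable names here are hypothetical placeholders; treat .env.example as the source of truth for the keys the server actually reads.

```env
# Hypothetical keys for illustration; use the names from .env.example
OPENAI_BASE_URL=https://api.openai.com/v1   # or your self-hosted, OpenAI-compatible endpoint
OPENAI_API_KEY=sk-your-key-here
EMBEDDING_MODEL=text-embedding-3-small
CODING_MODEL=gpt-4o-mini
```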
- Follow Astral uv installation instructions first: https://docs.astral.sh/uv/
- Typical flow (after `uv` is installed and you are in the project directory):
```bash
uv pip install -r pyproject.toml
uv run python ./main.py
```
Notes:
- The exact `uv` subcommands depend on the uv version/configuration. Check the Astral uv docs for the exact syntax for your uv CLI release. The analyzer only needs a Python executable in the venv to run `python -m pip list --format=json` (see the sketch below); `uv` typically provides or creates that venv.
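
For context on that note, dependency discovery only has to shell out to the venv's interpreter and parse the JSON that `pip list` prints. The snippet below is a rough sketch of that idea, not the project's actual implementation:

```python
# Rough sketch of the idea described above; not the project's actual code.
import json
import subprocess

def list_installed_packages(python_executable):
    """Run `python -m pip list --format=json` with the given interpreter and parse the output."""
    result = subprocess.run(
        [python_executable, "-m", "pip", "list", "--format=json"],
        capture_output=True,
        text=True,
        check=True,
    )
    return json.loads(result.stdout)

# Example: inspect the packages installed in a local venv
for package in list_installed_packages(".venv/bin/python"):
    print(f"{package['name']}=={package['version']}")
```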
- Create a virtual environment and install the dependencies listed in `pyproject.toml` with your preferred tool.
```bash
# create venv
python -m venv .venv

# activate (UNIX)
source .venv/bin/activate

# activate (Windows PowerShell)
.venv\Scripts\Activate.ps1

# install dependencies
uv pip install -r pyproject.toml

# run the server
python ./main.py
```
Or, with Poetry:

```bash
poetry install
poetry run python main.py
```
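
Once the server is running (with any of the options above), you can do a quick liveness check against the health endpoint mentioned in the feature list. The `/health` path and port below are assumptions for illustration; use the monitoring endpoints the server actually exposes.

```python
# Quick liveness check; the /health path and default port are assumptions.
import requests

try:
    resp = requests.get("http://localhost:8000/health", timeout=5)
    resp.raise_for_status()
    print("Server is up:", resp.json())
except requests.RequestException as exc:
    print("Server not reachable yet:", exc)
```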