A simple CLI program to chat with multiple LLM providers (e.g., ChatGPT and Claude) through a unified interface.
- Unified interface for multiple LLM providers
- Support for ChatGPT (OpenAI) and Claude (Anthropic)
- Conversation history maintained throughout the session
- Spinner animation while waiting for responses
- Easy model selection
- Have an OpenAI API key. Refer to the OpenAI website for more info.
- Add balance to your account.
- Store your API key in a safe place, for example, `~/.api_key_openai_1`.
- Add this line to your `~/.zshrc` or `~/.bashrc`:

  ```sh
  export OPENAI_API_KEY="$(cat $HOME/.api_key_openai_1 2>/dev/null || echo '')"
  ```

- Have an Anthropic API key. Refer to the Anthropic website for more info.
- Add balance to your account.
- Store your API key in a safe place, for example, `~/.api_key_anthropic_1`.
- Add this line to your `~/.zshrc` or `~/.bashrc`:

  ```sh
  export ANTHROPIC_API_KEY="$(cat $HOME/.api_key_anthropic_1 2>/dev/null || echo '')"
  ```

- Remember to source your shell config so the API keys are loaded:

  ```sh
  source ~/.zshrc   # or: source ~/.bashrc
  ```
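Optionally, you can confirm the keys are visible to Python before starting a chat. The snippet below is a small sketch, not part of this repo; it only assumes the program reads `OPENAI_API_KEY` and `ANTHROPIC_API_KEY` from the environment, as the exports above suggest:

```python
# check_keys.py -- hypothetical helper, not included in this repo.
# Verifies that the keys exported above are present in the environment,
# which is how a Python process would see them.
import os
import sys

REQUIRED_KEYS = ["OPENAI_API_KEY", "ANTHROPIC_API_KEY"]

missing = [name for name in REQUIRED_KEYS if not os.environ.get(name)]
if missing:
    sys.exit(f"Missing or empty: {', '.join(missing)} -- did you source your shell rc file?")
print("Both API keys are set.")
```
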
To set up the program:

- Clone this repo to your computer, then `cd` into your local repo:

  ```sh
  git clone <repo-url>
  cd <local-repo>
  ```

- Create a Python virtual environment for this repo and activate it:

  ```sh
  python3 -m venv .env
  source .env/bin/activate  # On Windows: .env\Scripts\activate
  ```

- Install dependencies:

  ```sh
  pip install -r requirements.txt
  ```

- Make the chat program executable:

  ```sh
  chmod u+x chat.py
  ```

Chat with ChatGPT (default model: gpt-5):

```sh
./chat.py --provider chatgpt
```

Chat with Claude (default model: claude-sonnet-4-5):

```sh
./chat.py --provider claude
```

Specify a custom model:

```sh
./chat.py --provider chatgpt --model gpt-4o
./chat.py --provider claude --model claude-sonnet-4
```

Adjust maximum tokens:

```sh
./chat.py --provider chatgpt --max-tokens 2048
```

Adjust temperature (randomness):

```sh
./chat.py --provider chatgpt --temperature 0.7
./chat.py --provider claude --temperature 0.2    # More deterministic
./chat.py --provider chatgpt --temperature 1.0   # More creative
```

See all options:

```sh
./chat.py -h
```

Options:
- `-p, --provider` (required): Choose the provider (`chatgpt` or `claude`)
- `-m, --model` (optional): Model name (uses the provider default if not specified)
- `--max-tokens` (optional): Maximum tokens in the response (default: 1024)
- `--temperature` (optional): Sampling temperature 0.0-1.0; higher is more random (default: 0.5)
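For reference, these flags map onto a standard `argparse` setup. The sketch below is illustrative only and simply mirrors the defaults listed above; the actual argument handling in `chat.py` may differ:

```python
# Illustrative sketch of the CLI surface described above; not the actual chat.py.
import argparse

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(description="Chat with an LLM provider")
    parser.add_argument("-p", "--provider", required=True,
                        choices=["chatgpt", "claude"],
                        help="LLM provider to use")
    parser.add_argument("-m", "--model", default=None,
                        help="Model name (provider default if omitted)")
    parser.add_argument("--max-tokens", type=int, default=1024,
                        help="Maximum tokens in the response")
    parser.add_argument("--temperature", type=float, default=0.5,
                        help="Sampling temperature (0.0-1.0); higher is more random")
    return parser

if __name__ == "__main__":
    args = build_parser().parse_args()
    print(args)
```
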
During a chat session:

- Type your prompts and press Enter to send
- Type `exit` or `quit` to end the conversation
- Press Ctrl+C to interrupt the conversation
To add a new LLM provider:
- Create a new provider class in `src/providers/` that inherits from `LLMProvider`
- Implement the required abstract methods: `send_message()` and `get_provider_name()`
- Register the provider in `src/config.py` in the `ProviderFactory.PROVIDERS` dictionary
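A minimal sketch of these three steps is shown below. The real `LLMProvider` base class and its method signatures live in this repo's source, so the stand-in base class, the `ExampleProvider` name, and the argument types here are assumptions for illustration, not the repo's actual API:

```python
# Hypothetical sketch of adding a provider; names not taken from the README are assumptions.
from abc import ABC, abstractmethod


class LLMProvider(ABC):
    """Stand-in for the repo's base class (the real one lives in src/providers/)."""

    @abstractmethod
    def get_provider_name(self) -> str: ...

    @abstractmethod
    def send_message(self, message: str) -> str: ...


class ExampleProvider(LLMProvider):
    """New provider skeleton: implement the two required abstract methods."""

    def get_provider_name(self) -> str:
        return "example"

    def send_message(self, message: str) -> str:
        # Call the new provider's API here and return the assistant's reply text.
        raise NotImplementedError("wire this up to the provider's SDK")


# Finally, register the class in src/config.py so it can be selected with --provider, e.g.:
# ProviderFactory.PROVIDERS["example"] = ExampleProvider
```
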