
Unified LLM Chat Interface

A simple CLI program to chat with multiple LLM providers (e.g., ChatGPT and Claude) through a unified interface.

Features

  • Unified interface for multiple LLM providers
  • Support for ChatGPT (OpenAI) and Claude (Anthropic)
  • Conversation history maintained throughout the session
  • Spinner animation while waiting for responses
  • Easy model selection

Prerequisites

For ChatGPT

  1. Obtain an OpenAI API key. Refer to the OpenAI website for more info.
  2. Add balance to your account.
  3. Store your API key in a safe place, for example, ~/.api_key_openai_1.
  4. Add this line to your ~/.zshrc or ~/.bashrc:
export OPENAI_API_KEY="$(cat $HOME/.api_key_openai_1 2>/dev/null || echo '')"

For Claude

  1. Obtain an Anthropic API key. Refer to the Anthropic website for more info.
  2. Add balance to your account.
  3. Store your API key in a safe place, for example, ~/.api_key_anthropic_1.
  4. Add this line to your ~/.zshrc or ~/.bashrc:
export ANTHROPIC_API_KEY="$(cat $HOME/.api_key_anthropic_1 2>/dev/null || echo '')"
  5. Remember to source the API keys after editing: source ~/.zshrc or source ~/.bashrc
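
To confirm the keys are visible after sourcing, a quick Python check like the one below can help (a minimal sketch; the variable names match the exports above):

import os

for key in ("OPENAI_API_KEY", "ANTHROPIC_API_KEY"):
    print(key, "set" if os.environ.get(key) else "MISSING")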

Installation

  1. Clone this repo to your computer, then cd into your local repo:
git clone <repo-url>
cd <local-repo>
  2. Create a Python virtual environment for this repo and activate it:
python3 -m venv .env
source .env/bin/activate  # On Windows: .env\Scripts\activate
  3. Install dependencies:
pip install -r requirements.txt
  4. Make the chat program executable:
chmod u+x chat.py

Usage

Basic Usage

Chat with ChatGPT (default model: gpt-5):

./chat.py --provider chatgpt

Chat with Claude (default model: claude-sonnet-4-5):

./chat.py --provider claude

Advanced Usage

Specify a custom model:

./chat.py --provider chatgpt --model gpt-4o
./chat.py --provider claude --model claude-sonnet-4

Adjust maximum tokens:

./chat.py --provider chatgpt --max-tokens 2048

Adjust temperature (randomness):

./chat.py --provider chatgpt --temperature 0.7
./chat.py --provider claude --temperature 0.2  # More deterministic
./chat.py --provider chatgpt --temperature 1.0  # More creative

Command-Line Options

./chat.py -h

Options:

  • -p, --provider (required): Choose provider (chatgpt or claude)
  • -m, --model (optional): Specify model name (uses provider default if not specified)
  • --max-tokens (optional): Maximum tokens in response (default: 1024)
  • --temperature (optional): Sampling temperature from 0.0 to 1.0; higher values are more random (default: 0.5)
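
These options correspond to a standard argparse setup along the following lines (a sketch of how the flags could be declared, not necessarily how chat.py actually does it):

import argparse

parser = argparse.ArgumentParser(description="Unified LLM chat CLI")
parser.add_argument("-p", "--provider", required=True,
                    choices=["chatgpt", "claude"],
                    help="LLM provider to use")
parser.add_argument("-m", "--model",
                    help="Model name; provider default if omitted")
parser.add_argument("--max-tokens", type=int, default=1024,
                    help="Maximum tokens in the response")
parser.add_argument("--temperature", type=float, default=0.5,
                    help="Sampling temperature, 0.0-1.0")
args = parser.parse_args()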

During Conversation

  • Type your prompts and press Enter to send
  • Type exit or quit to end the conversation
  • Press Ctrl+C to interrupt the conversation
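
Internally, a loop along these lines would implement this behavior (a hypothetical sketch; the real chat.py may structure it differently):

while True:
    try:
        prompt = input("You: ").strip()
    except KeyboardInterrupt:
        print()  # move past the ^C before exiting
        break
    if prompt.lower() in ("exit", "quit"):
        break
    # Send the prompt to the selected provider, append the exchange
    # to the conversation history, and print the reply.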

Extending the System

To add a new LLM provider (a sketch follows these steps):

  1. Create a new provider class in src/providers/ that inherits from LLMProvider
  2. Implement the required abstract methods: send_message() and get_provider_name()
  3. Register the provider in src/config.py in the ProviderFactory.PROVIDERS dictionary
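
For example, a hypothetical Gemini provider might look like this (the import path, base-class location, and send_message() signature are assumptions based on the steps above; check the actual LLMProvider definition in src/providers/):

from src.providers.base import LLMProvider  # assumed location of the base class

class GeminiProvider(LLMProvider):
    def get_provider_name(self) -> str:
        return "gemini"

    def send_message(self, message: str) -> str:
        # Call the new provider's API here and return the reply text.
        raise NotImplementedError

Then register it in src/config.py (or add the entry directly inside the PROVIDERS dict literal):

ProviderFactory.PROVIDERS["gemini"] = GeminiProvider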
