tokdu (Token Disk Usage) is a terminal-based utility that helps you analyze and visualize token usage in your codebase. Similar to the classic `du` (disk usage) command, tokdu shows you how many tokens your files and directories consume, which is essential when working with Large Language Models (LLMs) that have token limits.
- 📊 Visualize token distribution across your project
- 🚀 Fast, asynchronous scanning with caching
- 🔍 Respects `.gitignore` rules
- ⏩ Skips binary files automatically
- 🧩 Uses OpenAI's `tiktoken` for accurate token counting
- 🔄 Supports Google's Gemini local tokenization
- 🔮 Supports Anthropic's Claude API tokenization
- 🎛️ Support for different models' tokenizers
- ⚙️ Cross-platform configuration system
Install from PyPI:

```bash
pip install tokdu
```

For Gemini tokenization support:

```bash
pip install "tokdu[gemini]"
```

For Anthropic Claude tokenization support:

```bash
pip install "tokdu[anthropic]"
```

Or install from source:
```bash
git clone https://github.com/unitythemaker/tokdu.git
cd tokdu
pip install .
```

Basic usage:
```bash
tokdu
```

This will start tokdu in the current directory.
Specify a starting directory:
```bash
tokdu /path/to/project
```

Using the explicit scan command:

```bash
tokdu scan /path/to/project
```

Use a specific tiktoken encoding:

```bash
tokdu --encoding cl100k_base
```

Use tokenization based on a specific model:

```bash
tokdu --model gpt-4o
```

Use Google's Gemini tokenizer:

```bash
tokdu --tokenizer gemini --model gemini-1.5-flash-001
```

Use Anthropic's Claude tokenizer (requires an API key):

```bash
tokdu --tokenizer anthropic --model claude-3-haiku-20240307
```

View the current configuration:
```bash
tokdu config --show
```

Set the default tokenizer type:

```bash
tokdu config --tokenizer gemini
```

Set the default model (this will clear any encoding setting):

```bash
tokdu config --model gemini-1.5-flash-001
```

Set the default encoding (this will clear any model setting):

```bash
tokdu config --encoding cl100k_base
```

Note: The model and encoding settings are mutually exclusive. Setting one will automatically clear the other to avoid confusion about which one takes precedence.
Configuration is stored in a platform-specific location:
- Windows: `C:\Users\<Username>\AppData\Local\tokdu\config.ini`
- macOS: `~/Library/Application Support/tokdu/config.ini`
- Linux: `~/.config/tokdu/config.ini`
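These locations follow the usual `appdirs` conventions. As a minimal sketch of how such a path can be resolved and read (the `defaults` section and its keys below are hypothetical illustrations, not tokdu's documented config format):

```python
# Minimal sketch: resolve and read a per-user config.ini with appdirs.
# The "defaults" section and key names are hypothetical.
import configparser
import os

from appdirs import user_config_dir

# appauthor=False avoids the extra vendor directory on Windows;
# the exact arguments tokdu passes are an assumption.
config_path = os.path.join(user_config_dir("tokdu", appauthor=False), "config.ini")

config = configparser.ConfigParser()
config.read(config_path)  # silently skips a missing file

tokenizer = config.get("defaults", "tokenizer", fallback="tiktoken")
model = config.get("defaults", "model", fallback="gpt-4o")
print(config_path, tokenizer, model)
```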
- ↑/↓ or j/k: Navigate up/down
- Enter: Open selected directory
- Backspace: Go to parent directory
- Page Up/Down: Scroll by page
- q: Quit
Large Language Models like GPT-4o and Gemini have context window limits measured in tokens. When embedding code in prompts or using tools and IDEs like GitHub Copilot or Zed, understanding your project's token usage helps you:
- Stay within context window limits
- Optimize prompts for LLMs
- Identify areas to trim when sharing code with AI assistants
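To get a feel for what tokdu measures, here is a small standalone snippet (independent of tokdu itself) that counts the tokens in one file with tiktoken; the file name is just an example:

```python
# Count the tokens in a single file -- the unit tokdu aggregates
# per file and per directory. Standalone example, not tokdu code.
import tiktoken

# gpt-4o maps to the o200k_base encoding; cl100k_base suits GPT-4.
encoding = tiktoken.encoding_for_model("gpt-4o")

with open("main.py", encoding="utf-8") as f:  # example file name
    text = f.read()

# disallowed_special=() keeps encode() from raising if the file
# happens to contain special-token strings like "<|endoftext|>".
print(len(encoding.encode(text, disallowed_special=())), "tokens")
```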
- OpenAI Tiktoken: Used for OpenAI models (GPT-3.5, GPT-4, etc.)
- Google Gemini: Local tokenization for Gemini models (requires `google-cloud-aiplatform[tokenization]>=1.57.0`)
- Anthropic Claude: API-based tokenization for Claude models (requires `anthropic>=0.7.0` and an API key)
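The Gemini backend can run offline because the Vertex AI SDK ships the tokenizer locally. A minimal sketch of that underlying library call (an illustration of the SDK, not tokdu's own code):

```python
# Local Gemini token counting via the Vertex AI SDK's preview
# tokenization module; no network call or credentials required.
from vertexai.preview import tokenization

tokenizer = tokenization.get_tokenizer_for_model("gemini-1.5-flash-001")
result = tokenizer.count_tokens("def main():\n    pass\n")
print(result.total_tokens)
```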
- Uses OpenAI's `tiktoken` library for accurate token counting with OpenAI models
- Supports Google's Vertex AI SDK for local Gemini tokenization
- Supports Anthropic's API for Claude model tokenization
- Tokenizers can be specified with the `--encoding`, `--model`, or `--tokenizer` flags
- Uses `appdirs` to manage cross-platform configuration
- Defaults to values from the config file, or `tiktoken` and `gpt-4o` if not configured
- Scans directories asynchronously for better performance (sketched after this list)
- Caches results to avoid repeated scans
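As a rough sketch of that scanning pipeline: a simplified, synchronous illustration under assumptions about tokdu's internals (tokdu's real scanner is asynchronous and cached, and all names here are hypothetical):

```python
# Simplified, synchronous sketch of gitignore-aware, binary-skipping
# token scanning. Not tokdu's actual implementation.
import os

import pathspec
import tiktoken

encoding = tiktoken.encoding_for_model("gpt-4o")

def load_gitignore(root: str) -> pathspec.PathSpec:
    """Build a PathSpec from the project's .gitignore, if present."""
    lines: list[str] = []
    gitignore = os.path.join(root, ".gitignore")
    if os.path.isfile(gitignore):
        with open(gitignore, encoding="utf-8", errors="ignore") as f:
            lines = f.read().splitlines()
    return pathspec.PathSpec.from_lines("gitwildmatch", lines)

def is_binary(path: str) -> bool:
    """Heuristic: files whose first bytes contain NUL are binary."""
    with open(path, "rb") as f:
        return b"\x00" in f.read(8192)

def scan(root: str) -> dict[str, int]:
    """Map each text file (by relative path) to its token count."""
    spec = load_gitignore(root)
    counts: dict[str, int] = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            rel = os.path.relpath(full, root)
            if spec.match_file(rel) or is_binary(full):
                continue  # respect .gitignore and skip binaries
            with open(full, encoding="utf-8", errors="ignore") as f:
                counts[rel] = len(encoding.encode(f.read(), disallowed_special=()))
    return counts

if __name__ == "__main__":
    # Print files largest-first, the order a du-style tool would show.
    for rel, n in sorted(scan(".").items(), key=lambda kv: -kv[1]):
        print(f"{n:>8}  {rel}")
```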
- Python 3
- pathspec
- appdirs
- curses (part of the Python standard library)
- tiktoken
- google-cloud-aiplatform[tokenization] (optional, for Gemini tokenization; requires cmake to be installed)
- anthropic (optional, for Claude tokenization)
License: MIT
Author: Halil Tezcan KARABULUT (@unitythemaker)
