# My AI Integration Learning Journey
Six months of learning, debugging, and figuring out how to work with multiple AI APIs. Started as a quick project and turned into a deep dive into modern AI development.
"I thought I'd quickly connect a few APIs and build something cool. Turns out it took 6 months of debugging and learning."
- Saurabh Pareek
> **💡 Learning with AI Assistance:** This project was built with significant help from AI tools (ChatGPT, Claude) to learn concepts, understand errors, and discover best practices. I researched, debugged, and tested everything myself, but AI helped me learn faster. The code I've kept is code I understand and can maintain.
What I Tried: Connect to Ollama for local AI. What Happened: Took three weeks to get it working reliably with proper async patterns.
Challenges:
- Learning API authentication from scratch
- Managing environment variables and secrets
- Basic error handling (crashed a lot initially)
Skills Gained: API authentication, env management, debugging, reading docs.
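
Here is a minimal sketch of the pattern this phase was about: load secrets with dotenv, call Ollama's local HTTP API with aiohttp, and return a message instead of crashing when the call fails. This is my illustration rather than the repo's actual agent code, and the `OLLAMA_MODEL` env key is a hypothetical name.

```python
# A sketch only: the real agent.py differs. Endpoint and env key are assumptions.
import asyncio
import os

import aiohttp
from dotenv import load_dotenv

load_dotenv()  # pull settings like OLLAMA_MODEL out of a local .env file


async def ask_ollama(prompt: str) -> str:
    payload = {
        "model": os.getenv("OLLAMA_MODEL", "llama3"),  # hypothetical env key
        "prompt": prompt,
        "stream": False,
    }
    try:
        async with aiohttp.ClientSession() as session:
            async with session.post(
                "http://localhost:11434/api/generate",  # Ollama's default local port
                json=payload,
                timeout=aiohttp.ClientTimeout(total=60),
            ) as resp:
                resp.raise_for_status()
                data = await resp.json()
                return data.get("response", "")
    except aiohttp.ClientError as exc:
        # Early versions just crashed here; a fallback message keeps the agent alive.
        return f"[Ollama unavailable: {exc}]"


if __name__ == "__main__":
    print(asyncio.run(ask_ollama("Say hello in one sentence.")))
```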
What I Tried: Add Google Gemini and Perplexity. What Happened: Each service taught me something different about APIs.
Challenges:
- Different auth methods for each service
- Hit rate limits (got blocked a few times)
- Handling different JSON response formats
Skills: API integration patterns, config management, better error handling and logging.
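
The rate-limit lesson generalizes across providers: back off and retry when a service answers HTTP 429. A hedged sketch of that pattern (my illustration, not the project's code; each provider's auth headers and JSON shape still differ):

```python
# Sketch of the backoff idea, not the project's code; works the same for any provider.
import asyncio

import aiohttp


async def post_with_backoff(session: aiohttp.ClientSession, url: str, payload: dict,
                            headers: dict, retries: int = 3) -> dict:
    delay = 1.0
    for attempt in range(retries + 1):
        async with session.post(url, json=payload, headers=headers) as resp:
            if resp.status == 429 and attempt < retries:
                # Honor Retry-After if the provider sends it, otherwise back off exponentially.
                delay = float(resp.headers.get("Retry-After", delay))
                await asyncio.sleep(delay)
                delay *= 2
                continue
            resp.raise_for_status()
            return await resp.json()  # each provider wraps its answer in a different JSON shape
    raise RuntimeError("retries exhausted")  # not reached; keeps type checkers happy
```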
What I Tried: Build a system to route queries to the right AI and remember conversations. What Happened: Created semantic search with TF-IDF, conversation memory, and intelligent routing.
Skills: Async programming, semantic search algorithms, context management, system design.
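
To make the TF-IDF part concrete, here is a small self-contained sketch of scoring past conversation turns against a new query with cosine similarity. It illustrates the technique only; the repo's `SimpleSemanticSearch` class is more elaborate.

```python
# Illustration of the technique only, not the SimpleSemanticSearch class in agent.py.
import math
import re
from collections import Counter


def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z0-9]+", text.lower())


def tfidf_vectors(docs: list[str]) -> list[dict[str, float]]:
    tokenized = [tokenize(d) for d in docs]
    df = Counter(term for toks in tokenized for term in set(toks))  # document frequency
    n = len(docs)
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        vectors.append({
            term: (count / len(toks)) * math.log((1 + n) / (1 + df[term]))
            for term, count in tf.items()
        })
    return vectors


def cosine(a: dict[str, float], b: dict[str, float]) -> float:
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    norm = math.sqrt(sum(w * w for w in a.values())) * math.sqrt(sum(w * w for w in b.values()))
    return dot / norm if norm else 0.0


history = [
    "how do I read a csv in pandas",
    "fix my asyncio event loop error",
    "write a haiku about rain",
]
query = "asyncio error when the loop blocks"
vecs = tfidf_vectors(history + [query])
scores = [cosine(vecs[-1], v) for v in vecs[:-1]]
print(history[scores.index(max(scores))])  # -> "fix my asyncio event loop error"
```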
- Local Ollama: Works for private/offline processing and general tasks
- Google Gemini: Fast responses, good for creative tasks
- Perplexity: Great for research questions
- Azure OpenAI (GPT-3.5 Turbo): Reliable, good for general tasks (temporarily disabled)
- 5 specialized agents (Assistant, Code Assistant, Data Analyst, Creative Writer, Research Assistant)
- Keyword-based intelligent routing (see the sketch after this list)
- Automatic agent selection based on query
- Performance tracking per agent
- TF-IDF semantic search for conversation history
- Hybrid context retrieval (semantic + recent conversations)
- Persistent memory across sessions (pickle serialization)
- Automatic relevance scoring
- Async/await architecture for parallel API calls
- Automatic API fallbacks and error handling
- Rate limiting and quota management
- Clean modular code structure
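
The keyword-based routing listed above can be sketched in a few lines. The agent names and keyword sets below are illustrative assumptions, not the actual routing table in `agent.py`:

```python
# Illustrative agent names and keywords only; the real routing table in agent.py differs.
AGENT_KEYWORDS = {
    "code_assistant": {"code", "bug", "function", "error", "python"},
    "data_analyst": {"csv", "dataframe", "chart", "statistics"},
    "creative_writer": {"story", "poem", "haiku", "slogan"},
    "research_assistant": {"research", "compare", "sources", "latest"},
}


def route(query: str, default: str = "assistant") -> str:
    words = set(query.lower().split())
    scores = {agent: len(words & kw) for agent, kw in AGENT_KEYWORDS.items()}
    best_agent, best_score = max(scores.items(), key=lambda item: item[1])
    return best_agent if best_score > 0 else default  # no keyword hit -> general assistant


print(route("find the bug in this python function"))  # -> code_assistant
print(route("what's the weather like today"))         # -> assistant
```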

```bash
# Clone and install
git clone https://github.com/SaurabhCodesAI/ENTAERA.git
cd ENTAERA
pip install -r requirements-local-models.txt

# Set up your .env file with API keys
# See .env.example for required keys

# Run the agent
ollama serve    # Start Ollama first
python agent.py
```

Available Commands:

- `/agents` - List all specialized agents
- `/memory [n]` - Show recent conversations
- `/search <query>` - Semantic search history
- `/stats` - Usage statistics
- `/clear` - Clear conversation memory
- `/quit` - Exit
Current Status: a working agent with semantic search, context management, and multi-provider support.
- Python: From basic scripts to structured applications (async/await, dataclasses, type hints)
- API Integration: REST APIs, authentication, headers, rate limiting
- Semantic Search: TF-IDF algorithm, tokenization, cosine similarity
- Error Handling: Building resilient systems with fallbacks and retries
- Config Management: Organizing settings and secrets properly with dotenv
- Async Programming: asyncio, concurrent API calls, semaphores (see the sketch after this list)
- Git: Professional development workflow
- Research: Learning from documentation and examples
- Debugging: Systematic approach to finding issues (lots of print statements!)
- System Thinking: Understanding how components work together
- Persistence: Working through complex problems (6 months' worth!)
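
For the async programming point above, this is roughly the shape of fanning calls out to several providers while a semaphore caps concurrency. The coroutine is a placeholder, not a real API call:

```python
# Placeholder coroutine instead of real provider calls; shows the concurrency pattern only.
import asyncio
import random


async def call_provider(name: str, prompt: str, sem: asyncio.Semaphore) -> str:
    async with sem:  # at most two requests in flight at once
        await asyncio.sleep(random.uniform(0.1, 0.3))  # stand-in for a network round trip
        return f"{name}: answered '{prompt}'"


async def main() -> None:
    sem = asyncio.Semaphore(2)
    providers = ["ollama", "gemini", "perplexity"]
    results = await asyncio.gather(
        *(call_provider(p, "summarize this note", sem) for p in providers),
        return_exceptions=True,  # one failing provider should not sink the others
    )
    for result in results:
        print(result)


asyncio.run(main())
```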
- Enhanced conversation summarization for long histories
- Web API with FastAPI for remote access
- Better error recovery with circuit breakers
- Performance dashboard
- Vector embeddings with sentence-transformers
- Multi-user support with authentication
- Streaming responses in real-time
- Plugin system for custom agents
```
ENTAERA/
├── agent.py                        # Production agent
│   ├── SimpleSemanticSearch        # TF-IDF search engine
│   ├── ConversationMemory          # Persistent memory
│   └── Multi-provider integration  # Ollama, Azure, Gemini, Perplexity
├── src/entaera/core/               # Advanced framework modules
│   ├── semantic_search.py          # Vector embeddings
│   ├── agent_orchestration.py      # Multi-agent coordination
│   ├── conversation_memory.py      # Memory management
│   ├── context_retrieval.py        # Context selection
│   ├── code_analysis.py            # Code complexity metrics
│   ├── code_execution.py           # Safe sandboxed execution
│   ├── code_generation.py          # AI code generation
│   └── code_optimization.py        # Performance optimization
├── requirements-local-models.txt   # Dependencies
├── .env                            # API keys (you create this)
└── conversation_memory.pkl         # Auto-generated memory
```
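
As a rough idea of how `conversation_memory.pkl` keeps context across sessions, here is a pickle-backed sketch; the real `ConversationMemory` class handles more than this:

```python
# Sketch of the idea behind conversation_memory.pkl; the real ConversationMemory class differs.
import pickle
from pathlib import Path

MEMORY_FILE = Path("conversation_memory.pkl")


def load_memory() -> list[dict]:
    if MEMORY_FILE.exists():
        with MEMORY_FILE.open("rb") as fh:
            return pickle.load(fh)
    return []


def save_memory(turns: list[dict]) -> None:
    with MEMORY_FILE.open("wb") as fh:
        pickle.dump(turns, fh)


turns = load_memory()
turns.append({"role": "user", "content": "remember that my project is called ENTAERA"})
turns.append({"role": "assistant", "content": "Noted."})
save_memory(turns)  # survives restarts, so context carries across sessions
print(f"{len(turns)} turns stored")
```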
---
## 🤝 Contributing
Contributions are welcome! Whether it's:
- 🐛 Bug reports
- 💡 Feature suggestions
- 📝 Documentation improvements
- 🔧 Code contributions
See [CONTRIBUTING.md](./CONTRIBUTING.md) for guidelines.
---
## 🛠️ Tech Stack
<div align="center">

**Core Libraries:**
`asyncio` • `aiohttp` • `pickle` • `dotenv`
</div>
---
## 📄 License
This project is licensed under the **MIT License** - see the [LICENSE](./LICENSE) file for details.
<details>
<summary>📜 Click to view license summary</summary>
MIT License - Free to use, modify, and distribute.

✅ Commercial use ✅ Modification ✅ Distribution ✅ Private use
</details>
---
**Key Files:**
* `agent.py` - Main production agent
* `src/entaera/core/` - Advanced framework modules
* `requirements-local-models.txt` - Dependencies
* `test_api_connections.py` - API testing
---
## Final Thoughts
This project represents six months of learning and problem solving. Not perfect, but it works.
### What This Shows:
Curiosity + Persistence = Real Results.
The code works | The concepts are solid | The foundation is strong
Built with effort and lots of coffee by Saurabh