hmusavi commented on Dec 1, 2025

This pull request introduces two new community contribution templates for AI client usage and a multi-model chatbot debate, including sample environment configuration files and demonstration notebooks/scripts. The changes provide clear, reusable templates for connecting to various AI APIs (OpenAI, Ollama, Gemini), and a structured example of orchestrating a debate between multiple chatbots with distinct personalities and models. The most important changes are grouped below:

AI Client Template (Snarky Personality):

  • Added api_client_template_snarky.ipynb and api_client_template_snarky.py, which demonstrate loading API credentials from environment variables, selecting between OpenAI, Ollama, or Gemini based on configuration, and interacting with the chosen model using snarky system/user prompts. Includes helper functions for prompt formatting and output display (see the provider-selection sketch after this list).
  • Provided a sample .env file (sample.env) for easy configuration and switching between AI providers.
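
The selection logic can be pictured roughly as follows. This is a minimal sketch, not the template's actual code: the variable names (AI_PROVIDER, OPENAI_API_KEY, GOOGLE_API_KEY, MODEL) and default model names are assumptions, so check sample.env for the names the templates really use.

```python
import os
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()

# Assumed env variable names; the real sample.env may differ.
provider = os.getenv("AI_PROVIDER", "openai").lower()

def get_client_and_model():
    """Return an OpenAI-compatible client plus a model name for the chosen provider."""
    if provider == "openai":
        return OpenAI(api_key=os.getenv("OPENAI_API_KEY")), os.getenv("MODEL", "gpt-4o-mini")
    if provider == "ollama":
        # Ollama exposes an OpenAI-compatible endpoint on localhost:11434.
        return OpenAI(base_url="http://localhost:11434/v1", api_key="ollama"), os.getenv("MODEL", "llama3.2")
    if provider == "gemini":
        # Gemini also offers an OpenAI-compatible endpoint.
        return OpenAI(
            base_url="https://generativelanguage.googleapis.com/v1beta/openai/",
            api_key=os.getenv("GOOGLE_API_KEY"),
        ), os.getenv("MODEL", "gemini-1.5-flash")
    raise ValueError(f"Unknown AI_PROVIDER: {provider}")

client, model = get_client_and_model()
response = client.chat.completions.create(
    model=model,
    messages=[
        {"role": "system", "content": "You are a relentlessly snarky assistant."},
        {"role": "user", "content": "Explain environment variables in one paragraph."},
    ],
)
print(response.choices[0].message.content)
```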

Chatbot Debate Example:
Introduces a community contribution featuring a chatbot debate between three personas using different LLMs via Ollama. It includes a Jupyter notebook, a Python script, a markdown transcript, and a sample environment file for configuration. The example demonstrates multi-model conversational orchestration and debate logic for the week2/day1 homework assignment. The implementation builds each bot's message history from a list of (mode, prompt) tuples; the mode decides whether a prompt is added as a user message or as an assistant message (a minimal sketch of this follows the file list below).

  • Added chatbot_debate.py, which sets up a multi-participant debate simulation using different models (OpenAI, Ollama, Gemma), loads configuration from .env, checks API/server status, and orchestrates a round-based debate with distinct system prompts for each chatbot persona.
  • Included a markdown transcript (chatbot_debate.md) showing a sample debate on astrophysics, highlighting the conversational flow and model responses.
  • Provided a sample .env file (sample.env) for configuring model names, debate topic, and participant details.
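
The (mode, prompt) idea can be sketched like this. It is an illustrative outline only: the function name, the mode strings, and the persona text are assumptions, and chatbot_debate.py may structure the history differently.

```python
def build_messages(system_prompt, history):
    """Convert a list of (mode, prompt) tuples into a chat message list.

    A "user" mode adds a user message; anything else is treated as one of
    this bot's own earlier statements and becomes an assistant message.
    """
    messages = [{"role": "system", "content": system_prompt}]
    for mode, prompt in history:
        role = "user" if mode == "user" else "assistant"
        messages.append({"role": role, "content": prompt})
    return messages

# Each persona sees the other participants' statements as user turns and
# its own earlier statements as assistant turns.
history = [
    ("user", "Opening statement from the first debater."),
    ("assistant", "My previous rebuttal."),
    ("user", "Counterpoint from the second debater."),
]
print(build_messages("You are a pedantic astrophysicist.", history))
```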
