two new community contribution templates for AI client usage and a multi-model chatbot debate #1023
+1,133
−0
This pull request introduces two new community contribution templates for AI client usage and a multi-model chatbot debate, including sample environment configuration files and demonstration notebooks/scripts. The changes provide clear, reusable templates for connecting to various AI APIs (OpenAI, Ollama, Gemini), and a structured example of orchestrating a debate between multiple chatbots with distinct personalities and models. The most important changes are grouped below:
AI Client Template (Snarky Personality):
- Adds api_client_template_snarky.ipynb and api_client_template_snarky.py, which demonstrate loading API credentials from environment variables, selecting between OpenAI, Ollama, or Gemini based on configuration, and interacting with the chosen model using snarky system/user prompts. Includes helper functions for prompt formatting and output display. [1] [2]
- Adds a sample .env file (sample.env) for easy configuration and switching between AI providers.

Chatbot Debate Example:
Introduces a community contribution featuring a chatbot debate between three personas using different LLMs via Ollama. Includes a Jupyter notebook, a Python script, a markdown transcript, and a sample environment file for configuration. The example demonstrates multi-model conversational orchestration and debate logic, and serves as a solution to the week2/day1 homework assignment. The implementation builds the user prompt from a list of (mode, prompt) tuples; the mode in each tuple determines whether the prompt is treated as a user message or an assistant message.
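The tuple-based prompt construction described above might look something like the sketch below. The function and variable names are illustrative, not taken from the PR's actual code:

```python
def build_messages(system_prompt, history):
    """Convert (mode, prompt) tuples into an OpenAI-style message list.

    Each tuple's mode decides whether the text is attributed to the
    'user' or the 'assistant' role in the conversation history; anything
    that is not explicitly 'assistant' is treated as a user message.
    """
    messages = [{"role": "system", "content": system_prompt}]
    for mode, prompt in history:
        role = "assistant" if mode == "assistant" else "user"
        messages.append({"role": role, "content": prompt})
    return messages


# Hypothetical debate history: each chatbot sees its own past turns as
# 'assistant' messages and the other participants' turns as 'user' messages.
history = [
    ("user", "Opening statement: dark matter best explains galaxy rotation curves."),
    ("assistant", "Counterpoint: modified gravity also fits the observations."),
]
messages = build_messages("You are a debate participant.", history)
```

This shape lets the same history list be re-projected from each participant's point of view before every round, which is the core of the multi-model orchestration the summary describes.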
- chatbot_debate.py sets up a multi-participant debate simulation using different models (OpenAI, Ollama, Gemma), loads configuration from .env, checks API/server status, and orchestrates a round-based debate with distinct system prompts for each chatbot persona.
- A markdown transcript (chatbot_debate.md) shows a sample debate on astrophysics, highlighting the conversational flow and model responses.
- A sample .env file (sample.env) configures model names, the debate topic, and participant details.
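The environment-driven provider switch that both templates rely on could be sketched as follows. The variable names (AI_PROVIDER, OPENAI_API_KEY, GOOGLE_API_KEY) and the fallback behavior are assumptions for illustration; the actual keys in sample.env may differ:

```python
import os


def select_provider(env=None):
    """Pick an AI backend from environment-style configuration.

    Cloud providers (OpenAI, Gemini) are only selected when both the
    provider name and a matching API key are present; otherwise the
    function falls back to a local Ollama server, which needs no key.
    """
    if env is None:
        env = os.environ
    provider = env.get("AI_PROVIDER", "").lower()
    if provider == "openai" and env.get("OPENAI_API_KEY"):
        return {"provider": "openai", "base_url": "https://api.openai.com/v1"}
    if provider == "gemini" and env.get("GOOGLE_API_KEY"):
        return {"provider": "gemini",
                "base_url": "https://generativelanguage.googleapis.com"}
    # Default: local Ollama endpoint (OpenAI-compatible API, no key required).
    return {"provider": "ollama", "base_url": "http://localhost:11434/v1"}
```

Requiring both the provider name and its key before selecting a cloud backend means a half-configured .env degrades gracefully to the local server instead of failing at request time.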