diff --git a/week2/community-contributions/meesam-day3-CSR-Chatbot.ipynb b/week2/community-contributions/meesam-day3-CSR-Chatbot.ipynb new file mode 100644 index 000000000..56e371731 --- /dev/null +++ b/week2/community-contributions/meesam-day3-CSR-Chatbot.ipynb @@ -0,0 +1,243 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "bf6f3210", + "metadata": {}, + "source": [ + "---\n", + "\n", + "# EduGuide – Study Abroad Consultancy Chatbot\n", + "\n", + "EduGuide is an AI-powered education consultancy assistant designed to guide students through the entire process of studying abroad. It provides country/program recommendations, eligibility insights, cost estimates, scholarship advice, visa basics, and step-by-step application planning.\n", + "\n", + "## Features\n", + "\n", + "### 1. Personalized Study-Abroad Guidance\n", + "\n", + "EduGuide collects essential profile details such as:\n", + "\n", + "* Current country & citizenship\n", + "* Academic background (qualification, GPA)\n", + "* Intended level (Bachelor’s/Master’s/PhD)\n", + "* Field of study\n", + "* Budget\n", + "* Test scores (IELTS/TOEFL/PTE, GRE/GMAT)\n", + "* Preferred destinations\n", + "* Target intake (Fall/Spring etc.)\n", + "\n", + "### 2. Country & Program Recommendations\n", + "\n", + "* Suggests 3–5 suitable countries based on profile and goals\n", + "* Provides 5–10 matching programs\n", + "* Includes eligibility ranges (GPA, test scores, experience)\n", + "\n", + "### 3. Cost & Scholarship Insights\n", + "\n", + "* Estimated tuition + living expenses\n", + "* Funding options and likely scholarship eligibility\n", + "* Advice for budget-constrained applicants\n", + "\n", + "### 4. Visa & Documentation Support\n", + "\n", + "* Overview of visa types\n", + "* Common document checklists\n", + "* Practical guidance for smooth preparation\n", + "\n", + "### 5. Step-by-Step Timeline\n", + "\n", + "* Custom 6–12 week application timeline\n", + "* Tasks prioritized from profile completion to visa filing\n", + "\n", + "### 6. Ready-to-Use Templates\n", + "\n", + "* Email to admissions office\n", + "* SOP/Personal Statement prompts\n", + "* Academic CV outline\n", + "* Interview practice questions\n", + "\n", + "### 7. Safety & Accuracy\n", + "\n", + "EduGuide does **not** guarantee visas, admissions, or legal outcomes.\n", + "It encourages students to verify policies with official university and embassy sources.\n", + "\n", + "---\n", + "\n", + "## How It Works\n", + "\n", + "1. The system prompt initializes EduGuide with expertise in international education consultancy.\n", + "2. The chatbot gathers missing details to personalize its advice.\n", + "3. Students ask questions such as:\n", + "\n", + " * *Which country is best for Computer Science?*\n", + " * *Am I eligible for a UK Master’s program?*\n", + " * *What is the cost of studying in Canada?*\n", + " * *How do I write an SOP?*\n", + "4. EduGuide responds with structured, concise, and practical guidance.\n", + "\n", + "---\n", + "\n", + "## Setup Instructions\n", + "\n", + "1. Add the system prompt to your chatbot (OpenAI, n8n, FastAPI, WhatsApp bot, or any other platform).\n", + "2. Route all student queries to this model with the system prompt active.\n", + "3. (Optional) Store user profile data in your backend or CRM to provide continuity across chats.\n", + "\n", + "---\n", + "\n", + "## Example Query\n", + "\n", + "**Student:**\n", + "“I have a Bachelor’s in IT with 2.8 CGPA, IELTS 6.5, and want to study in Europe. My budget is low. 
Options?”\n", + "\n", + "**EduGuide Response Includes:**\n", + "\n", + "* Matchable countries (Germany, Poland, Finland, Hungary)\n", + "* Program shortlist\n", + "* Cost estimates\n", + "* Scholarships and tuition-free routes\n", + "* Visa basics\n", + "* A 6–12 week action plan\n", + "\n", + "---" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "id": "9cd276b2", + "metadata": {}, + "outputs": [], + "source": [ + "import os\n", + "from openai import OpenAI\n", + "import gradio as gr\n", + "from dotenv import load_dotenv" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "id": "81869861", + "metadata": {}, + "outputs": [], + "source": [ + "load_dotenv(override=True)\n", + "open_api_key = os.environ.get(\"OPENAI_API_KEY\")\n", + "MODEL=\"gpt-4.1-mini\"\n", + "openai = OpenAI()" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "id": "b219ae84", + "metadata": {}, + "outputs": [], + "source": [ + "system_message = \"\"\"You are EduGuide — an expert education-consultant chatbot for students planning to study abroad. Provide accurate, actionable guidance on country & program choice, entry requirements, application timelines, document checklists, tests (IELTS/TOEFL/PTE, GRE/GMAT), scholarships & funding, cost estimates (tuition + living), visa basics, and post-acceptance steps (housing, insurance, arrival).\n", + "\n", + "When advising, first collect minimal profile data needed: current country & citizenship, highest qualification + GPA, intended level (Bachelors/Masters/PhD/diploma), field of study, preferred destination(s), budget, English/test scores (if any), desired start term, timeline, and constraints (work experience, dependents, scholarship need). Request only missing items required to personalize advice.\n", + "\n", + "For each request produce:\n", + "- 3–5 recommended destination countries and why.\n", + "- 5–10 matching programs (one safety, one stretch) with typical eligibility and cutoff indicators.\n", + "- Estimated annual cost range (tuition + living) with currency and year.\n", + "- Relevant scholarship types and likely eligibility.\n", + "- Visa overview and common required documents.\n", + "- Prioritized next-step checklist and a tailored 6–12 week application timeline with deadlines.\n", + "- Ready-to-use templates: inquiry email to admissions, CV outline, SOP prompts, and 5 mock interview questions.\n", + "\n", + "When giving factual policy, cite sources or advise the user to verify on official university/embassy pages. If asked for legal/financial/visa guarantees or diagnoses, refuse and refer to qualified authorities. Keep responses concise, professional, and actionable. 
Ask follow-up questions only when necessary to deliver a correct, personalized plan.\"\"\"\n" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "id": "716db05f", + "metadata": {}, + "outputs": [], + "source": [ + "def chat(message, history):\n", + " history = [{\"role\":h[\"role\"], \"content\":h[\"content\"]} for h in history]\n", + " messages = [{\"role\": \"system\", \"content\": system_message}] + history + [{\"role\": \"user\", \"content\": message}]\n", + " stream = openai.chat.completions.create(model=MODEL, messages=messages, stream=True)\n", + " response = \"\"\n", + " for chunk in stream:\n", + " response += chunk.choices[0].delta.content or ''\n", + " yield response" + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "id": "047670e3", + "metadata": {}, + "outputs": [], + "source": [ + "\n", + "chatView = gr.ChatInterface(\n", + " fn=chat, \n", + " type='messages',\n", + " title=\"Welcome to EduGuide Consultancy 📖\",\n", + " description=\"EduGuide is an AI-powered education consultancy assistant designed to guide students through the entire process of studying abroad. It provides country/program recommendations, eligibility insights, cost estimates, scholarship advice, visa basics, and step-by-step application planning.\",\n", + " examples=[\"How to study in Canada?\", \"What are the best IT Universities in Austrailia\",\n", + " \"Best universities for Medical Sciences.\"],\n", + " )\n", + "\n", + " " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "eab3e607", + "metadata": {}, + "outputs": [], + "source": [ + "chatView.launch(inbrowser=True)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "43a3b996", + "metadata": {}, + "outputs": [], + "source": [ + "chatView.close()\n" + ] + }, + { + "cell_type": "markdown", + "id": "da1cdce6", + "metadata": {}, + "source": [ + "### Conclusion:\n", + "\n", + "This chatbot will really be helpful once we can embed some knowledge base in it." + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": ".venv", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.12.12" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/week2/community-contributions/meesam-day4-ChatBot-With-ToolCalling.ipynb b/week2/community-contributions/meesam-day4-ChatBot-With-ToolCalling.ipynb new file mode 100644 index 000000000..02f2d44a7 --- /dev/null +++ b/week2/community-contributions/meesam-day4-ChatBot-With-ToolCalling.ipynb @@ -0,0 +1,510 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "f6135303", + "metadata": {}, + "source": [ + "# Project - Airline AI Assistant Updated with Setting the Price of a return ticket to a city\n", + "\n", + "I have implemented the set price function used for setting the price of the return ticket to a city mentioned by the User." 
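+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "a1f0c3d2",
+   "metadata": {},
+   "source": [
+    "A minimal standalone sketch of the idea, simplified to an in-memory dict (the cells below define the real versions, eventually backed by SQLite): the new `set_ticket_price` tool is described to the model with a JSON schema, and our handler updates a price store.\n",
+    "\n",
+    "```python\n",
+    "# In-memory price store (the notebook later swaps this for SQLite)\n",
+    "ticket_prices = {\"london\": 799, \"paris\": 899}\n",
+    "\n",
+    "def set_ticket_price(city, price):\n",
+    "    # Upsert the return-ticket price for a city (case-insensitive key)\n",
+    "    ticket_prices[city.lower()] = price\n",
+    "    return f\"Price for {city} set to ${price}\"\n",
+    "\n",
+    "def get_ticket_price(city):\n",
+    "    price = ticket_prices.get(city.lower())\n",
+    "    return f\"Ticket price to {city} is ${price}\" if price is not None else \"No price data available\"\n",
+    "\n",
+    "# JSON schema describing the new tool to the model\n",
+    "set_price_function = {\n",
+    "    \"name\": \"set_ticket_price\",\n",
+    "    \"description\": \"Set the price of a return ticket to the destination city.\",\n",
+    "    \"parameters\": {\n",
+    "        \"type\": \"object\",\n",
+    "        \"properties\": {\n",
+    "            \"destination_city\": {\"type\": \"string\", \"description\": \"The city the customer wants to travel to\"},\n",
+    "            \"price\": {\"type\": \"number\", \"description\": \"The new return-ticket price\"}\n",
+    "        },\n",
+    "        \"required\": [\"destination_city\", \"price\"],\n",
+    "        \"additionalProperties\": False\n",
+    "    }\n",
+    "}\n",
+    "```\n",
+    "\n",
+    "For example, `set_ticket_price(\"Berlin\", 499)` followed by `get_ticket_price(\"Berlin\")` returns the updated price."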
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "cf0f8656", + "metadata": {}, + "outputs": [], + "source": [ + "import os\n", + "import json\n", + "from dotenv import load_dotenv\n", + "from openai import OpenAI\n", + "import gradio as gr" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "1cb61dde", + "metadata": {}, + "outputs": [], + "source": [ + "# Initialization\n", + "\n", + "load_dotenv(override=True)\n", + "\n", + "openai_api_key = os.getenv('OPENAI_API_KEY')\n", + "if openai_api_key:\n", + " print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n", + "else:\n", + " print(\"OpenAI API Key not set\")\n", + " \n", + "MODEL = \"gpt-4.1-mini\"\n", + "openai = OpenAI()\n", + "\n", + "# As an alternative, if you'd like to use Ollama instead of OpenAI\n", + "# Check that Ollama is running for you locally (see week1/day2 exercise) then uncomment these next 2 lines\n", + "# MODEL = \"llama3.2\"\n", + "# openai = OpenAI(base_url='http://localhost:11434/v1', api_key='ollama')\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "9319fe04", + "metadata": {}, + "outputs": [], + "source": [ + "system_message = \"\"\"\n", + "You are a helpful assistant for an Airline called FlightAI.\n", + "Give short, courteous answers, no more than 1 sentence.\n", + "Always be accurate. If you don't know the answer, say so.\n", + "\"\"\"" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "547a9ea6", + "metadata": {}, + "outputs": [], + "source": [ + "def chat(message, history):\n", + " history = [{\"role\":h[\"role\"], \"content\":h[\"content\"]} for h in history]\n", + " messages = [{\"role\": \"system\", \"content\": system_message}] + history + [{\"role\": \"user\", \"content\": message}]\n", + " response = openai.chat.completions.create(model=MODEL, messages=messages)\n", + " return response.choices[0].message.content\n", + "\n", + "gr.ChatInterface(fn=chat, type=\"messages\").launch()" + ] + }, + { + "cell_type": "markdown", + "id": "fd45d975", + "metadata": {}, + "source": [ + "## Tools\n", + "\n", + "Tools are an incredibly powerful feature provided by the frontier LLMs.\n", + "\n", + "With tools, you can write a function, and have the LLM call that function as part of its response.\n", + "\n", + "Sounds almost spooky.. we're giving it the power to run code on our machine?\n", + "\n", + "Well, kinda." 
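+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "b7e41c09",
+   "metadata": {},
+   "source": [
+    "Concretely, the round trip looks roughly like this. It is only a sketch of the pattern the cells below implement, and it assumes the `openai` client, `MODEL`, the `tools` list, a `messages` list, and `get_ticket_price` defined elsewhere in this notebook:\n",
+    "\n",
+    "```python\n",
+    "import json  # tool arguments arrive as a JSON string\n",
+    "\n",
+    "response = openai.chat.completions.create(model=MODEL, messages=messages, tools=tools)\n",
+    "\n",
+    "if response.choices[0].finish_reason == \"tool_calls\":\n",
+    "    assistant_msg = response.choices[0].message\n",
+    "    messages.append(assistant_msg)  # the assistant's request to call our tool\n",
+    "    for tool_call in assistant_msg.tool_calls:\n",
+    "        args = json.loads(tool_call.function.arguments)\n",
+    "        result = get_ticket_price(args[\"destination_city\"])\n",
+    "        messages.append({\"role\": \"tool\", \"content\": result, \"tool_call_id\": tool_call.id})\n",
+    "    # Second call: the model now sees the tool results and writes the user-facing reply\n",
+    "    response = openai.chat.completions.create(model=MODEL, messages=messages)\n",
+    "```\n",
+    "\n",
+    "The key point: the model never runs the function itself; it only tells us which function it wants called and with which arguments."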
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "49784cae", + "metadata": {}, + "outputs": [], + "source": [ + "# Let's start by making a useful function\n", + "\n", + "ticket_prices = {\"london\": \"$799\", \"paris\": \"$899\", \"tokyo\": \"$1400\", \"berlin\": \"$499\"}\n", + "\n", + "def get_ticket_price(destination_city):\n", + " print(f\"Tool called for city {destination_city}\")\n", + " price = ticket_prices.get(destination_city.lower(), \"Unknown ticket price\")\n", + " return f\"The price of a ticket to {destination_city} is {price}\"\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "2b2058ef", + "metadata": {}, + "outputs": [], + "source": [ + "get_ticket_price(\"London\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "32121eed", + "metadata": {}, + "outputs": [], + "source": [ + "# There's a particular dictionary structure that's required to describe our function:\n", + "\n", + "price_function = {\n", + " \"name\": \"get_ticket_price\",\n", + " \"description\": \"Get the price of a return ticket to the destination city.\",\n", + " \"parameters\": {\n", + " \"type\": \"object\",\n", + " \"properties\": {\n", + " \"destination_city\": {\n", + " \"type\": \"string\",\n", + " \"description\": \"The city that the customer wants to travel to\",\n", + " },\n", + " },\n", + " \"required\": [\"destination_city\"],\n", + " \"additionalProperties\": False\n", + " }\n", + "}\n", + "\n", + "set_price_function = {\n", + " \"name\": \"set_ticket_price\",\n", + " \"description\": \"Set the price of a return ticket to the destination city.\",\n", + " \"parameters\": {\n", + " \"type\": \"object\",\n", + " \"properties\": {\n", + " \"destination_city\": {\n", + " \"type\": \"string\",\n", + " \"description\": \"The city that the customer wants to travel to\",\n", + " },\n", + " \"price\": {\n", + " \"type\": \"number\",\n", + " \"description\": \"The price for the return ticket to the destination city.\"\n", + " } \n", + " },\n", + " \"required\": [\"destination_city\",\"price\"],\n", + " \"additionalProperties\": False\n", + " }\n", + "}" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "5c61cb57", + "metadata": {}, + "outputs": [], + "source": [ + "# And this is included in a list of tools:\n", + "\n", + "tools = [{\"type\": \"function\", \"function\": price_function},\n", + " {\"type\": \"function\", \"function\": set_price_function}]" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "f2e2730e", + "metadata": {}, + "outputs": [], + "source": [ + "tools" + ] + }, + { + "cell_type": "markdown", + "id": "f7ccc03e", + "metadata": {}, + "source": [ + "## Getting OpenAI to use our Tool\n", + "\n", + "There's some fiddly stuff to allow OpenAI \"to call our tool\"\n", + "\n", + "What we actually do is give the LLM the opportunity to inform us that it wants us to run the tool.\n", + "\n", + "Here's how the new chat function looks:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "e6514c8f", + "metadata": {}, + "outputs": [], + "source": [ + "def chat(message, history):\n", + " history = [{\"role\":h[\"role\"], \"content\":h[\"content\"]} for h in history]\n", + " messages = [{\"role\": \"system\", \"content\": system_message}] + history + [{\"role\": \"user\", \"content\": message}]\n", + " response = openai.chat.completions.create(model=MODEL, messages=messages, tools=tools)\n", + "\n", + " if response.choices[0].finish_reason==\"tool_calls\":\n", + " message = 
response.choices[0].message\n", + " response = handle_tool_call(message)\n", + " messages.append(message)\n", + " messages.append(response)\n", + " response = openai.chat.completions.create(model=MODEL, messages=messages)\n", + " \n", + " return response.choices[0].message.content" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "17a39858", + "metadata": {}, + "outputs": [], + "source": [ + "# We have to write that function handle_tool_call:\n", + "\n", + "def handle_tool_call(message):\n", + " tool_call = message.tool_calls[0]\n", + " if tool_call.function.name == \"get_ticket_price\":\n", + " arguments = json.loads(tool_call.function.arguments)\n", + " city = arguments.get('destination_city')\n", + " price_details = get_ticket_price(city)\n", + " response = {\n", + " \"role\": \"tool\",\n", + " \"content\": price_details,\n", + " \"tool_call_id\": tool_call.id\n", + " }\n", + " return response" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "2aa76afb", + "metadata": {}, + "outputs": [], + "source": [ + "gr.ChatInterface(fn=chat, type=\"messages\").launch()" + ] + }, + { + "cell_type": "markdown", + "id": "e34133aa", + "metadata": {}, + "source": [ + "## Let's make a couple of improvements\n", + "\n", + "Handling multiple tool calls in 1 response\n", + "\n", + "Handling multiple tool calls 1 after another" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "6b76959f", + "metadata": {}, + "outputs": [], + "source": [ + "def chat(message, history):\n", + " history = [{\"role\":h[\"role\"], \"content\":h[\"content\"]} for h in history]\n", + " messages = [{\"role\": \"system\", \"content\": system_message}] + history + [{\"role\": \"user\", \"content\": message}]\n", + " response = openai.chat.completions.create(model=MODEL, messages=messages, tools=tools)\n", + "\n", + " if response.choices[0].finish_reason==\"tool_calls\":\n", + " message = response.choices[0].message\n", + " responses = handle_tool_calls(message)\n", + " messages.append(message)\n", + " messages.extend(responses)\n", + " response = openai.chat.completions.create(model=MODEL, messages=messages)\n", + " \n", + " return response.choices[0].message.content" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "0a86b195", + "metadata": {}, + "outputs": [], + "source": [ + "def handle_tool_calls(message):\n", + " responses = []\n", + " for tool_call in message.tool_calls:\n", + " if tool_call.function.name == \"get_ticket_price\":\n", + " arguments = json.loads(tool_call.function.arguments)\n", + " city = arguments.get('destination_city')\n", + " price_details = get_ticket_price(city)\n", + " responses.append({\n", + " \"role\": \"tool\",\n", + " \"content\": price_details,\n", + " \"tool_call_id\": tool_call.id\n", + " })\n", + " elif tool_call.function.name == \"set_ticket_price\":\n", + " arguments = json.loads(tool_call.function.arguments)\n", + " city = arguments.get('destination_city')\n", + " price = arguments.get('price')\n", + " set_price = set_ticket_price(city, price)\n", + " print(set_price)\n", + " responses.append({\n", + " \"role\": \"tool\",\n", + " \"content\": str(set_price),\n", + " \"tool_call_id\": tool_call.id\n", + " })\n", + " return responses" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "ea49f374", + "metadata": {}, + "outputs": [], + "source": [ + "gr.ChatInterface(fn=chat, type=\"messages\").launch()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "4682c861", + 
"metadata": {}, + "outputs": [], + "source": [ + "def chat(message, history):\n", + " history = [{\"role\":h[\"role\"], \"content\":h[\"content\"]} for h in history]\n", + " messages = [{\"role\": \"system\", \"content\": system_message}] + history + [{\"role\": \"user\", \"content\": message}]\n", + " response = openai.chat.completions.create(model=MODEL, messages=messages, tools=tools)\n", + "\n", + " while response.choices[0].finish_reason==\"tool_calls\":\n", + " message = response.choices[0].message\n", + " responses = handle_tool_calls(message)\n", + " messages.append(message)\n", + " messages.extend(responses)\n", + " response = openai.chat.completions.create(model=MODEL, messages=messages, tools=tools)\n", + " \n", + " return response.choices[0].message.content" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "d18765d0", + "metadata": {}, + "outputs": [], + "source": [ + "import sqlite3\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "c755c847", + "metadata": {}, + "outputs": [], + "source": [ + "DB = \"prices.db\"\n", + "\n", + "with sqlite3.connect(DB) as conn:\n", + " cursor = conn.cursor()\n", + " cursor.execute('CREATE TABLE IF NOT EXISTS prices (city TEXT PRIMARY KEY, price REAL)')\n", + " conn.commit()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "d2cf2f06", + "metadata": {}, + "outputs": [], + "source": [ + "def get_ticket_price(city):\n", + " print(f\"DATABASE TOOL CALLED: Getting price for {city}\", flush=True)\n", + " with sqlite3.connect(DB) as conn:\n", + " cursor = conn.cursor()\n", + " cursor.execute('SELECT price FROM prices WHERE city = ?', (city.lower(),))\n", + " result = cursor.fetchone()\n", + " return f\"Ticket price to {city} is ${result[0]}\" if result else \"No price data available for this city\"" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "c8cfc267", + "metadata": {}, + "outputs": [], + "source": [ + "get_ticket_price(\"London\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "3a8ae250", + "metadata": {}, + "outputs": [], + "source": [ + "def set_ticket_price(city, price):\n", + " with sqlite3.connect(DB) as conn:\n", + " cursor = conn.cursor()\n", + " cursor.execute('INSERT INTO prices (city, price) VALUES (?, ?) ON CONFLICT(city) DO UPDATE SET price = ?', (city.lower(), price, price))\n", + " conn.commit()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "22aa42e6", + "metadata": {}, + "outputs": [], + "source": [ + "ticket_prices = {\"london\":799, \"paris\": 899, \"tokyo\": 1420, \"sydney\": 2999}\n", + "for city, price in ticket_prices.items():\n", + " set_ticket_price(city, price)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "4ff9e5ae", + "metadata": {}, + "outputs": [], + "source": [ + "get_ticket_price(\"Tokyo\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "930cd554", + "metadata": {}, + "outputs": [], + "source": [ + "gr.ChatInterface(fn=chat, type=\"messages\").launch()" + ] + }, + { + "cell_type": "markdown", + "id": "6f2bb40d", + "metadata": {}, + "source": [ + "## Exercise\n", + "\n", + "Add a tool to set the price of a ticket!" + ] + }, + { + "cell_type": "markdown", + "id": "a3712922", + "metadata": {}, + "source": [ + "\n", + " \n", + " \n", + " \n", + " \n", + "
## Business Applications\n",
+    "\n",
+    "Hopefully this hardly needs to be stated! You now have the ability to give actions to your LLMs. This Airline Assistant can now do more than answer questions - it could interact with booking APIs to make bookings!\n",
+    "
" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": ".venv", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.12.12" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/week2/community-contributions/meesam-gradio-ollama.ipynb b/week2/community-contributions/meesam-gradio-ollama.ipynb new file mode 100644 index 000000000..665ac9958 --- /dev/null +++ b/week2/community-contributions/meesam-gradio-ollama.ipynb @@ -0,0 +1,116 @@ +{ + "cells": [ + { + "cell_type": "code", + "execution_count": null, + "id": "7e7481b2", + "metadata": {}, + "outputs": [], + "source": [ + "import os\n", + "from ollama import Client\n", + "from dotenv import load_dotenv\n", + "import gradio as gr" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "64c8bc7d", + "metadata": {}, + "outputs": [], + "source": [ + "load_dotenv(override=True)\n", + "\n", + "OLLAMA_BASE_URL=\"https://www.ollama.com\"\n", + "API_KEY=os.environ.get(\"OLLAMA_API_KEY\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "d7445168", + "metadata": {}, + "outputs": [], + "source": [ + "system_prompt= \"\"\"\n", + " You are a helpful assistant who responds to the user queries.\n", + "\"\"\"" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "b6f88855", + "metadata": {}, + "outputs": [], + "source": [ + "ollama = Client(\n", + " host=OLLAMA_BASE_URL,\n", + " headers={'Authorization': 'Bearer ' + os.environ.get('OLLAMA_API_KEY')}\n", + ")\n", + "\n", + "def stream_chat(prompt):\n", + " messages = [\n", + " {\"role\":\"system\", \"content\": system_prompt},\n", + " {\"role\":\"user\", \"content\": prompt}\n", + " ]\n", + " result = ollama.chat(\n", + " model=\"gpt-oss:120b\",\n", + " messages=messages, \n", + " stream=True)\n", + "\n", + " response = \"\"\n", + " for chunk in result:\n", + " response += chunk['message']['content']\n", + " yield response\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "5a4f28a4", + "metadata": {}, + "outputs": [], + "source": [ + "message_input = gr.Textbox(label=\"Input\")\n", + "message_output = gr.Markdown(label=\"Answer\")\n", + "\n", + "view = gr.Interface(\n", + " fn=stream_chat,\n", + " title=\"Welcome to Ollama Cloud\", \n", + " inputs=[message_input], \n", + " outputs=[message_output], \n", + " examples=[\n", + " \"Explain the Transformer architecture to a layperson\",\n", + " \"Explain the Transformer architecture to an aspiring AI engineer\",\n", + " ], \n", + " flagging_mode=\"never\"\n", + " )\n", + "\n", + "view.launch(inbrowser=True)" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": ".venv", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.12.12" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/week2/community-contributions/meesam-week2-day1.ipynb b/week2/community-contributions/meesam-week2-day1.ipynb new file mode 100644 index 000000000..aec8a3aa9 --- /dev/null +++ b/week2/community-contributions/meesam-week2-day1.ipynb @@ -0,0 +1,214 @@ +{ + "cells": [ + { + "cell_type": 
"code", + "execution_count": null, + "id": "87142eac", + "metadata": {}, + "outputs": [], + "source": [ + "# imports\n", + "\n", + "import os\n", + "import requests\n", + "from dotenv import load_dotenv\n", + "from openai import OpenAI\n", + "from IPython.display import Markdown, display" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "b04cebfd", + "metadata": {}, + "outputs": [], + "source": [ + "load_dotenv(override=True)\n", + "openai_api_key = os.getenv('OPENAI_API_KEY')\n", + "anthropic_api_key = os.getenv('ANTHROPIC_API_KEY')\n", + "google_api_key = os.getenv('GOOGLE_API_KEY')\n", + "\n", + "if openai_api_key:\n", + " print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n", + "else:\n", + " print(\"OpenAI API Key not set\")\n", + " \n", + "if anthropic_api_key:\n", + " print(f\"Anthropic API Key exists and begins {anthropic_api_key[:7]}\")\n", + "else:\n", + " print(\"Anthropic API Key not set (and this is optional)\")\n", + "\n", + "if google_api_key:\n", + " print(f\"Google API Key exists and begins {google_api_key[:2]}\")\n", + "else:\n", + " print(\"Google API Key not set (and this is optional)\")\n", + "\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "89cc40e2", + "metadata": {}, + "outputs": [], + "source": [ + "\n", + "openai = OpenAI()\n", + "\n", + "\n", + "anthropic_url = \"https://api.anthropic.com/v1/\"\n", + "gemini_url = \"https://generativelanguage.googleapis.com/v1beta/openai/\"\n", + "\n", + "anthropic = OpenAI(api_key=anthropic_api_key, base_url=anthropic_url)\n", + "gemini = OpenAI(api_key=google_api_key, base_url=gemini_url)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "129d9cbf", + "metadata": {}, + "outputs": [], + "source": [ + "# Let's make a conversation between GPT-4.1-mini and Claude-3.5-haiku\n", + "# We're using cheap versions of models so the costs will be minimal\n", + "\n", + "gpt_model = \"gpt-4.1-mini\"\n", + "claude_model = \"claude-3-5-haiku-latest\"\n", + "gemini_model = \"gemini-2.5-flash\"\n", + "\n", + "gpt_system = \"You are the one who loves football \\\n", + " defend your sports among two other chatbots.\"\n", + "\n", + "claude_system = \"You are the one who loves basketball \\\n", + " defend your sports among two other chatbots.\"\n", + "\n", + "gemini_system = \"You are the one who loves cricket \\\n", + " defend your sports among two other chatbots.\"\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "3f8eabfe", + "metadata": {}, + "outputs": [], + "source": [ + "conversation = []\n", + "\n", + "def complete_conversation():\n", + " return \"\\n\".join(conversation)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "dccfc1e0", + "metadata": {}, + "outputs": [], + "source": [ + "def alex():\n", + " user_prompt= f\"\"\"You are alex and you are in conversation with blake and charlie. \\\n", + " {complete_conversation()} \\\n", + " Here is the conversation till now \\\n", + " Now continue with the conversation further.\"\"\"\n", + " return user_prompt\n", + "\n", + "def blake():\n", + " user_prompt= f\"\"\"You are blake and you are in conversation with alex and charlie. \\\n", + " {complete_conversation()} \\\n", + " Here is the conversation till now \\\n", + " Now continue with the conversation further.\"\"\"\n", + " return user_prompt\n", + "\n", + "def charlie():\n", + " user_prompt= f\"\"\"You are charlie and you are in conversation with alex and blake. 
\\\n", + " {complete_conversation()} \\\n", + " Here is the conversation till now \\\n", + " Now continue with the conversation further.\"\"\"\n", + " return user_prompt" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "f943baf0", + "metadata": {}, + "outputs": [], + "source": [ + "def alex_speaks():\n", + " res = openai.chat.completions.create(\n", + " model=gpt_model,\n", + " messages=[\n", + " {\"role\":\"system\", \"content\": gpt_system},\n", + " {\"role\":\"user\", \"content\": alex()}\n", + " ]\n", + " )\n", + " return res.choices[0].message.content\n", + "\n", + "def blake_speaks():\n", + " res = anthropic.chat.completions.create(\n", + " model=claude_model,\n", + " messages=[\n", + " {\"role\":\"system\", \"content\": claude_system},\n", + " {\"role\":\"user\", \"content\": blake()}\n", + " ]\n", + " )\n", + " return res.choices[0].message.content\n", + "\n", + "def charlie_speaks():\n", + " res = gemini.chat.completions.create(\n", + " model=gemini_model,\n", + " messages=[\n", + " {\"role\":\"system\", \"content\": gemini_system},\n", + " {\"role\":\"user\", \"content\": charlie()}\n", + " ]\n", + " )\n", + " return res.choices[0].message.content" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "3d71d6ad", + "metadata": {}, + "outputs": [], + "source": [ + "for i in range(1):\n", + " alex_res = alex_speaks()\n", + " print(f\"Alex: \\n {alex_res}\")\n", + " conversation.append(f\"Alex: {alex_res}\")\n", + "\n", + " blake_res = blake_speaks()\n", + " print(f\"Blake: \\n {blake_res}\")\n", + " conversation.append(f\"Blake: {blake_res}\")\n", + "\n", + " charlie_res = charlie_speaks()\n", + " print(f\"Charlie: \\n {charlie_res}\")\n", + " conversation.append(f\"Charlie: {charlie_res}\")\n", + " " + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": ".venv", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.12.12" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +}