395 changes: 395 additions & 0 deletions week1/community-contributions/Rohan/day2.ipynb
@@ -0,0 +1,395 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "d15d8294-3328-4e07-ad16-8a03e9bbfdb9",
"metadata": {},
"source": [
"# Welcome to the Day 2 Lab!\n"
]
},
{
"cell_type": "markdown",
"id": "ada885d9-4d42-4d9b-97f0-74fbbbfe93a9",
"metadata": {},
"source": [
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../assets/resources.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#f71;\">Just before we get started --</h2>\n",
" <span style=\"color:#f71;\">I thought I'd take a second to point you at this page of useful resources for the course. This includes links to all the slides.<br/>\n",
" <a href=\"https://edwarddonner.com/2024/11/13/llm-engineering-resources/\">https://edwarddonner.com/2024/11/13/llm-engineering-resources/</a><br/>\n",
" Please keep this bookmarked, and I'll continue to add more useful links there over time.\n",
" </span>\n",
" </td>\n",
" </tr>\n",
"</table>"
]
},
{
"cell_type": "markdown",
"id": "79ffe36f",
"metadata": {},
"source": [
"## First - let's talk about the Chat Completions API\n",
"\n",
"1. The simplest way to call an LLM\n",
"2. It's called Chat Completions because it's saying: \"here is a conversation, please predict what should come next\"\n",
"3. The Chat Completions API was invented by OpenAI, but it's so popular that everybody uses it!\n",
"\n",
"### We will start by calling OpenAI again - but don't worry non-OpenAI people, your time is coming!\n"
]
},
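{
"cell_type": "markdown",
"id": "a3e1c9b2",
"metadata": {},
"source": [
"Here's a quick sketch of the messages format that Chat Completions expects: a list of dicts, each with a `role` (`system`, `user` or `assistant`) and `content`. The conversation below is purely illustrative."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b7f4d2e8",
"metadata": {},
"outputs": [],
"source": [
"# An illustrative multi-turn conversation in the Chat Completions format\n",
"# The API's job is to predict the next message - the assistant's reply to the final user turn\n",
"\n",
"example_messages = [\n",
"    {\"role\": \"system\", \"content\": \"You are a helpful assistant\"},\n",
"    {\"role\": \"user\", \"content\": \"What is 2+2?\"},\n",
"    {\"role\": \"assistant\", \"content\": \"2+2 is 4.\"},\n",
"    {\"role\": \"user\", \"content\": \"And if you double that?\"}\n",
"]\n",
"\n",
"example_messages"
]
},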
{
"cell_type": "code",
"execution_count": null,
"id": "e38f17a0",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from dotenv import load_dotenv\n",
"\n",
"load_dotenv(override=True)\n",
"api_key = os.getenv('OPENAI_API_KEY')\n",
"\n",
"if not api_key:\n",
" print(\"No API key was found - please head over to the troubleshooting notebook in this folder to identify & fix!\")\n",
"elif not api_key.startswith(\"sk-proj-\"):\n",
" print(\"An API key was found, but it doesn't start sk-proj-; please check you're using the right key - see troubleshooting notebook\")\n",
"else:\n",
" print(\"API key found and looks good so far!\")\n"
]
},
{
"cell_type": "markdown",
"id": "97846274",
"metadata": {},
"source": [
"## Do you know what an Endpoint is?\n",
"\n",
"If not, please review the Technical Foundations guide in the guides folder\n",
"\n",
"And, here is an endpoint that might interest you..."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5af5c188",
"metadata": {},
"outputs": [],
"source": [
"import requests\n",
"\n",
"headers = {\"Authorization\": f\"Bearer {api_key}\", \"Content-Type\": \"application/json\"}\n",
"\n",
"payload = {\n",
" \"model\": \"gpt-5-nano\",\n",
" \"messages\": [\n",
" {\"role\": \"user\", \"content\": \"Tell me a fun fact\"}]\n",
"}\n",
"\n",
"payload"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2d0ab242",
"metadata": {},
"outputs": [],
"source": [
"response = requests.post(\n",
" \"https://api.openai.com/v1/chat/completions\",\n",
" headers=headers,\n",
" json=payload\n",
")\n",
"\n",
"response.json()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "cb11a9f6",
"metadata": {},
"outputs": [],
"source": [
"response.json()[\"choices\"][0][\"message\"][\"content\"]"
]
},
{
"cell_type": "markdown",
"id": "cea3026a",
"metadata": {},
"source": [
"# What is the openai package?\n",
"\n",
"It's known as a Python Client Library.\n",
"\n",
"It's nothing more than a wrapper around making this exact call to the http endpoint.\n",
"\n",
"It just allows you to work with nice Python code instead of messing around with janky json objects.\n",
"\n",
"But that's it. It's open-source and lightweight. Some people think it contains OpenAI model code - it doesn't!\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "490fdf09",
"metadata": {},
"outputs": [],
"source": [
"# Create OpenAI client\n",
"\n",
"from openai import OpenAI\n",
"openai = OpenAI()\n",
"\n",
"response = openai.chat.completions.create(model=\"gpt-5-nano\", messages=[{\"role\": \"user\", \"content\": \"Tell me a fun fact\"}])\n",
"\n",
"response.choices[0].message.content\n",
"\n"
]
},
{
"cell_type": "markdown",
"id": "c7739cda",
"metadata": {},
"source": [
"## And then this great thing happened:\n",
"\n",
"OpenAI's Chat Completions API was so popular, that the other model providers created endpoints that are identical.\n",
"\n",
"They are known as the \"OpenAI Compatible Endpoints\".\n",
"\n",
"For example, google made one here: https://generativelanguage.googleapis.com/v1beta/openai/\n",
"\n",
"And OpenAI decided to be kind: they said, hey, you can just use the same client library that we made for GPT. We'll allow you to specify a different endpoint URL and a different key, to use another provider.\n",
"\n",
"So you can use:\n",
"\n",
"```python\n",
"gemini = OpenAI(base_url=\"https://generativelanguage.googleapis.com/v1beta/openai/\", api_key=\"AIz....\")\n",
"gemini.chat.completions.create(...)\n",
"```\n",
"\n",
"And to be clear - even though OpenAI is in the code, we're only using this lightweight python client library to call the endpoint - there's no OpenAI model involved here.\n",
"\n",
"If you're confused, please review Guide 9 in the Guides folder!\n",
"\n",
"And now let's try it!\n",
"\n",
"## THIS IS OPTIONAL - but if you wish to try out Google Gemini, please visit:\n",
"\n",
"https://aistudio.google.com/\n",
"\n",
"And set up your API key at\n",
"\n",
"https://aistudio.google.com/api-keys\n",
"\n",
"And then add your key to the `.env` file, being sure to Save the .env file after you change it:\n",
"\n",
"`GOOGLE_API_KEY=AIz...`\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f74293bc",
"metadata": {},
"outputs": [],
"source": [
"GEMINI_BASE_URL = \"https://generativelanguage.googleapis.com/v1beta/openai/\"\n",
"\n",
"load_dotenv(override=True)\n",
"\n",
"google_api_key = os.getenv(\"GOOGLE_API_KEY\")\n",
"\n",
"if not google_api_key:\n",
" print(\"No API key was found - please be sure to add your key to the .env file, and save the file! Or you can skip the next 2 cells if you don't want to use Gemini\")\n",
"elif not google_api_key.startswith(\"AIz\"):\n",
" print(\"An API key was found, but it doesn't start AIz\")\n",
"else:\n",
" print(\"API key found and looks good so far!\")\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d060f484",
"metadata": {},
"outputs": [],
"source": [
"gemini = OpenAI(base_url=GEMINI_BASE_URL, api_key=google_api_key)\n",
"\n",
"response = gemini.chat.completions.create(model=\"gemini-2.5-flash-lite\", messages=[{\"role\": \"user\", \"content\": \"Tell me a fun fact\"}])\n",
"\n",
"response.choices[0].message.content"
]
},
{
"cell_type": "markdown",
"id": "65272432",
"metadata": {},
"source": [
"## And Ollama also gives an OpenAI compatible endpoint\n",
"\n",
"...and it's on your local machine!\n",
"\n",
"If the next cell doesn't print \"Ollama is running\" then please open a terminal and run `ollama serve`"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f06280ad",
"metadata": {},
"outputs": [],
"source": [
"requests.get(\"http://localhost:11434\").content"
]
},
{
"cell_type": "markdown",
"id": "c6ef3807",
"metadata": {},
"source": [
"### Download llama3.2 from meta\n",
"\n",
"Change this to llama3.2:1b if your computer is smaller.\n",
"\n",
"Don't use llama3.3 or llama4! They are too big for your computer.."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e633481d",
"metadata": {},
"outputs": [],
"source": [
"!ollama pull llama3.2"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d9419762",
"metadata": {},
"outputs": [],
"source": [
"OLLAMA_BASE_URL = \"http://localhost:11434/v1\"\n",
"\n",
"ollama = OpenAI(base_url=OLLAMA_BASE_URL, api_key='ollama')\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e2456cdf",
"metadata": {},
"outputs": [],
"source": [
"# Get a fun fact\n",
"\n",
"response = ollama.chat.completions.create(model=\"llama3.2\", messages=[{\"role\": \"user\", \"content\": \"Tell me a fun fact\"}])\n",
"\n",
"response.choices[0].message.content"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1e6cae7f",
"metadata": {},
"outputs": [],
"source": [
"# Now let's try deepseek-r1:1.5b - this is DeepSeek \"distilled\" into Qwen from Alibaba Cloud\n",
"\n",
"!ollama pull deepseek-r1:1.5b"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "25002f25",
"metadata": {},
"outputs": [],
"source": [
"response = ollama.chat.completions.create(model=\"deepseek-r1:1.5b\", messages=[{\"role\": \"user\", \"content\": \"Tell me a fun fact\"}])\n",
"\n",
"response.choices[0].message.content"
]
},
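{
"cell_type": "markdown",
"id": "c2d8e4f1",
"metadata": {},
"source": [
"Reasoning models like deepseek-r1 usually include their chain of thought inside `<think>...</think>` tags at the start of the reply. If you only want the final answer, here's a minimal sketch (assuming the tags appear as plain text in the content) that strips the block with a regex:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d4f6a8b3",
"metadata": {},
"outputs": [],
"source": [
"import re\n",
"\n",
"# Remove the <think>...</think> reasoning block, if present, and keep just the answer\n",
"reply = response.choices[0].message.content\n",
"answer = re.sub(r\"<think>.*?</think>\", \"\", reply, flags=re.DOTALL).strip()\n",
"print(answer)"
]
},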
{
"cell_type": "markdown",
"id": "6e9fa1fc-eac5-4d1d-9be4-541b3f2b3458",
"metadata": {},
"source": [
"# HOMEWORK EXERCISE ASSIGNMENT\n",
"\n",
"Upgrade the day 1 project to summarize a webpage to use an Open Source model running locally via Ollama rather than OpenAI\n",
"\n",
"You'll be able to use this technique for all subsequent projects if you'd prefer not to use paid APIs.\n",
"\n",
"**Benefits:**\n",
"1. No API charges - open-source\n",
"2. Data doesn't leave your box\n",
"\n",
"**Disadvantages:**\n",
"1. Significantly less power than Frontier Model\n",
"\n",
"## Recap on installation of Ollama\n",
"\n",
"Simply visit [ollama.com](https://ollama.com) and install!\n",
"\n",
"Once complete, the ollama server should already be running locally. \n",
"If you visit: \n",
"[http://localhost:11434/](http://localhost:11434/)\n",
"\n",
"You should see the message `Ollama is running`. \n",
"\n",
"If not, bring up a new Terminal (Mac) or Powershell (Windows) and enter `ollama serve` \n",
"And in another Terminal (Mac) or Powershell (Windows), enter `ollama pull llama3.2` \n",
"Then try [http://localhost:11434/](http://localhost:11434/) again.\n",
"\n",
"If Ollama is slow on your machine, try using `llama3.2:1b` as an alternative. Run `ollama pull llama3.2:1b` from a Terminal or Powershell, and change the code from `MODEL = \"llama3.2\"` to `MODEL = \"llama3.2:1b\"`"
]
},
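{
"cell_type": "markdown",
"id": "e5a7c9d2",
"metadata": {},
"source": [
"Here is a minimal sketch of one possible approach, assuming the same ingredients as the Day 1 project (`requests` plus `BeautifulSoup` to fetch and clean the page). The `summarize` helper and the prompt wording are illustrative, not the official solution:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f8b2d4e6",
"metadata": {},
"outputs": [],
"source": [
"from bs4 import BeautifulSoup\n",
"\n",
"MODEL = \"llama3.2\"\n",
"\n",
"def summarize(url):\n",
"    # Fetch the page and extract its visible text\n",
"    soup = BeautifulSoup(requests.get(url).content, \"html.parser\")\n",
"    text = soup.get_text(separator=\"\\n\", strip=True)\n",
"    # Ask the local model, via the OpenAI-compatible endpoint, for a summary\n",
"    # Truncating the text keeps the prompt within a small local model's context window\n",
"    response = ollama.chat.completions.create(\n",
"        model=MODEL,\n",
"        messages=[\n",
"            {\"role\": \"system\", \"content\": \"You summarize webpages concisely in markdown\"},\n",
"            {\"role\": \"user\", \"content\": f\"Summarize this webpage:\\n\\n{text[:8000]}\"}\n",
"        ]\n",
"    )\n",
"    return response.choices[0].message.content\n",
"\n",
"summarize(\"https://edwarddonner.com\")"
]
},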
{
"cell_type": "code",
"execution_count": null,
"id": "6de38216-6d1c-48c4-877b-86d403f4e0f8",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.12"
}
},
"nbformat": 4,
"nbformat_minor": 5
}