Commit 775eabe

Merge pull request alexrudall#477 from alexrudall/gpt4o
Add GPT-4o to README
2 parents 0a897ba + 1d368c8

File tree

1 file changed: +15 −15 lines


README.md

Lines changed: 15 additions & 15 deletions
@@ -6,7 +6,7 @@

 Use the [OpenAI API](https://openai.com/blog/openai-api/) with Ruby! 🤖❤️

-Stream text with GPT-4, transcribe and translate audio with Whisper, or create images with DALL·E...
+Stream text with GPT-4o, transcribe and translate audio with Whisper, or create images with DALL·E...

 [🚢 Hire me](https://peaceterms.com?utm_source=ruby-openai&utm_medium=readme&utm_id=26072023) | [🎮 Ruby AI Builders Discord](https://discord.gg/k4Uc224xVD) | [🐦 Twitter](https://twitter.com/alexrudall) | [🧠 Anthropic Gem](https://github.com/alexrudall/anthropic) | [🚂 Midjourney Gem](https://github.com/alexrudall/midjourney)

@@ -233,7 +233,7 @@ client = OpenAI::Client.new(

 client.chat(
     parameters: {
-        model: "llama3", # Required.
+        model: "gpt-4o", # Required.
         messages: [{ role: "user", content: "Hello!"}], # Required.
         temperature: 0.7,
         stream: proc do |chunk, _bytesize|
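
For context, this hunk edits the snippet that points the client at a custom endpoint via the gem's uri_base option. A minimal sketch of that surrounding setup, assuming a local OpenAI-compatible server on port 11434 (the `client = OpenAI::Client.new(` in the hunk header is the tail of it):

```ruby
require "openai"

# Point the gem at a self-hosted, OpenAI-compatible server instead of
# api.openai.com (the localhost port here is an assumption).
client = OpenAI::Client.new(
    uri_base: "http://localhost:11434"
)
```
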
@@ -273,7 +273,7 @@ To estimate the token-count of your text:

 ```ruby
 OpenAI.rough_token_count("Your text")
-````
+```

 If you need a more accurate count, try [tiktoken_ruby](https://github.com/IAPark/tiktoken_ruby).

@@ -283,7 +283,7 @@ There are different models that can be used to generate text. For a full list an

 ```ruby
 client.models.list
-client.models.retrieve(id: "gpt-3.5-turbo")
+client.models.retrieve(id: "gpt-4o")
 ```

 ### Chat
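
The updated snippet returns plain hashes; a minimal sketch of pulling the model IDs out of the list response, assuming OpenAI's usual "data" envelope:

```ruby
# Print the ID of every model visible to the current API key.
client.models.list["data"].each { |model| puts model["id"] }
```
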
@@ -293,7 +293,7 @@ GPT is a model that can be used to generate text in a conversational style. You
 ```ruby
 response = client.chat(
     parameters: {
-        model: "gpt-3.5-turbo", # Required.
+        model: "gpt-4o", # Required.
         messages: [{ role: "user", content: "Hello!"}], # Required.
         temperature: 0.7,
     })
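
The chat response is also a plain hash; the assistant's reply sits under the first choice, as in the README's own example:

```ruby
# Dig the generated message out of the first choice.
puts response.dig("choices", 0, "message", "content")
```
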
@@ -310,7 +310,7 @@ You can stream from the API in realtime, which can be much faster and used to cr
 ```ruby
 client.chat(
     parameters: {
-        model: "gpt-3.5-turbo", # Required.
+        model: "gpt-4o", # Required.
         messages: [{ role: "user", content: "Describe a character called Anna!"}], # Required.
         temperature: 0.7,
         stream: proc do |chunk, _bytesize|
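
Inside the stream proc, each chunk arrives as a parsed hash whose delta carries the next fragment of text; a minimal sketch of the proc body, matching the README's streaming pattern:

```ruby
stream: proc do |chunk, _bytesize|
    # Each chunk's delta holds the next fragment of the reply.
    print chunk.dig("choices", 0, "delta", "content")
end
```
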
@@ -351,7 +351,7 @@ You can set the response_format to ask for responses in JSON:
 ```ruby
 response = client.chat(
     parameters: {
-        model: "gpt-3.5-turbo",
+        model: "gpt-4o",
         response_format: { type: "json_object" },
         messages: [{ role: "user", content: "Hello! Give me some JSON please."}],
         temperature: 0.7,
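
With response_format set to json_object, the message content comes back as a JSON string; a minimal sketch of parsing it (json is in Ruby's stdlib):

```ruby
require "json"

# The model's reply is a JSON document encoded as a string.
data = JSON.parse(response.dig("choices", 0, "message", "content"))
puts data.inspect
```
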
@@ -371,7 +371,7 @@ You can stream it as well!
 ```ruby
 response = client.chat(
     parameters: {
-        model: "gpt-3.5-turbo",
+        model: "gpt-4o",
         messages: [{ role: "user", content: "Can I have some JSON please?"}],
         response_format: { type: "json_object" },
         stream: proc do |chunk, _bytesize|
@@ -408,7 +408,7 @@ end
 response =
   client.chat(
     parameters: {
-      model: "gpt-3.5-turbo",
+      model: "gpt-4o",
       messages: [
         {
           "role": "user",
@@ -472,7 +472,7 @@ Hit the OpenAI API for a completion using other GPT-3 models:
 ```ruby
 response = client.completions(
     parameters: {
-        model: "gpt-3.5-turbo",
+        model: "gpt-4o",
         prompt: "Once upon a time",
         max_tokens: 5
     })
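
On the legacy completions endpoint, the generated text sits under "text" rather than a message object:

```ruby
# Completions responses expose the text directly on each choice.
puts response.dig("choices", 0, "text")
```
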
@@ -508,7 +508,7 @@ To use the Batches endpoint, you need to first upload a JSONL file containing th
     "method": "POST",
     "url": "/v1/chat/completions",
     "body": {
-      "model": "gpt-3.5-turbo",
+      "model": "gpt-4o",
       "messages": [
         { "role": "system", "content": "You are a helpful assistant." },
         { "role": "user", "content": "What is 2+2?" }
@@ -568,7 +568,7 @@ These files are in JSONL format, with each line representing the output or error
     "id": "chatcmpl-abc123",
     "object": "chat.completion",
     "created": 1677858242,
-    "model": "gpt-3.5-turbo",
+    "model": "gpt-4o",
     "choices": [
       {
         "index": 0,
@@ -618,7 +618,7 @@ You can then use this file ID to create a fine tuning job:
 response = client.finetunes.create(
     parameters: {
     training_file: file_id,
-    model: "gpt-3.5-turbo"
+    model: "gpt-4o"
 })
 fine_tune_id = response["id"]
 ```
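
The returned job ID can then be polled; a minimal sketch using the gem's finetunes interface:

```ruby
# Check on the fine-tuning job until it completes.
response = client.finetunes.retrieve(id: fine_tune_id)
puts response["status"] # e.g. "running" or "succeeded"
```
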
@@ -664,7 +664,7 @@ To create a new assistant:
 ```ruby
 response = client.assistants.create(
     parameters: {
-        model: "gpt-3.5-turbo",
+        model: "gpt-4o",
         name: "OpenAI-Ruby test assistant",
         description: nil,
         instructions: "You are a Ruby dev bot. When asked a question, write and run Ruby code to answer the question",
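
The new assistant's ID can then drive a thread and a run; a compressed sketch of that flow, assuming the gem's threads/messages/runs interfaces:

```ruby
assistant_id = response["id"]

# Open a thread, add a user message, then start a run on it.
thread_id = client.threads.create["id"]
client.messages.create(
    thread_id: thread_id,
    parameters: { role: "user", content: "What is 2+2?" })
run_id = client.runs.create(
    thread_id: thread_id,
    parameters: { assistant_id: assistant_id })["id"]
```
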
@@ -1023,7 +1023,7 @@ HTTP errors can be caught like this:

 ```
 begin
-  OpenAI::Client.new.models.retrieve(id: "gpt-3.5-turbo")
+  OpenAI::Client.new.models.retrieve(id: "gpt-4o")
 rescue Faraday::Error => e
   raise "Got a Faraday error: #{e}"
 end
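
Beyond re-raising, the rescued error exposes the HTTP details; a minimal sketch, assuming Faraday's standard response hash:

```ruby
begin
  OpenAI::Client.new.models.retrieve(id: "gpt-4o")
rescue Faraday::Error => e
  # Faraday errors carry the failed response's status and body.
  puts e.response[:status]
  puts e.response[:body]
end
```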
