Commit 812cbea: "Tweak README" (1 parent: 2b85439)

File tree: 1 file changed (+4, -4 lines)

README.md

Lines changed: 4 additions & 4 deletions
````diff
@@ -161,7 +161,7 @@ client.models.retrieve(id: "text-ada-001")
 - text-babbage-001
 - text-curie-001
 
-### GPT
+### Chat
 
 GPT is a model that can be used to generate text in a conversational style. You can use it to [generate a response](https://platform.openai.com/docs/api-reference/chat/create) to a sequence of [messages](https://platform.openai.com/docs/guides/chat/introduction):
 
````

````diff
@@ -176,9 +176,9 @@ puts response.dig("choices", 0, "message", "content")
 # => "Hello! How may I assist you today?"
 ```
 
-### Streaming GPT
+### Streaming Chat
 
-[Quick guide to streaming GPT with Rails 7 and Hotwire](https://gist.github.com/alexrudall/cb5ee1e109353ef358adb4e66631799d)
+[Quick guide to streaming Chat with Rails 7 and Hotwire](https://gist.github.com/alexrudall/cb5ee1e109353ef358adb4e66631799d)
 
 You can stream from the API in realtime, which can be much faster and used to create a more engaging user experience. Pass a [Proc](https://ruby-doc.org/core-2.6/Proc.html) (or any object with a `#call` method) to the `stream` parameter to receive the stream of completion chunks as they are generated. Each time one or more chunks is received, the proc will be called once with each chunk, parsed as a Hash. If OpenAI returns an error, `ruby-openai` will raise an error.
````
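The stream contract described in the hunk above (a proc called once per chunk, each chunk parsed as a Hash) can be sketched without calling the API. The chunk hashes and the `handler` name below are hand-written stand-ins, not real API output or part of this diff:

```ruby
# Simulated streamed chunks: hand-written stand-ins for the parsed
# Hashes the API would deliver, one per call to the stream proc.
chunks = [
  { "choices" => [{ "delta" => { "content" => "Once" } }] },
  { "choices" => [{ "delta" => { "content" => " upon" } }] },
  { "choices" => [{ "delta" => { "content" => " a time" } }] }
]

story = +""
handler = proc do |chunk|
  # Append each chunk's delta content as it arrives; `to_s` guards
  # against chunks with no content (e.g. the final stop chunk).
  story << chunk.dig("choices", 0, "delta", "content").to_s
end

# In real use, `handler` would be passed as the `stream:` parameter
# inside the client's chat parameters; here we drive it directly.
chunks.each { |c| handler.call(c) }
# story == "Once upon a time"
```

Any object responding to `#call` works in place of the proc, so the same handler can be a method object or a small class that also tracks state.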

````diff
@@ -195,7 +195,7 @@ client.chat(
 # => "Anna is a young woman in her mid-twenties, with wavy chestnut hair that falls to her shoulders..."
 ```
 
-Note: OpenAPI currently does not report token usage for streaming responses. To count tokens while streaming, try `OpenAI.rough_token_count` or [tiktoken_ruby](https://github.com/IAPark/tiktoken_ruby). Also, each call to the stream proc corresponds to a single token, so you can count the number of calls to the proc to get the completion token count.
+Note: OpenAPI currently does not report token usage for streaming responses. To count tokens while streaming, try `OpenAI.rough_token_count` or [tiktoken_ruby](https://github.com/IAPark/tiktoken_ruby). We think that each call to the stream proc corresponds to a single token, so you can also try counting the number of calls to the proc to get the completion token count.
 
 ### Functions
 
````

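The counting suggestion in the updated note can be sketched as a counter wrapped around the stream proc. The chunks and the `counting_handler` name are hypothetical stand-ins, and, as the note hedges, one call per token is only approximate:

```ruby
# Hand-written stand-in chunks (not real API output).
chunks = [
  { "choices" => [{ "delta" => { "content" => "Hello" } }] },
  { "choices" => [{ "delta" => { "content" => "!" } }] }
]

completion_tokens = 0
counting_handler = proc do |chunk|
  # One proc call per chunk; per the note, this roughly tracks
  # one completion token per call.
  completion_tokens += 1
  print chunk.dig("choices", 0, "delta", "content")
end

# In real use, `counting_handler` would be the `stream:` parameter;
# here we drive it directly with the stand-in chunks.
chunks.each { |c| counting_handler.call(c) }
# completion_tokens is now 2
```

For a count that does not depend on the one-call-per-token assumption, the accumulated text could instead be passed to `OpenAI.rough_token_count` or tiktoken_ruby after the stream finishes.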