Commit d962621

simonx1 authored and alexrudall committed
Add /batches endpoint
1 parent 494a39b commit d962621

File tree

12 files changed: +1817 −0 lines changed


README.md

Lines changed: 67 additions & 0 deletions
```diff
@@ -35,6 +35,7 @@ Stream text with GPT-4, transcribe and translate audio with Whisper, or create i
 - [Functions](#functions)
 - [Edits](#edits)
 - [Embeddings](#embeddings)
+- [Batches](#batches)
 - [Files](#files)
 - [Finetunes](#finetunes)
 - [Assistants](#assistants)
```
@@ -486,6 +487,72 @@ puts response.dig("data", 0, "embedding")

### Batches

The Batches endpoint allows you to create and manage large batches of API requests to run asynchronously. Currently, only the `/v1/chat/completions` endpoint is supported for batches.

To use the Batches endpoint, you first need to upload a JSONL file containing the batch requests using the Files endpoint, with the purpose set to `batch`. Each line in the JSONL file represents a single request and should have the following format:
```json
{"custom_id": "request-1", "method": "POST", "url": "/v1/chat/completions", "body": {"model": "gpt-3.5-turbo", "messages": [{"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "What is 2+2?"}]}}
```
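For instance, an input file in that format could be generated from Ruby before uploading. This is a sketch, not part of the commit: the filename `batch_input.jsonl` is an arbitrary choice, and the upload step assumes a configured `client`:

```ruby
require "json"

# Build a JSONL batch input file: one request object per line.
requests = [
  {
    custom_id: "request-1",
    method: "POST",
    url: "/v1/chat/completions",
    body: {
      model: "gpt-3.5-turbo",
      messages: [
        { role: "system", content: "You are a helpful assistant." },
        { role: "user", content: "What is 2+2?" }
      ]
    }
  }
]

File.write("batch_input.jsonl", requests.map { |r| JSON.generate(r) }.join("\n"))

# Then upload it with purpose "batch" (requires a configured client):
#   file = client.files.upload(parameters: { file: "batch_input.jsonl", purpose: "batch" })
#   input_file_id = file["id"]
```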
Once you have uploaded the JSONL file, you can create a new batch by providing the file ID, endpoint, and completion window:

```ruby
response = client.batches.create(
  parameters: {
    input_file_id: "file-abc123",
    endpoint: "/v1/chat/completions",
    completion_window: "24h"
  }
)
batch_id = response["id"]
```
You can retrieve information about a specific batch using its ID:

```ruby
batch = client.batches.retrieve(id: batch_id)
```

To cancel a batch that is in progress:

```ruby
client.batches.cancel(id: batch_id)
```

You can also list all the batches:

```ruby
client.batches.list
```
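Because batches run asynchronously, a caller typically polls `retrieve` until the batch reaches a terminal state. The sketch below is not part of the commit; the status strings and polling interval are assumptions based on the Batch API's documented lifecycle:

```ruby
# Statuses after which a batch will no longer change (assumed values).
TERMINAL_STATUSES = %w[completed failed expired cancelled].freeze

# Poll a batch until it finishes. Assumes `client` responds to
# `batches.retrieve` as in the examples above.
def wait_for_batch(client, batch_id, interval: 30)
  loop do
    batch = client.batches.retrieve(id: batch_id)
    return batch if TERMINAL_STATUSES.include?(batch["status"])
    sleep(interval)
  end
end
```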
The output and error files for a batch can be accessed using the `output_file_id` and `error_file_id` fields in the batch object, respectively. These files are in JSONL format, with each line representing the output or error for a single request. The output object has the following format:

```json
{
  "id": "response-1",
  "custom_id": "request-1",
  "response": {
    "id": "chatcmpl-abc123",
    "object": "chat.completion",
    "created": 1677858242,
    "model": "gpt-3.5-turbo-0301",
    "choices": [
      {
        "index": 0,
        "message": {
          "role": "assistant",
          "content": "2+2 equals 4."
        }
      }
    ]
  }
}
```

If a request fails with a non-HTTP error, the error object will contain more information about the cause of the failure.
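To consume those files from Ruby, the output can be fetched via the Files endpoint and indexed by `custom_id`. This is a sketch, not part of the commit, assuming your version of the gem exposes `client.files.content(id:)` returning the raw JSONL as a string:

```ruby
require "json"

# Download a finished batch's output file and index results by custom_id.
# Assumes `client.files.content(id:)` returns the raw JSONL string.
def batch_results(client, batch)
  raw = client.files.content(id: batch["output_file_id"])
  raw.lines.map { |line| JSON.parse(line) }
     .to_h { |row| [row["custom_id"], row] }
end
```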
### Files

Put your data in a `.jsonl` file like this:

lib/openai.rb

Lines changed: 1 addition & 0 deletions
```diff
@@ -14,6 +14,7 @@
 require_relative "openai/run_steps"
 require_relative "openai/audio"
 require_relative "openai/version"
+require_relative "openai/batches"

 module OpenAI
   class Error < StandardError; end
```

lib/openai/batches.rb

Lines changed: 23 additions & 0 deletions
```ruby
module OpenAI
  class Batches
    def initialize(client:)
      @client = client.beta(assistants: "v1")
    end

    def list
      @client.get(path: "/batches")
    end

    def retrieve(id:)
      @client.get(path: "/batches/#{id}")
    end

    def create(parameters: {})
      @client.json_post(path: "/batches", parameters: parameters)
    end

    def cancel(id:)
      @client.post(path: "/batches/#{id}/cancel")
    end
  end
end
```

lib/openai/client.rb

Lines changed: 4 additions & 0 deletions
```diff
@@ -78,6 +78,10 @@ def run_steps
       @run_steps ||= OpenAI::RunSteps.new(client: self)
     end

+    def batches
+      @batches ||= OpenAI::Batches.new(client: self)
+    end
+
     def moderations(parameters: {})
       json_post(path: "/moderations", parameters: parameters)
     end
```

spec/fixtures/cassettes/batch_cancel.yml

Lines changed: 189 additions & 0 deletions

0 commit comments