The Batches endpoint allows you to create and manage large batches of API requests to run asynchronously. Currently, only the `/v1/chat/completions` endpoint is supported for batches.
To use the Batches endpoint, you need to first upload a JSONL file containing the batch requests using the Files endpoint. The file must be uploaded with the purpose set to `batch`. Each line in the JSONL file represents a single request and should have the following format:
```json
{
  "custom_id": "request-1",
  "method": "POST",
  "url": "/v1/chat/completions",
  "body": {
    "model": "gpt-3.5-turbo",
    "messages": [
      { "role": "system", "content": "You are a helpful assistant." },
      { "role": "user", "content": "What is 2+2?" }
    ]
  }
}
```
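A file in this format can be assembled programmatically. A minimal Python sketch (the file name and request contents are illustrative):

```python
import json

# Illustrative batch requests; each line needs a unique custom_id so you
# can match outputs back to inputs later.
requests = [
    {
        "custom_id": f"request-{i}",
        "method": "POST",
        "url": "/v1/chat/completions",
        "body": {
            "model": "gpt-3.5-turbo",
            "messages": [
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": question},
            ],
        },
    }
    for i, question in enumerate(["What is 2+2?", "What is 3+3?"], start=1)
]

# JSONL: one JSON object per line, no enclosing array, no trailing commas.
with open("batch_input.jsonl", "w") as f:
    for req in requests:
        f.write(json.dumps(req) + "\n")
```

The resulting `batch_input.jsonl` is what you upload via the Files endpoint with `purpose` set to `batch`.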
Once you have uploaded the JSONL file, you can create a new batch by providing the file ID, endpoint, and completion window:
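Assuming the file upload returned an ID such as `file-abc123` (illustrative), the batch-creation call can be sketched with only the standard library. The request is constructed but, to keep the sketch self-contained, not sent:

```python
import json
import urllib.request

# Illustrative placeholders: substitute your real API key and the file ID
# returned by the Files endpoint upload.
api_key = "sk-..."
input_file_id = "file-abc123"

payload = {
    "input_file_id": input_file_id,
    "endpoint": "/v1/chat/completions",
    "completion_window": "24h",
}

req = urllib.request.Request(
    "https://api.openai.com/v1/batches",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req) would submit the request; the JSON response
# is the batch object, whose id you use to poll for completion.
```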
The output and error files for a batch can be accessed using the `output_file_id` and `error_file_id` fields in the batch object, respectively. Both files are in JSONL format, with each line representing the output or error for a single request, identified by the `custom_id` you supplied in the input file.
Once the batch's `completed_at` timestamp is present, you can fetch the output or error files:
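Each line of the downloaded output file can be parsed and keyed by `custom_id` to match results back to the original requests. A sketch, using a single illustrative output line (field names beyond `custom_id` are assumptions about the response envelope, not taken from this document):

```python
import json

# One illustrative line from a downloaded output file; real lines carry
# the full chat-completion response in response["body"].
output_jsonl = (
    '{"custom_id": "request-1", '
    '"response": {"status_code": 200, "body": {"choices": []}}, '
    '"error": null}\n'
)

# Key each result by the custom_id supplied in the input JSONL file.
results = {}
for line in output_jsonl.splitlines():
    record = json.loads(line)
    results[record["custom_id"]] = record["response"]
```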