client.messages.retrieve(thread_id: thread_id, id: message_id) # -> Fails after thread is deleted
```
### Runs
To submit a thread to be evaluated with the model of an assistant, create a `Run` as follows (Note: This is one place where OpenAI will take your money):
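A minimal sketch of creating a Run and polling it until it finishes. The `wait_for_run` helper is not part of the gem; `client`, `thread_id`, and `assistant_id` are assumed to come from earlier setup, and with a real client the calls hit the OpenAI API (and bill your account):

```ruby
# Statuses after which a Run will no longer change.
TERMINAL_STATUSES = %w[completed failed cancelled expired].freeze

# Hypothetical helper: poll the Run's status until it reaches a terminal state.
def wait_for_run(client, thread_id:, run_id:, interval: 1)
  loop do
    status = client.runs.retrieve(thread_id: thread_id, id: run_id)["status"]
    return status if TERMINAL_STATUSES.include?(status)
    sleep(interval)
  end
end

# With a live client (assumed setup), the flow would look like:
# response = client.runs.create(thread_id: thread_id,
#                               parameters: { assistant_id: assistant_id })
# wait_for_run(client, thread_id: thread_id, run_id: response["id"])
```

Polling is needed because Run creation returns immediately with a `queued` status rather than blocking until completion.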
The `status` response can include the following strings: `queued`, `in_progress`, ...
Fill in the transparent part of an image, or upload a mask with transparent sections to indicate the parts of an image that can be changed according to your prompt...
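A sketch of what that call can look like. `image_edit_params` is a hypothetical helper (not part of the gem) that just assembles the parameters hash for `client.images.edit`; the prompt and file names are assumptions:

```ruby
# Hypothetical helper: build the parameters hash for an image edit request,
# dropping the mask entry when no mask is supplied.
def image_edit_params(prompt:, image:, mask: nil)
  { prompt: prompt, image: image, mask: mask }.compact
end

params = image_edit_params(
  prompt: "A sunlit indoor lounge area with a pool containing a flamingo",
  image: "sunlit_lounge.png",  # assumed local file
  mask: "mask.png"             # assumed local file with transparent regions
)

# With a live client (assumed setup), the request and result would be:
# response = client.images.edit(parameters: params)
# puts response.dig("data", 0, "url")
```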
The transcriptions API takes as input the audio file you want to transcribe and returns the text in the desired output file format.
You can pass the language of the audio file to improve transcription quality. Supported languages are listed [here](https://github.com/openai/whisper#available-models-and-languages). You need to provide the language as an ISO-639-1 code, e.g. "en" for English or "ne" for Nepali. You can look up the codes [here](https://en.wikipedia.org/wiki/List_of_ISO_639_language_codes).
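A sketch of a transcription request with the language pinned. `transcribe_params` is a hypothetical helper (not part of the gem) that assembles the parameters hash for `client.audio.transcribe`; the audio path and model name are assumptions:

```ruby
# Hypothetical helper: build the parameters hash for a transcription request,
# opening the audio file in binary mode and omitting language when not given.
def transcribe_params(path, model: "whisper-1", language: nil)
  { model: model, file: File.open(path, "rb"), language: language }.compact
end

# With a live client (assumed setup), the request and result would be:
# response = client.audio.transcribe(
#   parameters: transcribe_params("path/to/audio.mp3", language: "en")
# )
# puts response["text"]
```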