
Commit 402c323

MrShahzebKhoso, merveenoyan, Vaibhavs10, pcuenca, and Deep-unlearning authored
Add audio text to text task (#1692)
This PR introduces support for the Audio-Text-to-Text task in huggingface.js.

- Added the audio-text-to-text task section under the packages/tasks/src/tasks/audio-text-to-text/ directory, consisting of about.md and data.ts.
- Ensured consistency with the existing task structure and documentation.

---------

Co-authored-by: Merve Noyan <merve@huggingface.co>
Co-authored-by: vb <vaibhavs10@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Steven Zheng <58599908+Deep-unlearning@users.noreply.github.com>
1 parent f734971 commit 402c323

File tree: 2 files changed (+209 / -0 lines)

packages/tasks/src/tasks/audio-text-to-text/about.md: 139 additions & 0 deletions

@@ -0,0 +1,139 @@
## Use Cases

> This task takes `audio` and a `text prompt` and returns `text` (answers, summaries, structured notes, etc.).

### Audio question answering

Ask targeted questions about lectures, podcasts, or calls and get context-aware answers.

**Example:** Audio: physics lecture → Prompt: “What did the teacher say about gravity and how is it measured?”

### Meeting notes & action items

Turn multi-speaker meetings into concise minutes with decisions, owners, and deadlines.

**Example:** Audio: weekly stand-up → Prompt: “Summarize key decisions and list action items with assignees.”

### Speech understanding & intent

Go beyond transcription to extract intent, sentiment, uncertainty, or emotion from spoken language.

**Example:** “I’m not sure I can finish this on time.” → Prompt: “Describe speaker intent and confidence.”

### Music & sound analysis (textual)

Describe instrumentation, genre, tempo, or sections, and suggest edits or techniques (text output only).

**Example:** Song demo → Prompt: “Identify key and tempo, then suggest jazz reharmonization ideas for the chorus.”

## Inference
You can use the `transformers` library to run any `audio-text-to-text` model: pass an audio file together with a text instruction and the model returns a text response. The following code examples show how to do so.
### Speech Transcription and Analysis

These models don’t just turn speech into text—they also capture tone, emotion, and speaker traits. This makes them useful for tasks like sentiment analysis or identifying speaker profiles.

You can try audio transcription with [Voxtral Mini](https://huggingface.co/mistralai/Voxtral-Mini-3B-2507) using the following code.
```python
from transformers import VoxtralForConditionalGeneration, AutoProcessor
import torch

device = "cuda"
repo_id = "mistralai/Voxtral-Mini-3B-2507"

processor = AutoProcessor.from_pretrained(repo_id)
model = VoxtralForConditionalGeneration.from_pretrained(repo_id, dtype=torch.bfloat16, device_map=device)

inputs = processor.apply_transcription_request(language="en", audio="https://huggingface.co/datasets/hf-internal-testing/dummy-audio-samples/resolve/main/obama.mp3", model_id=repo_id)
inputs = inputs.to(device, dtype=torch.bfloat16)

outputs = model.generate(**inputs, max_new_tokens=500)
decoded_outputs = processor.batch_decode(outputs[:, inputs.input_ids.shape[1]:], skip_special_tokens=True)

print("\nGenerated responses:")
print("=" * 80)
for decoded_output in decoded_outputs:
    print(decoded_output)
print("=" * 80)
```
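Beyond transcription, the same Voxtral checkpoint can follow free-form instructions about an audio clip, for example to comment on the speaker's tone. The snippet below is a minimal sketch of such a request; it assumes the processor's chat template accepts a user turn that mixes an audio entry (referenced by URL via the `path` key) with a text question, mirroring the transcription example above.

```python
from transformers import VoxtralForConditionalGeneration, AutoProcessor
import torch

device = "cuda"
repo_id = "mistralai/Voxtral-Mini-3B-2507"

processor = AutoProcessor.from_pretrained(repo_id)
model = VoxtralForConditionalGeneration.from_pretrained(repo_id, dtype=torch.bfloat16, device_map=device)

# One user turn combining an audio clip (referenced by URL) with a text instruction.
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "audio", "path": "https://huggingface.co/datasets/hf-internal-testing/dummy-audio-samples/resolve/main/obama.mp3"},
            {"type": "text", "text": "Describe the speaker's tone and summarize the main points."},
        ],
    }
]

inputs = processor.apply_chat_template(conversation)
inputs = inputs.to(device, dtype=torch.bfloat16)

outputs = model.generate(**inputs, max_new_tokens=500)
# Keep only the newly generated tokens, as in the transcription example.
decoded_outputs = processor.batch_decode(outputs[:, inputs.input_ids.shape[1]:], skip_special_tokens=True)
print(decoded_outputs[0])
```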
### Audio Question Answering

These models can understand audio directly and answer questions about it, for example summarizing a podcast clip or explaining parts of a recorded conversation.

You can experiment with [Qwen2-Audio-Instruct-Demo](https://huggingface.co/Qwen/Qwen2-Audio-Instruct-Demo) for conversations with both text and audio inputs, letting you ask follow-up questions about different sounds or speech clips.
```python
from io import BytesIO
from urllib.request import urlopen
import librosa
from transformers import Qwen2AudioForConditionalGeneration, AutoProcessor

processor = AutoProcessor.from_pretrained("Qwen/Qwen2-Audio-7B-Instruct")
model = Qwen2AudioForConditionalGeneration.from_pretrained("Qwen/Qwen2-Audio-7B-Instruct", device_map="auto")

conversation = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": [
        {"type": "audio", "audio_url": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2-Audio/audio/glass-breaking-151256.mp3"},
        {"type": "text", "text": "What's that sound?"},
    ]},
    {"role": "assistant", "content": "It is the sound of glass shattering."},
    {"role": "user", "content": [
        {"type": "text", "text": "What can you do when you hear that?"},
    ]},
    {"role": "assistant", "content": "Stay alert and cautious, and check if anyone is hurt or if there is any damage to property."},
    {"role": "user", "content": [
        {"type": "audio", "audio_url": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2-Audio/audio/1272-128104-0000.flac"},
        {"type": "text", "text": "What does the person say?"},
    ]},
]

text = processor.apply_chat_template(conversation, add_generation_prompt=True, tokenize=False)

# Collect the audio referenced in the conversation and resample it to the rate the processor expects.
audios = []
for message in conversation:
    if isinstance(message["content"], list):
        for ele in message["content"]:
            if ele["type"] == "audio":
                audios.append(
                    librosa.load(
                        BytesIO(urlopen(ele["audio_url"]).read()),
                        sr=processor.feature_extractor.sampling_rate)[0]
                )

inputs = processor(text=text, audios=audios, return_tensors="pt", padding=True)
inputs.input_ids = inputs.input_ids.to("cuda")

generate_ids = model.generate(**inputs, max_length=256)
generate_ids = generate_ids[:, inputs.input_ids.size(1):]

response = processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
print(response)
```
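The conversation above streams its audio from remote URLs. If your clips live on disk instead, a small variation (sketched below with a hypothetical `meeting.wav` path) is to load them with `librosa` at the sampling rate the processor expects and pass them in the same way.

```python
import librosa

# Hypothetical local file; replace with your own recording.
audio, _ = librosa.load("meeting.wav", sr=processor.feature_extractor.sampling_rate)

# Reuse the prompt built with apply_chat_template and feed the local audio instead.
inputs = processor(text=text, audios=[audio], return_tensors="pt", padding=True)
```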
## Useful Resources

If you want to learn more about this concept, here are some useful links:

### Papers

- [SpeechGPT](https://huggingface.co/papers/2305.11000) — multimodal dialogue with speech and text.
- [Voxtral](https://huggingface.co/papers/2507.13264) — a state-of-the-art audio-text model.
- [Qwen2-Audio-Instruct](https://huggingface.co/papers/2407.10759) — large-scale audio-language modeling for instruction following.
- [AudioPaLM](https://huggingface.co/papers/2306.12925) — scaling audio-language models with PaLM.

### Models, Code & Demos

- [Qwen2-Audio-Instruct](https://github.com/QwenLM/Qwen2-Audio) — open-source implementation with demos.
- [SpeechGPT](https://github.com/0nutation/SpeechGPT) — an end-to-end framework for audio conversational models built on top of large language models.
- [AudioPaLM](https://google-research.github.io/seanet/audiopalm/examples/) — resources and code for AudioPaLM.
- [Audio Flamingo](https://huggingface.co/nvidia/audio-flamingo-3) — unifies speech, sound, and music understanding with long-context reasoning.
- [Ultravox](https://github.com/fixie-ai/ultravox) — a fast multimodal large language model designed for real-time voice interactions.
- [Ichigo](https://github.com/menloresearch/ichigo) — an audio-text-to-text model for audio-related tasks.

### Datasets

- [nvidia/AF-Think](https://huggingface.co/datasets/nvidia/AF-Think)
- [nvidia/AudioSkills](https://huggingface.co/datasets/nvidia/AudioSkills)

### Tools & Extras

- [FastRTC](https://huggingface.co/fastrtc) — turn any Python function into a real-time audio/video stream.
- [PhiCookBook](https://github.com/microsoft/PhiCookBook) — Microsoft’s open-source guide to small language models.
- [Qwen2-Audio-Instruct](https://qwenlm.github.io/blog/qwen2-audio/) — blog post explaining usage and demos of Qwen2-Audio-Instruct.
packages/tasks/src/tasks/audio-text-to-text/data.ts: 70 additions & 0 deletions

@@ -0,0 +1,70 @@
import type { TaskDataCustom } from "../index.js";

const taskData: TaskDataCustom = {
	datasets: [
		{
			description: "A dataset containing audio conversations with question–answer pairs.",
			id: "nvidia/AF-Think",
		},
		{
			description: "A more advanced and comprehensive dataset that also contains characteristics of the audio.",
			id: "tsinghua-ee/QualiSpeech",
		},
	],
	demo: {
		inputs: [
			{
				filename: "audio.wav",
				type: "audio",
			},
			{
				label: "Text Prompt",
				content: "What is the gender of the speaker?",
				type: "text",
			},
		],
		outputs: [
			{
				label: "Generated Text",
				content: "The gender of the speaker is female.",
				type: "text",
			},
		],
	},
	metrics: [],
	models: [
		{
			description: "A lightweight model that can take both audio and text as input and generate responses.",
			id: "fixie-ai/ultravox-v0_5-llama-3_2-1b",
		},
		{
			description: "A multimodal model that supports voice chat and audio analysis.",
			id: "Qwen/Qwen2-Audio-7B-Instruct",
		},
		{
			description: "A model for audio understanding, speech translation, and transcription.",
			id: "mistralai/Voxtral-Small-24B-2507",
		},
		{
			description: "A new model capable of audio question answering and reasoning.",
			id: "nvidia/audio-flamingo-3",
		},
	],
	spaces: [
		{
			description: "A space that takes both audio and text as input and generates answers.",
			id: "iamomtiwari/ATTT",
		},
		{
			description: "A web application that demonstrates chatting with the Qwen2-Audio model.",
			id: "freddyaboulton/talk-to-qwen-webrtc",
		},
	],
	summary:
		"Audio-text-to-text models take both an audio clip and a text prompt as input, and generate natural language text as output. These models can answer questions about spoken content, summarize meetings, analyze music, or interpret speech beyond simple transcription. They are useful for applications that combine speech understanding with reasoning or conversation.",
	widgetModels: [],
	youtubeId: "",
};

export default taskData;
