WearVox: An Egocentric Multichannel Voice Assistant Benchmark for Wearables
Paper: WearVox: An Egocentric Multichannel Voice Assistant Benchmark for Wearables
Authors: Zhaojiang Lin*, Yong Xu*, Kai Sun*, Jing Zheng, Yin Huang, Surya Appini, Krish Narang, Renjie Tao, Ishan Kapil Jain, Siddhant Arora, Ruizhi Li, Yiteng Huang, Kaushik Patnaik, Wenfang Xu, Suwon Shon, Yue Liu, Ahmed Aly, Anuj Kumar, Florian Metze, Luna Dong
Affiliations: Meta Reality Labs, Meta
Dataset Summary
WearVox is the first benchmark specifically designed to evaluate voice assistants in realistic wearable scenarios using devices like AI glasses.
- 3,842 multi-channel, egocentric audio recordings collected via AI glasses
- 5 diverse task types:
- Search-Grounded QA (547)
- Closed-Book QA (588)
- Side-Talk Rejection (1,082, of which 500 queries are duplicated from the tool-calling set)
- Tool Calling (1,125)
- Speech Translation (1,000)
Each recording is accompanied by rich audio metadata, enabling nuanced analysis of model performance under real-world constraints. Benchmarking results show that leading real-time Speech LLMs achieve accuracies ranging from 29% to 59%, with substantial performance degradation on noisy outdoor audio.
Dataset Structure
Each example in the dataset contains:
- `audio_query`: the beamformed single-channel egocentric audio query
- `audio_query_mc`: the multi-channel egocentric audio query
- `gt_transcript`: the ground-truth query transcript
- `ground_truth`: the ground-truth answer
- `task`: one of the five task types listed above
- `text_prompt`: the task instruction for the LLM
- `audio_metadata`: metadata describing the audio recording
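These fields can be inspected with the Hugging Face `datasets` library. The sketch below is illustrative only: the repo ID, split name, and task label strings are assumptions, not identifiers confirmed by this card.

```python
from datasets import load_dataset

# Hypothetical repo ID and split -- substitute the actual Hugging Face
# path and split names published with WearVox.
ds = load_dataset("meta-reality-labs/WearVox", split="test")

example = ds[0]
print(example["task"])           # one of the five task types
print(example["gt_transcript"])  # ground-truth query transcript
print(example["ground_truth"])   # ground-truth answer
print(example["text_prompt"])    # task instruction passed to the LLM
print(example["audio_metadata"]) # per-recording audio metadata

# `audio_query` is the beamformed single-channel clip, decoded to an
# array; `audio_query_mc` holds the multi-channel capture.
audio = example["audio_query"]
print(audio["sampling_rate"], audio["array"].shape)

# Select a single task, e.g. speech translation (the exact label
# string is an assumption -- check ds.unique("task") first).
translation = ds.filter(lambda ex: ex["task"] == "speech_translation")
```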