---
license: cc-by-4.0
task_categories:
- video-text-to-text
- visual-question-answering
language:
- en
size_categories:
- n<1K
configs:
- config_name: default
  data_files:
  - split: validation
    path: validation_dataset.parquet
---
# QuantiPhy (Validation Set)

## Dataset Summary
QuantiPhy is a benchmark for evaluating whether vision–language models (VLMs) can perform quantitative physical inference from visual evidence, rather than producing plausible but ungrounded numerical guesses.
This repository contains the official validation set of QuantiPhy, released to support model development, ablation studies, and preliminary evaluation.
The validation set represents approximately 4% of the full benchmark and consists of 159 video–question–answer (QA) pairs.
Each instance requires a model to output a single continuous numerical value (e.g., object size, velocity, or acceleration) in real-world units, given a short video and a natural-language question.
## Intended Use
This validation release is intended for:
- model debugging and prompt development,
- hyperparameter tuning,
- ablation and error analysis,
- sanity checks prior to full benchmark evaluation.
It is not intended to be used as a substitute for the full QuantiPhy benchmark.
The complete dataset, including training and test splits, will be released separately.
## Supported Tasks
- Video-based numerical regression
- Quantitative visual reasoning
- Vision–language model evaluation
Tasks cover three core kinematic properties:
- Size
- Velocity
- Acceleration
All questions are open-ended and require predicting a real-valued scalar.
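The card does not specify the official scoring rule, so the snippet below is only a minimal sketch of one common way to score real-valued predictions against the ground-truth answers: mean absolute relative error. The function name and the choice of metric are assumptions for illustration, not part of the benchmark definition.

```python
from typing import Sequence

def mean_absolute_relative_error(preds: Sequence[float],
                                 answers: Sequence[float],
                                 eps: float = 1e-8) -> float:
    """Illustrative scoring of scalar predictions against ground truth.

    Relative error is a common choice for quantities spanning many
    scales; it is not necessarily QuantiPhy's official protocol.
    """
    assert len(preds) == len(answers) and len(answers) > 0
    errors = [abs(p - a) / (abs(a) + eps) for p, a in zip(preds, answers)]
    return sum(errors) / len(errors)

# Example: predictions and answers in the same real-world units.
print(mean_absolute_relative_error([1.9, 0.45], [2.0, 0.5]))  # ≈ 0.075
```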
## Dataset Structure

Each instance is represented as a structured video–text record with the following fields:

| Field | Description |
|---|---|
| `video_id` | Unique identifier for the video |
| `video_source` | Data source (simulation, lab, or internet) |
| `video_type` | Four-character code encoding the task configuration |
| `fps` | Frame rate of the video |
| `inference_type` | Static or dynamic prior/target configuration |
| `question` | Natural-language question with explicit physical units |
| `prior` | Physical prior provided in world units (e.g., object size, velocity, or acceleration) |
| `depth_info` | Depth/distance information for 3D configurations (if applicable) |
| `answer` | Ground-truth numerical value (float, real-world units) |
Videos are short (typically 2–3 seconds) and recorded with a static camera to ensure well-defined kinematic inference.
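For quick inspection, the validation split can be read directly from the parquet file named in the config above. The following is a minimal sketch using pandas; it assumes the file is available locally (for example, after downloading the repository), and the column accesses simply mirror the field table.

```python
import pandas as pd

# Read the validation split declared in the dataset config
# (assumes validation_dataset.parquet has been downloaded locally).
df = pd.read_parquet("validation_dataset.parquet")

print(len(df))                 # expected: 159 QA pairs
print(df.columns.tolist())     # field names as listed in the table above

# Inspect a single record.
row = df.iloc[0]
print(row["video_id"], row["video_type"], row["fps"])
print(row["question"])
print("prior:", row["prior"], "| ground truth:", row["answer"])
```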
## Task Design Overview
Each instance provides the model with:
- a short video depicting object motion, and
- one physical prior in world units (object size, velocity at a given timestamp, or acceleration at a given timestamp).
The model is then asked to infer a target kinematic quantity—possibly for a different object—expressed in real-world units.
Tasks vary along four axes:
- Physical prior: Size (S), Velocity (V), Acceleration (A)
- Dimensionality: 2D (planar motion) or 3D (with depth variation)
- Object setting: Single-object (S) or multi-object (M)
- Background complexity: Plain (X), Simple (S), Complex (C)
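Because `video_type` is only described as a four-character code covering these axes, the sketch below shows one hypothetical way to decode it. The character order and the letter-to-axis mapping are assumptions for illustration and should be checked against the actual data.

```python
# Hypothetical decoder for the four-character `video_type` code.
# ASSUMPTION: the characters encode prior, dimensionality, object
# setting, and background in that order; the card does not document
# the exact encoding.
PRIOR = {"S": "size", "V": "velocity", "A": "acceleration"}
DIMENSION = {"2": "2D (planar motion)", "3": "3D (with depth variation)"}
OBJECTS = {"S": "single-object", "M": "multi-object"}
BACKGROUND = {"X": "plain", "S": "simple", "C": "complex"}

def decode_video_type(code: str) -> dict:
    """Map a four-character task code to its configuration axes."""
    if len(code) != 4:
        raise ValueError("video_type is described as a four-character code")
    return {
        "prior": PRIOR[code[0]],
        "dimensionality": DIMENSION[code[1]],
        "objects": OBJECTS[code[2]],
        "background": BACKGROUND[code[3]],
    }

print(decode_video_type("V3MC"))
```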
## Validation Set Statistics
- 159 QA pairs
- Covers all three physical priors (S / V / A)
- Includes both 2D and 3D configurations
- Videos sourced from:
  - Blender simulations,
  - laboratory captures,
  - curated internet videos
This subset is designed to be representative but non-exhaustive relative to the full benchmark.
## Data Sources and Quality Control
- Simulation: Blender-rendered scenes with precise physical ground truth.
- Laboratory capture: Real-world recordings using calibrated depth and multi-view setups.
- Internet / author-recorded videos: Carefully curated monocular videos meeting strict physical constraints.
All videos undergo manual review to remove:
- excessive motion blur,
- severe occlusion,
- untrackable motion,
- personally identifiable information (PII).
## License
The annotations and metadata in this repository are released under the
Creative Commons Attribution 4.0 (CC BY 4.0) license.
Videos originate from simulated environments, laboratory recordings, and publicly available sources.
Each video remains subject to its original license and terms of use.
This release is intended for research and evaluation purposes.
## Authors
Puyin Li*, Tiange Xiang*, Ella Mao*,
Shirley Wei, Xinye Chen, Adnan Masood,
Li Fei-Fei†, Ehsan Adeli†
\* Equal contribution.
## Citation
If you use this validation set in your work, please cite:

    @article{li2025quantiphy,
      title   = {QuantiPhy: A Quantitative Benchmark Evaluating Physical Reasoning Abilities of Vision-Language Models},
      author  = {Li, Puyin and Xiang, Tiange and Mao, Ella and Wei, Shirley and Chen, Xinye and Masood, Adnan and Li, Fei-Fei and Adeli, Ehsan},
      journal = {arXiv preprint arXiv:2512.19526},
      year    = {2025}
    }