---
license: fair-noncommercial-research-license
pretty_name: Ego-1K
size_categories:
- 100K<n<1M
---

## Repository Structure

```text
├── metadata/
│   ├── train-<scene_id>.parquet      # Per-scene metadata index
│   └── test-<scene_id>.parquet
└── shards/
    ├── train/
    │   └── <scene_id>/
    │       ├── <scene_id>-0000.tar   # WebDataset tar shards (~1.5 GB each)
    │       ├── <scene_id>-0001.tar
    │       └── ...
    └── test/
        └── <scene_id>/
            └── <scene_id>-0000.tar
```

### Tar Shard Contents

Each tar sample represents one frame across all 12 cameras:

```text
<scene_id>/<frame_id>.200-1.png       # Raw PNG bytes (1280x1280)
<scene_id>/<frame_id>.200-2.png
...
<scene_id>/<frame_id>.200-12.png
<scene_id>/<frame_id>.metadata.json   # Pose, rig calibration, scene info
```

The `metadata.json` per sample contains:

| Field | Type | Description |
|-------|------|-------------|
| `scene_id` | string | Recording identifier |
| `frame_id` | int | Frame index (0-indexed) |
| `timestamp_ns` | int | Frame timestamp in nanoseconds |
| `pose` | list | 4x4 device-to-world transform (key absent if unavailable) |
| `rig_calibration` | object | Per-camera intrinsics (`K`) and extrinsics (`E`) |
| `source` | string | Capture campaign: `OVD_M1` (lab, 513 recordings), `OVD_M2` (apartment, 414), `DD4` (29) |
| `lux_bins` | string | Lighting level: `51-75`, `76-100`, `101-200`, `201-400`, `401-1000`, `1001+` |
| `tags` | list | Scene diversity tags |

## Parquet Schema

Each row represents a single frame (one timestamp across all 12 cameras):

| Column | Type | Description |
|--------|------|-------------|
| `scene_id` | string | Recording identifier |
| `frame_id` | int32 | Frame index within the recording (0-indexed; number of frames varies per scene, range 404-583) |
| `timestamp_ns` | int64 | Frame timestamp in nanoseconds |
| `source` | string | Capture campaign: `OVD_M1` (lab), `OVD_M2` (apartment), `DD4` |
| `lux_bins` | string | Lighting level: `51-75`, `76-100`, `101-200`, `201-400`, `401-1000`, `1001+` |
| `tags` | string | JSON list of scene diversity tags (85 unique tags covering garments, furnishings, lighting, pose, objects) |
| `shard_name` | string | Relative path to the tar shard containing this frame's images (e.g., `shards/train/<scene_id>/<scene_id>-0002.tar`) |
| `pose` | string | JSON: 4x4 device-to-world transform matrix for this frame (null if pose unavailable) |
| `rig_calibration` | string | JSON: per-camera intrinsics (`K`: 3x3) and extrinsics (`E`: 4x4), static per scene (repeated for each frame for convenience) |

### Calibration Details

The `rig_calibration` column contains a JSON object keyed by camera name (`200-1` through `200-12`), each with:

- **`K`**: 3x3 intrinsic matrix (rectified pinhole projection, 120 deg horizontal FOV)
- **`E`**: 4x4 extrinsic matrix (camera-to-device transform)

The `pose` column contains the 4x4 device-to-world transform, which changes per frame as the headset moves.

## Usage

`load_dataset` returns frame-level **metadata only** (poses, calibration, scene info). Images are stored in WebDataset tar shards — use the `webdataset` library to stream them. See [`quickstart.ipynb`](quickstart.ipynb) for a full working example.

### WebDataset (Recommended for Training)

Stream tar shards for high-throughput sequential access — no per-file API calls. See the notebook for the full `decode_sample` implementation. To wrap it in a PyTorch DataLoader:

```python
import torch
import webdataset as wds

# `shard_urls` is a list of tar shard URLs; `decode_sample` is the
# per-sample decoder defined in quickstart.ipynb.
dataset = wds.WebDataset(
    shard_urls, nodesplitter=wds.split_by_node, shardshuffle=True
).map(decode_sample)
loader = torch.utils.data.DataLoader(dataset, batch_size=4, num_workers=4)

for batch in loader:
    images = batch["images"]  # (B, N_cams, 3, 1280, 1280)
    break
```

### Parquet Metadata (Random Access)

The Parquet files contain frame-level metadata only (poses, calibration, scene info) — images are stored in the tar shards. Use the `shard_name` column to locate which tar file contains a given frame's images.
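Since `pose` and `rig_calibration` are stored as JSON strings, they need to be decoded before use. A minimal sketch of decoding them into NumPy arrays and composing a camera-to-world transform, using a single synthetic row in place of a real metadata file (in practice you would `pd.read_parquet` one of the metadata files; the identity matrices here are placeholders, not real calibration values):

```python
import json

import numpy as np
import pandas as pd

# Synthetic one-row frame index mimicking the Parquet schema.
# Real usage: df = pd.read_parquet(<path to a metadata Parquet file>).
df = pd.DataFrame([{
    "scene_id": "scene_000",  # placeholder identifier
    "frame_id": 0,
    "shard_name": "shards/train/scene_000/scene_000-0000.tar",
    "pose": json.dumps(np.eye(4).tolist()),
    "rig_calibration": json.dumps(
        {"200-1": {"K": np.eye(3).tolist(), "E": np.eye(4).tolist()}}
    ),
}])

row = df.iloc[0]
pose = np.asarray(json.loads(row["pose"]))   # (4, 4) device-to-world
calib = json.loads(row["rig_calibration"])   # dict keyed by camera name
K = np.asarray(calib["200-1"]["K"])          # (3, 3) intrinsics
E = np.asarray(calib["200-1"]["E"])          # (4, 4) camera-to-device

# Camera-to-world for camera 200-1 at this frame: device-to-world
# composed with camera-to-device.
cam_to_world = pose @ E
```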
For example, to resolve a shard's download URL on the Hub:

```python
shard_url = f"https://huggingface.co/datasets/facebook/ego-1k/resolve/main/{example['shard_name']}"
```

## Citation

```bibtex
@inproceedings{ego1k2026,
  title={{Ego-1K}: A Large-Scale Multiview Video Dataset for Egocentric Vision},
  author={Jae Yong Lee and Daniel Scharstein and Akash Bapat and Hao Hu and Andrew Fu and Haoru Zhao and Paul Sammut and Xiang Li and Stephen Jeapes and Anik Gupta and Lior David and Saketh Madhuvarasu and Jay Girish Joshi and Jason Wither},
  booktitle={CVPR},
  year={2026}
}
```