
EgoTraj-Bench: Towards Robust Trajectory Prediction under Ego-view Noisy Observations

Paper Code

Overview

EgoTraj-Bench is the first real-world benchmark for pedestrian trajectory prediction under ego-centric noisy observations. Built upon the TBD dataset, it pairs noisy first-person-view (FPV) derived trajectories with clean bird's-eye-view (BEV) ground truth, enabling robust evaluation of trajectory prediction models under deployment-realistic conditions.

Data Structure

This repository provides data at three levels:

Level  Folder            Description                                                  Size
L2     L2-processed/     Ready-to-use .npz files for training/evaluation              ~44 MB
L1     L1-intermediate/  Frame-level FPV detections + BEV GT CSVs + matching results  Coming soon
L0     L0-raw/           Link to TBD raw dataset (~170 GB)                            See README

L2: Processed Data (Start Here)

L2-processed/
├── EgoTraj-TBD/                       # Real-world ego-centric noise (17 recording sessions)
│   ├── egotraj_tbd_train.npz
│   ├── egotraj_tbd_val.npz
│   └── egotraj_tbd_test.npz
└── T2FPV-ETH/                         # Simulated ego-centric noise (5 folds)
    └── t2fpv_{fold}_{split}.npz       # folds: eth, hotel, univ, zara1, zara2

Each .npz file contains:

{
    "all_obs":       np.array [N, 8, 7],   # Noisy FPV history (8 past frames)
    "all_pred":      np.array [N, 20, 7],  # Clean BEV trajectory (8 past + 12 future)
    "num_peds":      np.array [S],         # Number of agents per scene
    "seq_start_end": np.array [S, 2],      # Scene boundaries in agent dimension
}
# 7 features = [x, y, orientation, img_x, img_y, valid_mask, agent_id]
# x, y: world coordinates (meters)
# valid_mask: 1 = FPV observation valid, 0 = occluded/missing

Quick Start

import numpy as np

data = np.load("L2-processed/EgoTraj-TBD/egotraj_tbd_test.npz")
noisy_history = data["all_obs"][:, :, :2]    # [N, 8, 2] FPV noisy xy
clean_past    = data["all_pred"][:, :8, :2]  # [N, 8, 2] BEV clean past xy
clean_future  = data["all_pred"][:, 8:, :2]  # [N, 12, 2] BEV clean future xy
valid_mask    = data["all_obs"][:, :, 5]     # [N, 8] visibility mask
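
The seq_start_end array groups agents into scenes. A minimal sketch of per-scene iteration, using synthetic stand-in arrays with the shapes documented above (the real values come from the .npz files):

```python
import numpy as np

# Synthetic stand-ins with the documented shapes; replace with the .npz arrays.
rng = np.random.default_rng(0)
N = 5                                               # total agents
all_obs = rng.normal(size=(N, 8, 7))
all_obs[:, :, 5] = rng.integers(0, 2, size=(N, 8))  # valid_mask column
all_obs[:, 0, 5] = 1                                # ensure one valid frame per track
seq_start_end = np.array([[0, 3], [3, 5]])          # two scenes of 3 and 2 agents

for start, end in seq_start_end:
    scene_obs = all_obs[start:end]                  # [num_peds, 8, 7]
    mask = scene_obs[:, :, 5].astype(bool)          # [num_peds, 8]
    # Mean xy over valid FPV frames only (occluded frames ignored)
    xy = np.where(mask[..., None], scene_obs[:, :, :2], np.nan)
    mean_xy = np.nanmean(xy, axis=1)                # [num_peds, 2]
```

The same pattern applies when computing per-scene metrics, since scene boundaries index the agent dimension of all_obs and all_pred.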

Dataset Details

EgoTraj-TBD

  • Source: TBD dataset (CMU campus, 17 recording sessions)
  • Perception: YOLOv8 detection + BotSort tracking on real ego-view video
  • Matching: Hungarian algorithm with weighted MSE (location + velocity + acceleration)
  • Sampling: 2.5 fps (stride 12 at 30fps source), 8-frame obs (3.2s) + 12-frame pred (4.8s)
  • Statistics: 36,947 sequences, FPV noisy rate 0.37, history MSE 0.66m
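
The benchmark's matching cost is a weighted MSE over location, velocity, and acceleration; as a toy illustration of the Hungarian step only, the sketch below uses a location-only MSE cost with scipy's linear_sum_assignment (all arrays here are synthetic, not the benchmark's actual matcher):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Toy example: 3 FPV tracks vs 3 BEV tracks, 8 frames of xy each.
rng = np.random.default_rng(1)
bev = rng.normal(size=(3, 8, 2))
fpv = bev[[2, 0, 1]] + 0.05 * rng.normal(size=(3, 8, 2))  # permuted + noise

# Pairwise cost: cost[i, j] = mean squared distance between fpv_i and bev_j
cost = ((fpv[:, None] - bev[None, :]) ** 2).mean(axis=(2, 3))
rows, cols = linear_sum_assignment(cost)  # optimal one-to-one assignment
```

With low noise the assignment recovers the permutation [2, 0, 1] used to build fpv; the real pipeline additionally weights velocity and acceleration differences in the cost.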

T2FPV-ETH

  • Source: T2FPV simulated ego-centric noise on ETH-UCY
  • Folds: eth, hotel, univ, zara1, zara2 (leave-one-out evaluation)
  • Version: Balanced variant (original_bal)
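
File names follow the t2fpv_{fold}_{split}.npz pattern shown above; a small sketch enumerating them, assuming the split names mirror EgoTraj-TBD's train/val/test:

```python
from pathlib import Path

# Split names train/val/test are assumed to mirror EgoTraj-TBD.
root = Path("L2-processed/T2FPV-ETH")
folds = ["eth", "hotel", "univ", "zara1", "zara2"]
splits = ["train", "val", "test"]
files = {fold: {s: root / f"t2fpv_{fold}_{s}.npz" for s in splits}
         for fold in folds}
```

For leave-one-out evaluation, each fold's test file holds the held-out scene while the corresponding train/val files cover the remaining four.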

Citation

@inproceedings{liu2025egotraj,
  title={EgoTraj-Bench: Towards Robust Trajectory Prediction under Ego-view Noisy Observations},
  author={Liu, Jiayi and Zhou, Jiaming and Ye, Ke and Lin, Kun-Yu and Wang, Allan and Liang, Junwei},
  booktitle={IEEE International Conference on Robotics and Automation (ICRA)},
  year={2025}
}

License

This dataset is released under CC BY-NC 4.0. The underlying TBD raw data is subject to its own license; please refer to the TBD dataset page.
