Dataset Card for AutoPET-RG-Lym

Data Summary

To promote community benchmarking of PET/CT report generation (PETRG), we constructed AutoPET-RG-Lym from the lymphoma cases in the public AutoPET dataset, which was collected at University Hospital Tübingen and University Hospital of the LMU Munich and contains patients with lymphoma, malignant melanoma, and lung cancer.

We carefully selected 135 lymphoma cases from AutoPET and commissioned two senior nuclear medicine physicians from top-tier Chinese hospitals to independently compose structured reports. These reports underwent cross-review and iterative refinement to ensure high clinical fidelity. The resulting dataset provides an open validation benchmark for PETRG research.

Data Preprocessing

Starting from de-identified DICOM data, we extracted patient metadata (e.g., weight, radiotracer dose, injection time) and converted the PET and CT volumes to NIfTI format. All volumes were reoriented to RAS and resampled to a uniform spacing of $1.5 \times 1.5 \times 3$ mm, with CT matched to the PET dimensions. CT voxel intensities were converted to Hounsfield units (HU) and clipped to $[-1000, +1000]$. Standardized uptake value (SUV) normalization was performed using the extracted metadata. We employed TotalSegmentator to remove the scanning bed from the CT backgrounds. Given the presence of both “head-to-midthigh” and “head-to-toe” scan protocols, and because reports seldom describe regions below the mid-thigh, we uniformly cropped all volumes at the upper thigh. Additional preprocessing details, including voxel intensity standardization, scanning bed removal, body part cropping, and report text cleaning, can be found in our paper: Vision-Language Models for Automated 3D PET/CT Report Generation.
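
For illustration, here is a minimal sketch of two of the steps above, HU clipping and body-weight SUV normalization, using nibabel. This is not the authors' exact pipeline: the function names and the `weight_kg`/`injected_dose_bq` inputs are hypothetical stand-ins for the metadata extracted from the DICOM headers, and decay correction is omitted.

```python
# Sketch only: illustrates HU clipping and SUV_bw normalization as described
# in the preprocessing summary above; not the authors' released code.
import nibabel as nib
import numpy as np


def clip_ct_hu(ct_path: str, out_path: str) -> None:
    """Clip CT voxel intensities to [-1000, 1000] HU and save the result."""
    img = nib.load(ct_path)
    data = np.clip(img.get_fdata(), -1000.0, 1000.0)
    nib.save(nib.Nifti1Image(data, img.affine, img.header), out_path)


def pet_to_suv(pet_path: str, weight_kg: float, injected_dose_bq: float) -> np.ndarray:
    """SUV_bw = activity [Bq/mL] * body weight [g] / injected dose [Bq].

    Assumes the PET volume stores activity concentration in Bq/mL and a
    tissue density of 1 g/mL; decay correction is omitted for brevity.
    """
    activity = nib.load(pet_path).get_fdata()
    return activity * (weight_kg * 1000.0) / injected_dose_bq
```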

[Figure: overview of the data preprocessing pipeline]

Dataset Structure

  • images/: Contains the preprocessed PET and CT images for all 135 patients.
    • Naming Convention: {patient_id}_{modality_suffix}.nii.gz
    • patient_id: The unique identifier for the patient.
    • modality_suffix: Indicates the imaging modality, where 0000 denotes CT and 0001 denotes PET.
    • Example: AP_0b57b247b6-0_0000.nii.gz and AP_0b57b247b6-0_0001.nii.gz correspond to the CT and PET images of patient AP_0b57b247b6-0, respectively.
  • reports/: Stores the clinical report files for all patients. The reports are written in Chinese and are named using the format {patient_id}.json (see the loading sketch after this list).
  • labels/: Contains annotations extracted from the report texts. These include 5 PET uptake labels and 8 CT structural density labels across 24 whole-body anatomical regions. For comprehensive details regarding the extraction and manual verification processes, please refer to our paper: Vision-Language Models for Automated 3D PET/CT Report Generation.
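
A minimal loading sketch, assuming the directory layout and naming convention above; the patient ID comes from the example filenames, and nothing beyond valid JSON is assumed about the report structure:

```python
import json

import nibabel as nib

patient_id = "AP_0b57b247b6-0"  # example patient from the naming convention above

ct = nib.load(f"images/{patient_id}_0000.nii.gz")   # 0000 = CT
pet = nib.load(f"images/{patient_id}_0001.nii.gz")  # 0001 = PET

# Reports are written in Chinese, one JSON file per patient.
with open(f"reports/{patient_id}.json", encoding="utf-8") as f:
    report = json.load(f)

# CT was resampled to match the PET dimensions during preprocessing.
print(ct.shape, pet.shape)
```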

Citation Information

@article{jiao2025vision,
  title={Vision-Language Models for Automated {3D} {PET/CT} Report Generation},
  author={Jiao, Wenpei and Shang, Kun and Li, Hui and Yan, Ke and Zhang, Jiajin and Yang, Guangjie and Guo, Lijuan and Wan, Yan and Yang, Xing and Jin, Dakai and others},
  journal={arXiv preprint arXiv:2511.20145},
  year={2025}
}