Fintabnet-Logical

Dataset Summary

Fintabnet-Logical is a derivative of the original FinTabNet dataset, specifically re-processed to create high-quality ground truth for logical table structure recognition (TSR).

While the original dataset provides cell content and HTML structure, this version parses that HTML to generate precise logical coordinates for every cell, correctly handling complex tables with rowspan and colspan. Furthermore, it processes the source PDFs to group text into line-level cells, assigning each line the logical coordinates of its parent cell.

The result is a clean, ready-to-use dataset for training models that predict not just the content of a table, but its fundamental logical grid structure. All table images are provided as high-resolution (144 DPI) crops for improved visual quality.

Supported Tasks

  • Table Structure Recognition: This dataset is primarily designed for training and evaluating models that recognize the logical row and column structure of tables, including row and column spans. The line-level cells with logical coordinates are ideal for this task.

Dataset Structure

The dataset is organized into train, val, and test splits, mirroring the original FinTabNet. Each instance consists of a table image and a corresponding JSON annotation file.

Data Instances

A typical annotation file (.json) has the following structure:

{
    "fintabnet_annotations": { "... original fintabnet data ..." },
    "fintabnet_cells": [
        {
            "bbox": [187.0, 4.0, 261.0, 14.0],
            "tokens": ["...", "Practitioners", "..."],
            "logical_coords": [0, 0, 1, 5]
        }
    ],
    "word_cells": [
        {
            "text": "Practitioners",
            "bbox": [187.0, 4.0, 261.0, 14.0],
            "logical_coords": [0, 0, 1, 5]
        }
    ],
    "line_cells": [
        {
            "text": "General Practitioners",
            "bbox": [187.0, 4.0, 261.0, 14.0],
            "logical_coords": [0, 0, 1, 5]
        },
        {
            "text": "1. Antipsychotic drug treatment",
            "bbox": [4.0, 58.0, 133.0, 86.0],
            "logical_coords": [2, 2, 0, 0]
        }
    ]
}

Data Fields

The most important key for training is line_cells:

  • line_cells: A list of dictionaries, where each entry represents a single line of text within a table cell.
    • text (str): The text content of the line.
    • bbox (list[float]): The bounding box of the text line, in [x_min, y_min, x_max, y_max] format relative to the cropped table image.
    • logical_coords (list[int]): The logical coordinates of the parent cell in [row_start, row_end, col_start, col_end] format. An unspanned cell at the top-left would be [0, 0, 0, 0]. A cell spanning the first two rows in the first column would be [0, 1, 0, 0].
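Because row_end and col_end are inclusive, the overall grid size of a table can be recovered directly from its line_cells. The sketch below reads one annotation file and infers that size; it is illustrative only, and summarize_line_cells is a hypothetical helper name, not part of the dataset's tooling:

```python
import json

def summarize_line_cells(path):
    """Read one annotation JSON and infer the table's logical grid size."""
    with open(path) as f:
        ann = json.load(f)

    n_rows = n_cols = 0
    for cell in ann["line_cells"]:
        row_start, row_end, col_start, col_end = cell["logical_coords"]
        # row_end / col_end are inclusive, so the grid extends one past them.
        n_rows = max(n_rows, row_end + 1)
        n_cols = max(n_cols, col_end + 1)
    return n_rows, n_cols
```

Applied to the example instance above, the two line_cells entries ([0, 0, 1, 5] and [2, 2, 0, 0]) imply a grid of at least 3 rows and 6 columns.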

Data Splits

The dataset retains the original splits from FinTabNet:

Split        Number of Tables
train        82,422
validation   9,539
test         9,599
Total        101,560

Dataset Creation

Curation Rationale

Many table recognition datasets provide only bounding boxes for cells, without the explicit logical row/column indices needed to understand the grid structure. This dataset was created to fill that gap. By parsing the HTML structure provided by FinTabNet, we generate a reliable ground truth for logical coordinates, which is invaluable for training and evaluating modern Table Structure Recognition models.

Source Data

This dataset is derived from the FinTabNet dataset, which consists of tables from the annual financial reports of S&P 500 companies.

Annotations

The annotation process is fully automated by a script that performs the following steps for each table:

  1. Parse HTML: The structure tokens from the original annotations are parsed to build a virtual grid of the table.
  2. Calculate Logical Coordinates: By traversing the virtual grid, the script calculates the [row_start, row_end, col_start, col_end] for every cell, accurately accounting for rowspan and colspan attributes.
  3. Extract Words: The source PDF is processed to extract all words and their bounding boxes within the table region.
  4. Group into Lines: Words are assigned to their parent cells based on spatial overlap. Within each cell, the words are grouped into lines based on reading order.
  5. Assign Coordinates to Lines: Each generated line is assigned the logical coordinates of its parent cell, creating the final line_cells ground truth.
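The span bookkeeping in step 2 can be sketched with the standard grid-filling algorithm below. This is a minimal re-implementation under stated assumptions, not the dataset's actual script, and logical_coords_from_spans is a hypothetical name:

```python
def logical_coords_from_spans(rows):
    """Compute [row_start, row_end, col_start, col_end] for every cell.

    rows: list of table rows; each row is a list of (rowspan, colspan)
    pairs in the order the cells appear in the HTML structure tokens.
    """
    occupied = set()  # (row, col) grid slots already claimed by earlier spans
    coords = []
    for r, row in enumerate(rows):
        c = 0
        for rowspan, colspan in row:
            # Skip over slots filled by a rowspan from a previous row.
            while (r, c) in occupied:
                c += 1
            coords.append([r, r + rowspan - 1, c, c + colspan - 1])
            for dr in range(rowspan):
                for dc in range(colspan):
                    occupied.add((r + dr, c + dc))
            c += colspan
    return coords
```

For example, a two-row table whose first cell has rowspan=2 yields coordinates [0, 1, 0, 0] for that cell, and the lone cell in the second row is correctly pushed to column 1.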

Citation

If you use this dataset, please cite the original FinTabNet paper:

@inproceedings{zheng2021global,
  title={Global Table Extractor (GTE): A framework for joint table identification and cell structure recognition using visual context},
  author={Zheng, Xinyi and Burdick, Douglas and Popa, Lucian and Zhong, Xu and Wang, Nancy Xin Ru},
  booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  year={2021}
}