VLegal — Vietnamese Legal Benchmark (Evaluation Only)

A reformatted version of VLegal-Bench for per-task evaluation of Vietnamese legal LLMs.

This dataset is for EVALUATION ONLY. Do NOT use it for training: VLegal-Bench is a benchmark test set, and training on this data contaminates benchmark scores.

Usage

```python
from datasets import load_dataset

# Load the full benchmark (all 22 tasks)
benchmark = load_dataset("datht/vlegal", split="test")

# Load a specific task by config name
task_1_1 = load_dataset("datht/vlegal", "task_1_1", split="test")
task_4_2 = load_dataset("datht/vlegal", "task_4_2", split="test")
```
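Once a split is loaded, each example exposes the card's common fields (`instruction`, `question`, `answers`, `ground_truth`, `_task`). A minimal per-example scoring loop might look like the sketch below; `my_model` is a hypothetical placeholder for your own inference call, not part of this dataset:

```python
def my_model(prompt: str) -> str:
    # Stand-in for a real model call; always answers "A" here.
    return "A"

def evaluate_mc(examples) -> float:
    """Exact-match accuracy over a list of MC examples with the
    instruction/question/ground_truth fields described in this card."""
    correct = 0
    for ex in examples:
        prompt = f"{ex['instruction']}\n\n{ex['question']}"
        pred = my_model(prompt)
        correct += int(pred.strip() == ex["ground_truth"].strip())
    return correct / len(examples)
```

In practice you would replace `my_model` with your model's generate call and iterate over `load_dataset("datht/vlegal", "task_1_1", split="test")`.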

Tasks (22 total, 10,467 samples)

Category 1: Recognition & Recall (3,520 samples)

| Task | Name | Samples | Type |
|------|------|---------|------|
| 1.1 | Legal Entity Recognition | 748 | MC |
| 1.2 | Legal Topic Classification | 683 | MC |
| 1.3 | Legal Concept Recall | 300 | MC |
| 1.4 | Article Recall | 968 | MC |
| 1.5 | Legal Schema Recall | 821 | MC |

Category 2: Understanding & Structuring (2,837 samples)

| Task | Name | Samples | Type |
|------|------|---------|------|
| 2.1 | Relation Extraction | 253 | MC |
| 2.2 | Legal Element Recognition | 300 | MC |
| 2.3 | Legal Graph Structuring | 326 | MC |
| 2.4 | Judgement Verification | 599 | MC |
| 2.5 | User Intent Understanding | 1,359 | MC |

Category 3: Reasoning & Inference (2,017 samples)

| Task | Name | Samples | Type |
|------|------|---------|------|
| 3.1 | Article/Clause Prediction | 600 | MC |
| 3.2 | Legal Court Decision Prediction | 600 | MC |
| 3.3 | Multi-hop Graph Reasoning | 292 | MC |
| 3.4 | Conflict & Consistency Detection | 166 | MC |
| 3.5 | Penalty/Remedy Estimation | 359 | MC |

Category 4: Interpretation & Generation (1,194 samples)

| Task | Name | Samples | Type |
|------|------|---------|------|
| 4.1 | Legal Document Summarization | 396 | Gen |
| 4.2 | Judicial Reasoning Generation | 300 | Gen |
| 4.3 | Legal Opinion Generation | 498 | Gen |

Category 5: Ethics, Fairness & Bias (899 samples)

| Task | Name | Samples | Type |
|------|------|---------|------|
| 5.1 | Bias Detection | 249 | MC |
| 5.2 | Privacy & Data Protection | 217 | MC |
| 5.3 | Ethical Consistency Assessment | 199 | MC |
| 5.4 | Unfair Contract Detection | 234 | MC |

Evaluation Metrics

  • Multiple Choice (MC): Accuracy
  • Generation (Gen): ROUGE-L, BERTScore
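As a rough sketch (not the official evaluation script), the two metric families could be computed like this. The LCS-based ROUGE-L F1 below is a minimal stand-in for a proper library such as `rouge-score`, and tokenization is plain whitespace splitting, which is an assumption for Vietnamese text:

```python
def accuracy(preds, golds):
    """Exact-match accuracy for the multiple-choice (MC) tasks."""
    assert len(preds) == len(golds)
    return sum(p == g for p, g in zip(preds, golds)) / len(golds)

def rouge_l_f1(pred: str, ref: str) -> float:
    """ROUGE-L F1: longest common subsequence over whitespace tokens."""
    p, r = pred.split(), ref.split()
    # LCS length by dynamic programming
    dp = [[0] * (len(r) + 1) for _ in range(len(p) + 1)]
    for i, pt in enumerate(p):
        for j, rt in enumerate(r):
            dp[i + 1][j + 1] = dp[i][j] + 1 if pt == rt else max(dp[i][j + 1], dp[i + 1][j])
    lcs = dp[len(p)][len(r)]
    if lcs == 0:
        return 0.0
    prec, rec = lcs / len(p), lcs / len(r)
    return 2 * prec * rec / (prec + rec)
```

BERTScore additionally needs a pretrained encoder (the `bert-score` package), so it is omitted from this sketch.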

Source

CMC-OPENAI/VLegal-Bench (arXiv:2512.14554)
