Legal Hallucinations Subset
Dataset Description
This is a curated subset of the reglab/legal_hallucinations dataset, containing up to 1000 randomly sampled rows for each of 6 specific legal reasoning tasks (5444 rows total).
The original dataset was created for the paper: Dahl et al., "Large Legal Fictions: Profiling Legal Hallucinations in Large Language Models," Journal of Legal Analysis (2024, forthcoming). Preprint: arXiv:2401.01301
Dataset Details
Dataset Summary
This subset focuses on 6 specific legal reasoning tasks, with up to 1000 examples per task (5444 rows total). Each task is provided as a separate dataset split for easy access and evaluation.
Supported Tasks and Usage
The dataset contains the following splits (one per task):
- `affirm_reverse`: Determining whether a court affirmed or reversed a lower court's decision
- `citation_retrieval`: Retrieving correct legal citations
- `cited_precedent`: Identifying cited legal precedents
- `court_id`: Identifying the court that decided a case
- `majority_author`: Identifying the author of a majority opinion
- `year_overruled`: Identifying when a case was overruled
Dataset Structure
Each split contains the following columns:
- `task` (string): The name of the task
- `query` (string): The exact query/question submitted
- `example_correct_answer` (string): An example of a correct answer to the query
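As a quick illustration, the snippet below inspects the schema and a single record of one split. It is a minimal sketch that assumes the subset has been saved locally under the `legal_hallucinations_subset` path used in the Usage section below; adjust the path if your copy lives elsewhere.

```python
from datasets import load_from_disk

# Assumes a local copy saved as "legal_hallucinations_subset"
# (the same path used in the Usage section below).
dataset = load_from_disk("legal_hallucinations_subset")

split = dataset["court_id"]
print(split.features)       # the three string columns described above
print(split[0]["query"])    # the exact question submitted for this example
```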
Data Splits
| Split Name | Number of Examples |
|---|---|
| affirm_reverse | 1000 |
| citation_retrieval | 1000 |
| cited_precedent | 1000 |
| court_id | 1000 |
| majority_author | 1000 |
| year_overruled | 444 |
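To confirm the split sizes above against a local copy, a minimal check (again assuming the `legal_hallucinations_subset` path from the Usage section) looks like this:

```python
from datasets import load_from_disk

# Assumes a local copy saved as "legal_hallucinations_subset".
dataset = load_from_disk("legal_hallucinations_subset")

for name, split in dataset.items():
    print(f"{name}: {split.num_rows} rows")

# Expected: 1000 rows for five splits and 444 for year_overruled (5444 overall).
```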
Dataset Creation
Curation Process
- Source Data: Loaded from `original_dataset.csv` (a subset of reglab/legal_hallucinations)
- Column Selection: Kept only the `task`, `query`, and `example_correct_answer` columns
- Task Filtering: Filtered to only include the 6 specified tasks
- Quality Filtering: Removed rows with missing or empty `example_correct_answer` values
- Deduplication: Removed duplicate rows
- Sampling: Randomly sampled up to 1000 rows per task, using random seed 42 for reproducibility (see the sketch below). Note: `year_overruled` has only 444 examples because that is all that remained after filtering.
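The steps above can be reproduced roughly as follows. This is a minimal sketch, assuming the source file is available locally as `original_dataset.csv` with at least the three columns listed; it mirrors the described pipeline but is not the exact script used to build the subset.

```python
import pandas as pd

TASKS = [
    "affirm_reverse", "citation_retrieval", "cited_precedent",
    "court_id", "majority_author", "year_overruled",
]

# Sketch of the curation steps; assumes original_dataset.csv is available locally.
df = pd.read_csv("original_dataset.csv")

# Column selection and task filtering.
df = df[["task", "query", "example_correct_answer"]]
df = df[df["task"].isin(TASKS)]

# Quality filtering: drop missing or empty answers, then deduplicate.
df = df[df["example_correct_answer"].notna()]
df = df[df["example_correct_answer"].astype(str).str.strip() != ""]
df = df.drop_duplicates()

# Sampling: up to 1000 rows per task, seed 42 for reproducibility.
subset = (
    df.groupby("task", group_keys=False)
      .apply(lambda g: g.sample(n=min(len(g), 1000), random_state=42))
      .reset_index(drop=True)
)
```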
Filtering Criteria
- Columns: Only `task`, `query`, and `example_correct_answer` are included
- Tasks: Only the following 6 tasks are included: `affirm_reverse`, `citation_retrieval`, `cited_precedent`, `court_id`, `majority_author`, `year_overruled`
- Quality: All rows have non-empty `example_correct_answer` values
- Deduplication: Duplicate rows have been removed
- Sampling: Up to 1000 rows per task (or all available rows if fewer than 1000); `year_overruled` has 444 examples (see the checks below)
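These properties can be sanity-checked on a local copy with a few assertions. The snippet below is illustrative only and assumes the same `legal_hallucinations_subset` path used in the Usage section.

```python
from datasets import load_from_disk

# Assumes a local copy saved as "legal_hallucinations_subset".
dataset = load_from_disk("legal_hallucinations_subset")

for name, split in dataset.items():
    df = split.to_pandas()
    assert set(df.columns) == {"task", "query", "example_correct_answer"}
    assert df["example_correct_answer"].astype(str).str.strip().ne("").all()
    assert not df.duplicated().any()
    assert len(df) <= 1000
    print(f"{name}: OK ({len(df)} rows)")
```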
Usage
Loading the Dataset
```python
from datasets import load_from_disk

# Load the entire dataset
dataset = load_from_disk("legal_hallucinations_subset")

# Access a specific task split
affirm_reverse_data = dataset["affirm_reverse"]
citation_retrieval_data = dataset["citation_retrieval"]

# Iterate over examples in a split
for example in affirm_reverse_data:
    print(f"Query: {example['query']}")
    print(f"Correct Answer: {example['example_correct_answer']}")
```
Example
```python
from datasets import load_from_disk

dataset = load_from_disk("legal_hallucinations_subset")

# Get an example from the affirm_reverse split
example = dataset["affirm_reverse"][0]
print(example)
# {
#     'task': 'affirm_reverse',
#     'query': 'Did the court in ... affirm or reverse...?',
#     'example_correct_answer': 'affirm'
# }
```
Dataset Statistics
- Total Rows: 5444 (1000 per task for 5 tasks, 444 for `year_overruled`)
- Columns: 3 (`task`, `query`, `example_correct_answer`)
- Splits: 6 (one per task)
- Random Seed: 42 (for reproducibility)
Source and Citation
Source Dataset
This dataset is a subset of:
- Dataset: reglab/legal_hallucinations
- Maintainer: Stanford Regulation, Evaluation, and Governance Lab (RegLab)
Citation
If you use this dataset, please cite the original paper:
```bibtex
@article{dahl2024large,
  title={Large Legal Fictions: Profiling Legal Hallucinations in Large Language Models},
  author={Dahl, Matthew and Magesh, Varun and Suzgun, Mirac and Ho, Daniel E.},
  journal={Journal of Legal Analysis},
  year={2024},
  note={Forthcoming},
  eprint={2401.01301},
  archivePrefix={arXiv}
}
```
License
[More Information Needed] - Please refer to the original dataset license.
Dataset Card Contact
For questions or issues related to this subset, please refer to the original dataset repository or open an issue.