---
license: mit
task_categories:
- table-question-answering
- text-generation
- summarization
language:
- en
pretty_name: DA-Code
size_categories:
- 1B<n<10B
tags:
- code
configs:
- config_name: default
  data_files:
  - split: test
    path: test.csv
  sep: ','
---
# [EMNLP2024] DA-Code: Agent Data Science Code Generation Benchmark for Large Language Models
DA-Code is a comprehensive evaluation benchmark designed to assess the data analysis and code generation capabilities of LLMs on agent-based data science tasks. The paper and experiment reports are available on [arXiv](https://arxiv.org/abs/2410.07331).
## Dataset Overview
- 500 complex real-world data analysis tasks across Data Wrangling (DW), Machine Learning (ML), and Exploratory Data Analysis (EDA).
- Tasks cover the entire data analysis pipeline, from handling raw data to extracting insights with SQL and Python.
- Each example is meticulously designed to ensure high complexity and quality, with robust evaluation suites.
- An interactive sandbox environment allows LLMs/Agents to autonomously explore, reason, and complete tasks.
## Usage
This dataset can be used to:
- Evaluate LLMs’ data analysis and code generation capabilities
- Benchmark autonomous reasoning in real-world tasks
- Develop and test multi-step data analysis strategies
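
As a minimal sketch, the `test` split declared in the card's config can be loaded with the 🤗 `datasets` library. The Hub repository ID in the commented line is a placeholder: substitute this dataset's actual path, or point `data_files` at a local copy of `test.csv` as shown.

```python
from datasets import load_dataset

# Load the "test" split declared in the card's config; test.csv is a
# comma-separated file, matching the `sep: ','` setting above.
ds = load_dataset("csv", data_files={"test": "test.csv"}, sep=",")["test"]

print(ds)      # schema: column names and number of task records
print(ds[0])   # inspect the first task

# Loading directly from the Hub works the same way; the repo ID below
# is a placeholder, not the confirmed path of this dataset:
# ds = load_dataset("<org>/DA-Code", split="test")
```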
## Citation
If you use this dataset in your research, please cite our paper:
```bibtex
@misc{huang2024dacodeagentdatascience,
      title={DA-Code: Agent Data Science Code Generation Benchmark for Large Language Models},
      author={Yiming Huang and Jianwen Luo and Yan Yu and Yitong Zhang and Fangyu Lei and Yifan Wei and Shizhu He and Lifu Huang and Xiao Liu and Jun Zhao and Kang Liu},
      year={2024},
      eprint={2410.07331},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2410.07331},
}
```