# EU AI Act Compliance Benchmark
55 Python AI agent files with ground-truth compliance labels across 6 EU AI Act articles.
The first public benchmark dataset for evaluating EU AI Act compliance scanners. Each sample is a realistic Python AI agent that either passes or fails specific articles — verified against the AIR Blackbox scanner with 100% label accuracy.
## Why This Exists
The EU AI Act deadline is August 2, 2026. Fines reach €35M or 7% of global annual turnover. Multiple tools claim to scan for compliance, but there's no standard way to measure their accuracy. This dataset fixes that.
## What's Inside
```
samples/            # 55 Python AI agent files (.py)
metadata/
  labels.jsonl      # Ground-truth labels (one JSON object per line)
```
Each sample is a realistic 20-80 line Python AI agent built with one of five frameworks: OpenAI, LangChain, CrewAI, AutoGen, or a RAG pipeline.
Each label contains:
```json
{
  "id": "bare_openai_agent",
  "filename": "bare_openai_agent.py",
  "description": "Bare OpenAI agent with zero compliance patterns",
  "framework": "openai",
  "difficulty": "easy",
  "labels": {
    "art9": "FAIL",
    "art10": "FAIL",
    "art11": "PASS",
    "art12": "FAIL",
    "art14": "FAIL",
    "art15": "FAIL"
  },
  "score": "1/6",
  "scanner_checks": { ... }
}
```
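The `score` field is derivable from the per-article labels: it is the number of `PASS` verdicts out of 6. A minimal consistency check, using the example entry above (the elided `scanner_checks` field is omitted here):

```python
import json

# The example label entry from above, minus the elided scanner_checks field
entry = json.loads("""{
  "id": "bare_openai_agent",
  "filename": "bare_openai_agent.py",
  "framework": "openai",
  "difficulty": "easy",
  "labels": {
    "art9": "FAIL", "art10": "FAIL", "art11": "PASS",
    "art12": "FAIL", "art14": "FAIL", "art15": "FAIL"
  },
  "score": "1/6"
}""")

# The score should equal the count of PASS labels out of 6
passes = sum(1 for v in entry["labels"].values() if v == "PASS")
derived = f"{passes}/6"
print(derived)  # 1/6
assert derived == entry["score"]
```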
## Articles Checked
| Article | Requirement | What the Scanner Looks For |
|---|---|---|
| 9 | Risk Management | Risk classification, access control, RBAC |
| 10 | Data Governance | Input validation (Pydantic), PII handling, data schemas |
| 11 | Technical Documentation | Logging imports, docstrings, type hints |
| 12 | Record-Keeping | Structured logging (structlog), audit trails, timestamps, HMAC |
| 14 | Human Oversight | Human-in-the-loop, kill switch, notifications |
| 15 | Robustness & Security | Injection detection, error handling, testing, rate limiting |
## Distribution
By compliance score:
| Score | Count | Description |
|---|---|---|
| 1/6 | 10 | Minimal compliance (usually just docstrings) |
| 2/6 | 9 | Two articles passing |
| 3/6 | 13 | Half compliant |
| 4/6 | 10 | Most articles passing |
| 5/6 | 4 | Nearly complete |
| 6/6 | 9 | Full compliance |
By framework:
| Framework | Count |
|---|---|
| OpenAI | 26 |
| LangChain | 10 |
| CrewAI | 7 |
| AutoGen | 2 |
| RAG | 6 |
| Other | 4 |
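Both distribution tables can be reproduced by aggregating `metadata/labels.jsonl`. A sketch of that aggregation, shown here on a small hypothetical subset of entries rather than the real file:

```python
from collections import Counter

# Hypothetical entries standing in for rows of metadata/labels.jsonl
entries = [
    {"filename": "a.py", "framework": "openai", "score": "1/6"},
    {"filename": "b.py", "framework": "openai", "score": "3/6"},
    {"filename": "c.py", "framework": "langchain", "score": "3/6"},
]

# Tally samples per compliance score and per framework
by_score = Counter(e["score"] for e in entries)
by_framework = Counter(e["framework"] for e in entries)

print(by_score["3/6"])         # 2
print(by_framework["openai"])  # 2
```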
## Usage

### Evaluate a compliance scanner

```python
import json

# Load ground truth
labels = {}
with open("metadata/labels.jsonl") as f:
    for line in f:
        entry = json.loads(line)
        labels[entry["filename"]] = entry

# Run your scanner on each sample
for filename, truth in labels.items():
    with open(f"samples/{filename}") as f:
        code = f.read()
    # your_scanner_result = your_scanner(code)
    # Compare your_scanner_result to truth["labels"]
```
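Once you have per-file predictions, you can score them against the ground truth. One way is per-article accuracy; the `per_article_accuracy` helper and the two-file example below are illustrative, not part of the dataset:

```python
from collections import defaultdict

ARTICLES = ["art9", "art10", "art11", "art12", "art14", "art15"]

def per_article_accuracy(predictions, truths):
    """predictions and truths map filename -> {article: "PASS" | "FAIL"}."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for filename, truth in truths.items():
        pred = predictions[filename]
        for art in ARTICLES:
            total[art] += 1
            if pred[art] == truth[art]:
                correct[art] += 1
    return {art: correct[art] / total[art] for art in ARTICLES}

# Two-file toy example: the hypothetical scanner gets b.py's art9 verdict wrong
truths = {
    "a.py": {a: "FAIL" for a in ARTICLES},
    "b.py": {a: "PASS" for a in ARTICLES},
}
predictions = {
    "a.py": {a: "FAIL" for a in ARTICLES},
    "b.py": dict({a: "PASS" for a in ARTICLES}, art9="FAIL"),
}

scores = per_article_accuracy(predictions, truths)
print(scores["art9"])   # 0.5
print(scores["art10"])  # 1.0
```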
### Evaluate the AIR Blackbox scanner

```shell
pip install air-blackbox-mcp
```

```python
from air_blackbox_mcp.scanner import scan_code

with open("samples/bare_openai_agent.py") as f:
    result = scan_code(f.read())

print(result["compliance_score"])  # "1/6"
```
## Sample Categories
The dataset includes these categories of samples:
- **Zero/minimal compliance** — Bare agents from tutorials with no governance patterns. Common in production despite being high-risk.
- **Single-article compliance** — Agents that pass exactly one article. Tests scanner precision.
- **Partial compliance (2-4 articles)** — The most realistic category. Most production agents have some compliance but significant gaps.
- **Near-full compliance (5/6)** — Agents missing exactly one article. Tests whether scanners can identify the specific gap.
- **Full compliance (6/6)** — Reference implementations across multiple frameworks showing what "passing" looks like.
- **Edge cases** — Dead code with compliance patterns, imported-but-unused libraries, compliance terms in comments/strings only, multi-framework files.
- **High-risk domains** — Medical diagnosis, hiring/screening, autonomous trading, code execution, email sending. These domains face extra scrutiny under the EU AI Act.
## How Labels Were Generated
Labels are generated by running the AIR Blackbox scanner (regex-based, deterministic) on each sample file. The scanner's output IS the ground truth. This means:
- Labels are 100% reproducible — run the scanner yourself and get identical results
- Labels reflect regex-based detection (not semantic understanding)
- Some edge cases exist: comments mentioning compliance terms may trigger detection
This is intentional. The benchmark tests what scanners actually detect, including their false positives and limitations.
## Limitations
This benchmark tests pattern-based compliance detection, not semantic compliance. A file with `risk_classification` in a comment will pass Article 9 even though no risk management is actually implemented. This reflects the current state of compliance scanning tools, which are predominantly regex-based.
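To see why a comment can flip a verdict, consider an illustrative regex check (this pattern is made up for the example; it is not the scanner's actual Article 9 rule):

```python
import re

# Illustrative pattern only -- not the scanner's actual Article 9 regex
ART9_PATTERN = re.compile(r"risk_classification|risk_management", re.IGNORECASE)

implemented = "risk_classification = assess(task)"
comment_only = "# TODO: add risk_classification someday"

# A pattern-based detector cannot tell these two apart
print(bool(ART9_PATTERN.search(implemented)))   # True
print(bool(ART9_PATTERN.search(comment_only)))  # True -- the comment alone triggers it
```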
Future versions may include:
- Semantic labels (does the code actually implement the compliance measure?)
- More samples (targeting 200+)
- Additional frameworks (Anthropic Agent SDK, Pydantic AI, DSPy)
- Multilingual samples
## Citation

```bibtex
@dataset{air_blackbox_eu_ai_act_benchmark_2026,
  title={EU AI Act Compliance Benchmark},
  author={AIR Blackbox},
  year={2026},
  url={https://huggingface.co/datasets/air-blackbox/eu-ai-act-compliance-benchmark},
  note={55 Python AI agent files with ground-truth compliance labels across 6 EU AI Act articles}
}
```
## License
Apache-2.0
## Links
- Scanner: air-blackbox-mcp
- Website: airblackbox.ai
- GitHub: github.com/airblackbox