# NCA Pre-Pretraining Data
Pre-tokenized Neural Cellular Automata (NCA) trajectory data for transformer pre-pretraining experiments. Based on Lee et al. 2026, "Training Language Models via Neural Cellular Automata".
## Files

| File | Method | Tokens | Size | Description |
|---|---|---|---|---|
| `nca_data_rule_diverse.bin` | Rule-diverse (ours) | 100M | 191 MB | 1 simulation per rule, ~49,653 unique rules |
| `nca_data_paper.bin` | Paper baseline (Lee et al.) | 100M | 191 MB | 100 simulations per rule, ~504 rules |
Both files are flat binary arrays of uint16 token IDs, ready to be memory-mapped or loaded directly.
## How the data is generated

### NCA rules
Each NCA rule is a small convolutional network (614 parameters) that defines the update dynamics on a 12×12 grid with 10 cell states. Rules are created deterministically from integer seeds.
### Complexity filtering
Not all rules produce interesting dynamics — many converge to fixed points or simple periodic patterns. We filter rules using gzip compression ratio on the tokenized trajectory: only rules with gzip complexity in [0.5, 1.0] are kept (~16% pass rate). This selects for spatiotemporally complex dynamics.
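The filter can be sketched as follows. This is an illustrative implementation, not the generation script itself; in particular, the exact byte encoding fed to gzip is an assumption (here, the raw little-endian bytes of the uint16 token array):

```python
import gzip
import numpy as np

def gzip_complexity(tokens: np.ndarray) -> float:
    """Compression ratio of a tokenized trajectory: compressed bytes / raw bytes."""
    raw = tokens.astype(np.uint16).tobytes()
    return len(gzip.compress(raw)) / len(raw)

def passes_filter(tokens: np.ndarray, lo: float = 0.5, hi: float = 1.0) -> bool:
    """Keep only rules whose trajectories land in the [0.5, 1.0] complexity band."""
    c = gzip_complexity(tokens)
    return lo <= c <= hi

# A fixed-point trajectory compresses extremely well -> low ratio -> rejected
flat = np.zeros(2014, dtype=np.uint16)
print(passes_filter(flat))  # False

# A near-random trajectory is nearly incompressible -> high ratio
rng = np.random.default_rng(0)
noisy = rng.integers(0, 10000, size=2014).astype(np.uint16)
print(passes_filter(noisy))
```

Trajectories that converge to a fixed point or a short cycle are highly repetitive, so gzip compresses them far below the 0.5 floor and they are discarded.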
### Tokenization

Each grid state is tokenized using 2×2 patches with base-10 positional encoding:
- Patch token = `a + 10*b + 100*c + 1000*d` for cells `[a, b, c, d]` → tokens 0–9999
- Grid delimiter tokens: 10000 (start) and 10001 (end)
- Total vocabulary: 10,002 tokens
Each trajectory has 53 recorded grid states (after 10 warmup steps), giving 53 × 38 = 2,014 tokens per trajectory.
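A minimal sketch of this tokenization for a single grid. The row-major traversal over 2×2 blocks and the `[a, b, c, d]` cell layout within each patch are assumptions for illustration, since the card only specifies the encoding formula:

```python
import numpy as np

START_TK, END_TK = 10000, 10001

def tokenize_grid(grid: np.ndarray) -> list:
    """Tokenize one 12x12 grid of cell states (0-9) into 38 tokens:
    a start delimiter, 36 patch tokens, and an end delimiter."""
    assert grid.shape == (12, 12)
    tokens = [START_TK]
    for i in range(0, 12, 2):          # row-major over 2x2 blocks (assumed order)
        for j in range(0, 12, 2):
            a, b = grid[i, j], grid[i, j + 1]            # assumed cell layout
            c, d = grid[i + 1, j], grid[i + 1, j + 1]
            tokens.append(int(a + 10 * b + 100 * c + 1000 * d))
    tokens.append(END_TK)
    return tokens

grid = np.zeros((12, 12), dtype=np.int64)
grid[0, 0], grid[0, 1], grid[1, 0], grid[1, 1] = 1, 2, 3, 4
toks = tokenize_grid(grid)
print(len(toks))  # 38
print(toks[1])    # patch [1, 2, 3, 4] -> 1 + 20 + 300 + 4000 = 4321
```

Each 12×12 grid yields 36 patch tokens plus the two delimiters, hence 38 tokens per state and 53 × 38 = 2,014 per trajectory.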
### Two generation methods
**Rule-diverse** (`nca_data_rule_diverse.bin`): Our contribution. One simulation per rule. Each seed determines both the rule weights and the initial grid state. Diversity comes from scanning many different rules (~49,653 unique rules for 100M tokens). The hypothesis is that exposing the model to a wider variety of dynamical systems teaches more general sequential structure.

**Paper baseline** (`nca_data_paper.bin`): Reproduction of the method from Lee et al. 2026. Multiple simulations per rule — 504 rules are pre-filtered, then each gets 100 simulations with different random initial grid states. Diversity comes from varying initial conditions within fewer, pre-approved rules. The original paper uses 16,000 rules × 500 sims; we use fewer rules/sims to match the same 100M token budget.
## Parameters (matching the paper's actual training script)
| Parameter | Value |
|---|---|
| Grid size | 12×12 |
| Cell states | 10 |
| Temperature | 1e-4 |
| Identity bias | 0.0 |
| Warmup steps | 10 |
| Patch size | 2×2 |
| Gzip threshold | [0.5, 1.0] |
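To make the temperature and identity-bias parameters concrete, here is a hedged sketch of how a stochastic NCA update step might use them. It assumes the rule network emits per-cell logits of shape `(10, 12, 12)`; the function and shapes are hypothetical and the real rule architecture is not reproduced here:

```python
import numpy as np

def nca_step(logits, state, temperature=1e-4, identity_bias=0.0, rng=None):
    """One stochastic update on a (12, 12) grid with 10 cell states.
    `logits` is a hypothetical (10, 12, 12) per-cell output of the rule
    network; the actual network used to generate the data is not shown."""
    if rng is None:
        rng = np.random.default_rng()
    biased = logits.copy()
    rows, cols = np.indices(state.shape)
    biased[state, rows, cols] += identity_bias   # bias toward keeping current state
    z = biased / temperature                     # 1e-4 -> near-argmax sampling
    z -= z.max(axis=0, keepdims=True)            # numerically stable softmax
    probs = np.exp(z) / np.exp(z).sum(axis=0, keepdims=True)
    nxt = np.empty_like(state)
    for i in range(12):                          # sample each cell independently
        for j in range(12):
            nxt[i, j] = rng.choice(10, p=probs[:, i, j])
    return nxt

rng = np.random.default_rng(0)
logits = rng.normal(size=(10, 12, 12))
state = rng.integers(0, 10, size=(12, 12))
print(nca_step(logits, state, rng=rng).shape)  # (12, 12)
```

With temperature 1e-4 the softmax is sharply peaked, so updates are effectively deterministic argmax transitions; identity bias 0.0 means the current state gets no special preference.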
## Usage

```python
import numpy as np

# Load the data (either file works; both are flat uint16 token streams)
data = np.fromfile("nca_data_rule_diverse.bin", dtype=np.uint16)
print(f"Tokens: {len(data):,}")  # 100,000,000

# Delimiter tokens are the last two in the vocabulary
START_TK = 10000
END_TK = 10001
```
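Since every trajectory is a fixed 2,014 tokens, the flat stream can also be viewed as a 2-D array of trajectories. The snippet below demonstrates this on a small stand-in file (the real `.bin` files are handled the same way; `np.memmap` avoids loading the full 191 MB up front). Whether trajectories are concatenated back-to-back with no padding is an assumption consistent with the flat-array format described above:

```python
import numpy as np

TRAJ_LEN = 53 * 38  # 2,014 tokens per trajectory

# Stand-in file for illustration: 3 full trajectories plus a ragged tail
demo = np.arange(3 * TRAJ_LEN + 100, dtype=np.uint16)
demo.tofile("nca_demo.bin")

# Memory-map instead of np.fromfile to avoid reading everything into RAM
data = np.memmap("nca_demo.bin", dtype=np.uint16, mode="r")

# View the flat stream as whole trajectories, dropping any truncated tail
n_traj = len(data) // TRAJ_LEN
trajectories = data[: n_traj * TRAJ_LEN].reshape(n_traj, TRAJ_LEN)
print(trajectories.shape)  # (3, 2014)
```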
## Loss masking
Grid delimiter tokens (10000, 10001) should have their loss masked during training (target = -100). The model learns to predict only patch tokens, not structural markers.
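A minimal sketch of this masking for next-token prediction, assuming PyTorch-style cross-entropy with `ignore_index=-100` (the helper name is hypothetical):

```python
import numpy as np

IGNORE_INDEX = -100  # PyTorch CrossEntropyLoss default ignore_index
START_TK, END_TK = 10000, 10001

def make_targets(tokens: np.ndarray) -> np.ndarray:
    """Next-token targets with delimiter positions masked out of the loss."""
    targets = tokens[1:].astype(np.int64)            # shift left by one
    mask = (targets == START_TK) | (targets == END_TK)
    targets[mask] = IGNORE_INDEX                     # no loss on structural markers
    return targets

toks = np.array([10000, 5, 123, 9999, 10001], dtype=np.uint16)
print(make_targets(toks).tolist())  # [5, 123, 9999, -100]
```

The inputs still contain the delimiters, so the model sees grid boundaries as context; it simply is never penalized for (or trained on) predicting them.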
## Reproducibility
Data is fully deterministic given the generation scripts and PyTorch version. Generated with PyTorch on macOS (Apple Silicon).
| File | MD5 |
|---|---|
| `nca_data_rule_diverse.bin` | `c857b8397249cd0a5304685fca835553` |
| `nca_data_paper.bin` | `e8b8ad97afb830f41569813af15476ed` |
## Citation
If you use this data, please cite the original NCA pre-pretraining paper:
```bibtex
@article{lee2026nca,
  title={Training Language Models via Neural Cellular Automata},
  author={Lee, Daniel Hyun and others},
  journal={arXiv preprint arXiv:2603.10055},
  year={2026}
}
```