# Deep-Signal Weight Fingerprints

## Dataset Summary

This repository distributes pre-computed deep-signal weight fingerprints for use with the Model Provenance Kit (ProvenanceKit). The primary deliverable is `deep-signals.zip`: a compressed archive of Apache Parquet files that store dense numerical features extracted from publicly released transformer (and related) model weights on the Hugging Face Hub or equivalent sources.
Each file encodes multi-signal weight-level evidence (embedding anchors, norm-layer vectors, layer energy profiles, embedding norm histograms, and per-layer correlation-style summaries) in a long-form tabular schema. These fingerprints enable fast similarity search and provenance-style matching against a reference catalog when paired with the ProvenanceKit CLI or library. They are not natural-language text and are not intended for language-model pretraining in the usual sense.
## Languages
Not applicable. This dataset contains numeric tensors serialized as Parquet tables. Metadata may reference English-only model card text from upstream models, but the data itself is language-agnostic.
## Supported Tasks and Leaderboards

`other` (model provenance / family identification): These artifacts support comparing an unknown model's weight-derived signals to reference fingerprints organized by model family, as implemented in Model ProvenanceKit (`provenancekit scan`, `provenancekit compare`). Success is measured by retrieval quality and calibration in that tooling's benchmarks (see upstream documentation), not by a single Hub leaderboard metric.
## Dataset Structure

### Data Instances

The archive expands to a directory tree of the form `features/deep-signals/by-family/<family_id>/<asset_id>_deep-signals.parquet`, where:

- `family_id`: Stable identifier for a model family in the Model ProvenanceKit seed catalog.
- `asset_id`: Identifier for a specific base model asset (e.g., a particular checkpoint) within that family.
- One Parquet file per asset, containing all deep-signal tensors for that asset in merged form.
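Given that layout, a consumer can enumerate assets and recover the family and asset identifiers directly from file paths. A minimal stdlib-only sketch (the helper names and parsing approach are illustrative, not part of ProvenanceKit):

```python
from pathlib import Path

def parse_fingerprint_path(path: str) -> tuple[str, str]:
    """Recover (family_id, asset_id) from a path of the form
    features/deep-signals/by-family/<family_id>/<asset_id>_deep-signals.parquet.
    Hypothetical helper; ProvenanceKit's own loader may differ."""
    p = Path(path)
    family_id = p.parent.name
    asset_id = p.name.removesuffix("_deep-signals.parquet")
    return family_id, asset_id

def iter_fingerprints(root: str):
    """Yield (family_id, asset_id, path) for every asset under an extracted tree."""
    base = Path(root) / "features" / "deep-signals" / "by-family"
    for parquet in sorted(base.glob("*/*_deep-signals.parquet")):
        family_id, asset_id = parse_fingerprint_path(str(parquet))
        yield family_id, asset_id, parquet
```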
A typical row inside a Parquet file looks like:
```json
{
  "signal": "eas_self_sim",
  "layer": null,
  "row": 3,
  "col": 7,
  "value": 0.8723
}
```
Each file contains thousands of such rows covering all weight signals for a single model asset.
### Data Fields

Parquet column-oriented schema (long form; all signals share the same table):

| Column | Type | Description |
|---|---|---|
| `signal` | string | Name of the signal (e.g., `eas_self_sim`, `nlf_vector`, `lep_profile`, `end_histogram`, `wsp_signature`, `wvc_layer_sigs`). |
| `layer` | int32 (nullable) | Layer index for signals that vary per layer (e.g., WVC-style per-layer summaries); null when not layer-scoped. |
| `row` | int32 | Row index within the logical tensor for vector and matrix signals. |
| `col` | int32 (nullable) | Column index for 2D signals (e.g., embedding anchor self-similarity matrices); null when not applicable. |
| `value` | float32 | Numeric value at (signal, layer, row, col). |
**Reconstruction (consumer behavior):** Conforming loaders (see ProvenanceKit `DatabaseService.load_deep_signals`) group by `signal` and pivot into dense arrays. For example:

- `eas_self_sim` → 2D `float32` matrix (script-aware embedding anchor self-similarity).
- `wvc_layer_sigs` → mapping from layer index → 1D `float32` vector.
- Other signals → 1D `float32` vectors sorted by `row`.
Signal names correspond to Model ProvenanceKit weight signals (e.g., EAS, NLF, LEP, END, WVC) described in the upstream README.
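The grouping-and-pivot behavior above can be sketched in pure Python over long-form records. This is an illustrative reimplementation, not the actual `DatabaseService.load_deep_signals` code, which may use NumPy or Arrow internally:

```python
from collections import defaultdict

def load_deep_signals(rows):
    """Pivot long-form (signal, layer, row, col, value) records into dense
    structures: a 2D matrix when `col` is set, a {layer: vector} mapping when
    `layer` is set, otherwise a 1D vector sorted by `row`. Sketch only."""
    by_signal = defaultdict(list)
    for r in rows:
        by_signal[r["signal"]].append(r)

    out = {}
    for signal, recs in by_signal.items():
        if all(r["col"] is not None for r in recs):
            # 2D signal: build a row-major matrix from (row, col) indices.
            n_rows = max(r["row"] for r in recs) + 1
            n_cols = max(r["col"] for r in recs) + 1
            mat = [[0.0] * n_cols for _ in range(n_rows)]
            for r in recs:
                mat[r["row"]][r["col"]] = r["value"]
            out[signal] = mat
        elif all(r["layer"] is not None for r in recs):
            # Per-layer signal: layer index -> 1D vector sorted by row.
            layers = defaultdict(dict)
            for r in recs:
                layers[r["layer"]][r["row"]] = r["value"]
            out[signal] = {
                layer: [vals[i] for i in sorted(vals)]
                for layer, vals in layers.items()
            }
        else:
            # Plain 1D vector: order values by their row index.
            recs.sort(key=lambda r: r["row"])
            out[signal] = [r["value"] for r in recs]
    return out
```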
### Data Splits

Not applicable. This release is a reference fingerprint store, not a train/validation/test split. Updates may add families or assets over time; version the zip or use ProvenanceKit's `--update` workflow when refreshing.
## Dataset Creation

### Curation Rationale
These fingerprints exist to make large-scale weight comparison practical: they avoid re-scanning full weights for every query when a precomputed, family-organized store is available. They are curated to support security and governance workflows (e.g., detecting whether a model likely derives from a known base) as described by the Model Provenance Kit project.
### Source Data

#### Initial Data Collection and Normalization
- Sources: Public model repositories (primarily Hugging Face Hub model repos) and their published weight artifacts (e.g., Safetensors / PyTorch checkpoints), as referenced by the Model ProvenanceKit catalog.
- Processing: Features are extracted with the same pipeline as the open-source toolkit (metadata gate, tokenizer features where applicable, and deep weight signals). Arrays are normalized to the long-form Parquet schema above for storage and partial I/O.
#### Who are the source language producers?
N/A. Upstream model publishers (organizations and individuals) released the base weights; this dataset contains derived numerical summaries, not raw training text.
### Annotations

#### Annotation process
No human annotation is involved. All data is machine-generated by the Model ProvenanceKit feature-extraction pipeline, which reads model weight files and computes numerical signal vectors automatically.
#### Who are the annotators?
Not applicable. The extraction is fully automated; no human annotators participated.
### Personal and Sensitive Information

This dataset is intended to contain no personal data. Fingerprints are numerical summaries derived from public model weights. If any upstream artifact improperly contained sensitive data, it would not typically survive in these aggregates; nevertheless, do not treat similarity scores as legal proof of model lineage. See limitations below.
## Considerations for Using the Data

### Social Impact of Dataset
Positive: Can improve transparency around model reuse, compliance checks, and research on model relatedness.
Risks: Misuse for false claims ("this model is a copy") without independent verification, or over-reliance on scores in high-stakes decisions. Scores are statistical evidence, not ground truth.
### Discussion of Biases
Fingerprint distributions reflect which public models were included in the reference set and historical popularity of certain families on the Hub. Underrepresented families may have weaker or sparser coverage. No demographic bias in the human sense applies to numeric weight summaries; upstream model biases remain a property of each base model, not of this fingerprint store.
### Other Known Limitations
- Not cryptographic proof of provenance; results are evidence-weighted, as stated in Model ProvenanceKit documentation.
- Coverage gaps: New or rare architectures may not match any family until the catalog is updated.
- Versioning: The `deep-signals.zip` artifact should be versioned and integrity-checked (e.g., SHA-256) by consumers; Model ProvenanceKit optionally verifies a pinned hash on download.
- Large files: The zip may be multi-gigabyte; partial downloads and extraction require sufficient disk space and safe unzip practices.
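The versioning and safe-unzip recommendations can be combined in a small stdlib-only helper. This is a hypothetical sketch of consumer-side verification, not ProvenanceKit's own download code:

```python
import hashlib
import zipfile

def verify_and_extract(zip_path: str, expected_sha256: str, dest_dir: str) -> None:
    """Check the archive's SHA-256 against a pinned hash, then extract it,
    rejecting entries that would escape dest_dir (zip-slip guard)."""
    h = hashlib.sha256()
    with open(zip_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    digest = h.hexdigest()
    if digest != expected_sha256:
        raise ValueError(f"SHA-256 mismatch: got {digest}")

    with zipfile.ZipFile(zip_path) as zf:
        for name in zf.namelist():
            # Refuse absolute paths and parent-directory traversal.
            if name.startswith("/") or ".." in name.split("/"):
                raise ValueError(f"unsafe path in archive: {name}")
        zf.extractall(dest_dir)
```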
## Additional Information

### Dataset Curators

Cisco (Model Provenance Kit / Cisco AI Defense). Email: cisco-ai-oss@cisco.com
### Licensing Information
This dataset (the deep-signals.zip archive and its Parquet contents) is released under the Creative Commons Attribution 4.0 International (CC BY 4.0) license. You may share and adapt the data provided you give appropriate credit, provide a link to the license, and indicate if changes were made.
The Model Provenance Kit software is licensed separately (see that repository). Using this dataset does not change upstream model licenses: confirm that source model publishers' terms allow your use case when combining fingerprints with specific model names or redistributing derived works.
### Citation Information
If you use this dataset or the associated toolkit, please cite the Model Provenance Kit repository and version you used, and cite this Hugging Face dataset (with its DOI or URL once published) to satisfy CC BY 4.0 attribution. Example BibTeX for the toolkit (fill in version/commit; add a separate @dataset entry for your Hub repo when live):
```bibtex
@software{model_provenance_kit,
  author = {{Cisco Systems, Inc. and its affiliates}},
  title  = {Model Provenance Kit},
  url    = {https://github.com/cisco-ai-defense/model-provenance-kit},
  year   = {2026},
  note   = {Deep-signal fingerprints (CC BY 4.0) may be distributed separately on the Hugging Face Hub}
}
```
### Contributions
Thanks to the Hugging Face community for dataset cards and hosting infrastructure.
## File layout on the Hub (recommended)

| File | Description |
|---|---|
| `deep-signals.zip` | Archive containing `features/deep-signals/by-family/.../*.parquet` as above. |
| `README.md` | This dataset card (copy this file to the dataset repo root as `README.md`). |
After publishing, consumers can point Model ProvenanceKit at your dataset root or mirror the resolve URL pattern used by the toolkit's default settings (`.../resolve/main/deep-signals.zip`), adjusting `PROVENANCEKIT_HF_DEEP_SIGNALS_URL` and `PROVENANCEKIT_HF_DEEP_SIGNALS_SHA256` to match your release.
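For example, a mirror operator might export those two settings before invoking the toolkit. The environment variable names come from the paragraph above; the URL and hash values here are placeholders you would replace with your own release details:

```python
import os

# Point Model ProvenanceKit at a mirrored release. The org/repo path and
# the hash are placeholders, not real values.
os.environ["PROVENANCEKIT_HF_DEEP_SIGNALS_URL"] = (
    "https://huggingface.co/datasets/<org>/<repo>/resolve/main/deep-signals.zip"
)
os.environ["PROVENANCEKIT_HF_DEEP_SIGNALS_SHA256"] = "<sha256-of-your-release>"
```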