PredANNpp-NMEDT-SongID-EncoderOnly-Entropy-ctx16-pt10000-ft3500-seed42

Model description

This repository contains a PredANN++ encoder-only PyTorch Lightning checkpoint for NMED-T Song ID classification from 3-second EEG segments.

  • Repository: Shogo-Noguchi/PredANNpp-NMEDT-SongID-EncoderOnly-Entropy-ctx16-pt10000-ft3500-seed42
  • Checkpoint file: predannpp_nmedt_songid_encoderonly_entropy_ctx16_pt10000_ft3500_seed42.ckpt
  • Stage: pretraining + finetuning
  • Target / teacher representation: MusicGen Entropy, 16 s context
  • Architecture: Encoder-only Song ID classifier
  • Pretraining: 10,000 epochs with 50% teacher-token masking
  • Finetuning: 3,500 epochs for Song ID classification
  • Random seed: 42
  • SHA256: d46c9a163bb6c98f8f6669915a4e964cc2afcff901d2237781f097b5183566f2

This checkpoint is a task-specific Song ID classifier. It directly maps a 3-second EEG segment to 10-class Song ID logits.

Capabilities

  • Input: 128-channel EEG, 125 Hz, 3-second segments.
  • Output: 10-class Song ID logits.
  • Intended module type: encoder-only finetuning module.

For masked teacher-token prediction, use a PredANNpp-Pretrain-* checkpoint instead.
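The checkpoint's input/output contract can be summarized with a minimal stand-in sketch. The stub below is not the PredANN++ encoder (the real model definition lives in the PredANN++ GitHub repository); it only mirrors the shapes stated above: a 3-second, 128-channel EEG segment at 125 Hz (128 × 375 samples) in, 10 Song ID logits out. The weight matrix and function name are illustrative placeholders.

```python
import numpy as np

# Illustrative stub only: the real encoder is defined in the PredANN++ repo.
# It mirrors the checkpoint's I/O contract: a 3-second, 128-channel EEG
# segment at 125 Hz (128 x 375 samples) in, 10 Song ID logits out.
N_CHANNELS, SFREQ, SEGMENT_SEC, N_SONGS = 128, 125, 3, 10
N_SAMPLES = SFREQ * SEGMENT_SEC  # 375 samples per segment

rng = np.random.default_rng(42)
W = rng.standard_normal((N_CHANNELS * N_SAMPLES, N_SONGS)) * 0.01  # stand-in weights

def classify_segment(eeg: np.ndarray) -> np.ndarray:
    """Map one (128, 375) EEG segment to 10 Song ID logits (stand-in)."""
    assert eeg.shape == (N_CHANNELS, N_SAMPLES), eeg.shape
    return eeg.reshape(-1) @ W

segment = rng.standard_normal((N_CHANNELS, N_SAMPLES))
logits = classify_segment(segment)
print(logits.shape)  # (10,)
```

With the real checkpoint, the same shapes apply; only the mapping from segment to logits is replaced by the trained encoder.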

Training data

  • Dataset: NMED-T (Naturalistic Music EEG Dataset – Tempo), 10 songs, 20 subjects, trial=1.
  • Teacher / target source: MusicGen Entropy token sequences with 16-second context.

Training procedure

  1. Multitask pretraining with masked teacher-token prediction and auxiliary Song ID learning.
  2. Encoder-only finetuning for NMED-T Song ID classification.
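Step 1's masking objective can be sketched under one assumption: "50% masking" means that half of the teacher-token positions in each sequence are hidden and must be predicted from the EEG encoder's output. The sequence length, token vocabulary, and mask id below are illustrative dummies, not the actual MusicGen Entropy tokenization.

```python
import numpy as np

# Sketch of masked teacher-token prediction, assuming "50% masking" hides
# half of the token positions per sequence. All values are dummies.
rng = np.random.default_rng(42)
seq_len = 16                                   # tokens per segment (illustrative)
tokens = rng.integers(0, 1024, size=seq_len)   # dummy teacher token ids

mask = np.zeros(seq_len, dtype=bool)
mask[rng.choice(seq_len, size=seq_len // 2, replace=False)] = True  # 50% masked

MASK_ID = -1
masked_tokens = np.where(mask, MASK_ID, tokens)  # what the model sees
targets = tokens[mask]                           # what the model must predict

print(mask.sum(), len(targets))  # 8 8
```

In the actual pretraining, the prediction loss over `targets` is combined with the auxiliary Song ID objective before the encoder-only finetuning stage.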

Intended use

  • Reproducing PredANN++ Song ID classification experiments.
  • Evaluating EEG-based music identification from 3-second EEG segments.
  • Comparing acoustic and expectation-related teacher representations.

Out-of-scope use

  • Medical diagnosis, clinical decision making, or biometric identification.
  • Commercial use without checking the PredANN++ code license, NMED-T terms, and upstream model/feature licenses.
  • Masked-token pretraining evaluation; this repository contains a finetuned encoder-only classifier.

License and upstream dependencies

This checkpoint was trained on MusicGen / AudioCraft-derived features and is released under CC-BY-NC-4.0 for compatibility with NMED-T-derived training artifacts and upstream feature licenses. Check the PredANN++ code license, NMED-T terms, and upstream model/feature licenses before any commercial use.

Reproducibility notes

  • metadata.json stores standardized release metadata.
  • SHA256SUMS stores the checkpoint checksum.
  • Use the PredANN++ GitHub repository for model definitions and evaluation scripts.

Citation

If you use this checkpoint, please cite:

@misc{noguchi2026expectationacousticneuralnetwork,
  title={Expectation and Acoustic Neural Network Representations Enhance Music Identification from Brain Activity},
  author={Shogo Noguchi and Taketo Akama and Tai Nakamura and Shun Minamikawa and Natalia Polouliakh},
  year={2026},
  eprint={2603.03190},
  archivePrefix={arXiv},
  primaryClass={cs.AI},
  url={https://arxiv.org/abs/2603.03190}
}