| --- |
| language: |
| - en |
| license: apache-2.0 |
| tags: |
| - sentence-transformers |
| - sparse-encoder |
| - sparse |
| - splade |
| - generated_from_trainer |
| - dataset_size:5749 |
| - loss:SpladeLoss |
| - loss:SparseCosineSimilarityLoss |
| - loss:FlopsLoss |
| base_model: naver/splade-cocondenser-ensembledistil |
| widget: |
| - text: There is no 'still' that is not relative to some other object. |
| - text: A woman is adding oil on fishes. |
| - text: Minimum wage laws hurt the least skilled, least productive the most. |
| - text: Although I believe Searle is mistaken, I don't think you have found the problem. |
| - text: A man plays the guitar. |
| datasets: |
| - sentence-transformers/stsb |
| pipeline_tag: feature-extraction |
| library_name: sentence-transformers |
| metrics: |
| - pearson_cosine |
| - spearman_cosine |
| - active_dims |
| - sparsity_ratio |
| co2_eq_emissions: |
| emissions: 0.004571308812647019 |
| energy_consumed: 0.0019229652366223092 |
| source: codecarbon |
| training_type: fine-tuning |
| on_cloud: false |
| cpu_model: AMD Ryzen 9 6900HX with Radeon Graphics |
| ram_total_size: 30.6114501953125 |
| hours_used: 0.016 |
| hardware_used: 1 x NVIDIA GeForce RTX 3070 Ti Laptop GPU |
| model-index: |
| - name: 'splade-cocondenser-ensembledistil trained on STSb' |
| results: |
| - task: |
| type: semantic-similarity |
| name: Semantic Similarity |
| dataset: |
| name: sts dev |
| type: sts-dev |
| metrics: |
| - type: pearson_cosine |
| value: 0.8760417145994235 |
| name: Pearson Cosine |
| - type: spearman_cosine |
| value: 0.8704199278417449 |
| name: Spearman Cosine |
| - type: active_dims |
| value: 49.305667877197266 |
| name: Active Dims |
| - type: sparsity_ratio |
| value: 0.9983845859420353 |
| name: Sparsity Ratio |
| - task: |
| type: semantic-similarity |
| name: Semantic Similarity |
| dataset: |
| name: sts test |
| type: sts-test |
| metrics: |
| - type: pearson_cosine |
| value: 0.840843473698782 |
| name: Pearson Cosine |
| - type: spearman_cosine |
| value: 0.8291534166645268 |
| name: Spearman Cosine |
| - type: active_dims |
| value: 47.07070350646973 |
| name: Active Dims |
| - type: sparsity_ratio |
| value: 0.9984578106445688 |
| name: Sparsity Ratio |
| --- |
| |
| # splade-cocondenser-ensembledistil trained on STSb |
|
|
| This is a [SPLADE Sparse Encoder](https://www.sbert.net/docs/sparse_encoder/usage/usage.html) model finetuned from [naver/splade-cocondenser-ensembledistil](https://huggingface.co/naver/splade-cocondenser-ensembledistil) on the [stsb](https://huggingface.co/datasets/sentence-transformers/stsb) dataset using the [sentence-transformers](https://www.SBERT.net) library. It maps sentences & paragraphs to a 30522-dimensional sparse vector space and can be used for semantic search and sparse retrieval. |
|
| ## Model Details |
|
|
| ### Model Description |
| - **Model Type:** SPLADE Sparse Encoder |
| - **Base model:** [naver/splade-cocondenser-ensembledistil](https://huggingface.co/naver/splade-cocondenser-ensembledistil) <!-- at revision 25178a62708a3ab1b5c4b5eb30764d65bfddcfbb --> |
| - **Maximum Sequence Length:** 256 tokens |
| - **Output Dimensionality:** 30522 dimensions |
| - **Similarity Function:** Cosine Similarity |
| - **Training Dataset:** |
| - [stsb](https://huggingface.co/datasets/sentence-transformers/stsb) |
| - **Language:** en |
| - **License:** apache-2.0 |
|
|
| ### Model Sources |
|
|
| - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) |
| - **Documentation:** [Sparse Encoder Documentation](https://www.sbert.net/docs/sparse_encoder/usage/usage.html) |
| - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) |
| - **Hugging Face:** [Sparse Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=sparse-encoder) |
|
|
| ### Full Model Architecture |
|
|
| ``` |
| SparseEncoder( |
| (0): MLMTransformer({'max_seq_length': 256, 'do_lower_case': False}) with MLMTransformer model: BertForMaskedLM |
| (1): SpladePooling({'pooling_strategy': 'max', 'activation_function': 'relu', 'word_embedding_dimension': 30522}) |
| ) |
| ``` |
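|
| For intuition, the `SpladePooling` step can be sketched in a few lines of plain PyTorch: it applies a log-saturated ReLU to the MLM logits and takes the maximum over the sequence, yielding one 30522-dimensional sparse vector per input (`splade_pool` below is an illustrative name, not a library function): |
|
| ```python |
| import torch |
| |
| def splade_pool(mlm_logits: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor: |
|     # SPLADE "max" pooling with "relu" activation, as configured above: |
|     # w_j = max_i log(1 + relu(logit_ij)), with padding positions masked out |
|     activations = torch.log1p(torch.relu(mlm_logits)) |
|     activations = activations * attention_mask.unsqueeze(-1) |
|     return activations.max(dim=1).values |
| |
| # Toy shapes: batch of 2, sequence length 4, BERT vocabulary of 30522 terms |
| logits = torch.randn(2, 4, 30522) |
| mask = torch.ones(2, 4) |
| print(splade_pool(logits, mask).shape)  # torch.Size([2, 30522]) |
| ``` |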
|
|
| ## Usage |
|
|
| ### Direct Usage (Sentence Transformers) |
|
|
| First install the Sentence Transformers library: |
|
|
| ```bash |
| pip install -U sentence-transformers |
| ``` |
|
|
| Then you can load this model and run inference. |
| ```python |
| from sentence_transformers import SparseEncoder |
| |
| # Download from the 🤗 Hub |
| model = SparseEncoder("arthurbresnu/splade-cocondenser-ensembledistil-sts") |
| # Run inference |
| sentences = [ |
| 'While Queen may refer to both Queen regent (sovereign) or Queen consort, the King has always been the sovereign.', |
| 'There is a very good reason not to refer to the Queen\'s spouse as "King" - because they aren\'t the King.', |
| 'A man plays the guitar.', |
| ] |
| embeddings = model.encode(sentences) |
| print(embeddings.shape) |
| # [3, 30522] |
| |
| # Get the similarity scores for the embeddings |
| similarities = model.similarity(embeddings, embeddings) |
| print(similarities.shape) |
| # [3, 3] |
| ``` |
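|
| To see which vocabulary dimensions an embedding actually uses, you can inspect its largest entries. A small sketch continuing the snippet above, using only plain torch ops and the model's tokenizer (the `top_k` of 10 is arbitrary): |
|
| ```python |
| import torch |
| |
| # Densify the sparse batch, then look at the strongest terms of the first sentence |
| dense = embeddings.to_dense() if embeddings.is_sparse else embeddings |
| weights, ids = torch.topk(dense[0], k=10) |
| tokens = model.tokenizer.convert_ids_to_tokens(ids.tolist()) |
| for token, weight in zip(tokens, weights.tolist()): |
|     print(f"{token}\t{weight:.2f}") |
| ``` |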
|
|
| <!-- |
| ### Direct Usage (Transformers) |
|
|
| <details><summary>Click to see the direct usage in Transformers</summary> |
|
|
| </details> |
| --> |
|
|
| <!-- |
| ### Downstream Usage (Sentence Transformers) |
|
|
| You can finetune this model on your own dataset. |
|
|
| <details><summary>Click to expand</summary> |
|
|
| </details> |
| --> |
|
|
| <!-- |
| ### Out-of-Scope Use |
|
|
| *List how the model may foreseeably be misused and address what users ought not to do with the model.* |
| --> |
|
|
| ## Evaluation |
|
|
| ### Metrics |
|
|
| #### Semantic Similarity |
|
|
| * Datasets: `sts-dev` and `sts-test` |
| * Evaluated with [<code>SparseEmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseEmbeddingSimilarityEvaluator) |
|
|
| | Metric | sts-dev | sts-test | |
| |:--------------------|:-----------|:-----------| |
| | pearson_cosine | 0.876 | 0.8408 | |
| | **spearman_cosine** | **0.8704** | **0.8292** | |
| | active_dims | 49.3057 | 47.0707 | |
| | sparsity_ratio | 0.9984 | 0.9985 | |
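|
| These numbers can be re-checked with the same evaluator. A sketch, assuming the `datasets` library and Hub access (parameter names follow the evaluator documentation linked above): |
|
| ```python |
| from datasets import load_dataset |
| from sentence_transformers import SparseEncoder |
| from sentence_transformers.sparse_encoder.evaluation import SparseEmbeddingSimilarityEvaluator |
| |
| model = SparseEncoder("arthurbresnu/splade-cocondenser-ensembledistil-sts") |
| test_split = load_dataset("sentence-transformers/stsb", split="test") |
| |
| evaluator = SparseEmbeddingSimilarityEvaluator( |
|     sentences1=test_split["sentence1"], |
|     sentences2=test_split["sentence2"], |
|     scores=test_split["score"], |
|     name="sts-test", |
| ) |
| print(evaluator(model))  # pearson/spearman cosine plus sparsity statistics |
| ``` |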
| |
| <!-- |
| ## Bias, Risks and Limitations |
| |
| *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* |
| --> |
| |
| <!-- |
| ### Recommendations |
| |
| *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* |
| --> |
| |
| ## Training Details |
| |
| ### Training Dataset |
| |
| #### stsb |
| |
| * Dataset: [stsb](https://huggingface.co/datasets/sentence-transformers/stsb) at [ab7a5ac](https://huggingface.co/datasets/sentence-transformers/stsb/tree/ab7a5ac0e35aa22088bdcf23e7fd99b220e53308) |
| * Size: 5,749 training samples |
| * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code> |
| * Approximate statistics based on the first 1000 samples: |
| | | sentence1 | sentence2 | score | |
| |:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------| |
| | type | string | string | float | |
| | details | <ul><li>min: 6 tokens</li><li>mean: 10.0 tokens</li><li>max: 28 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 9.95 tokens</li><li>max: 25 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.45</li><li>max: 1.0</li></ul> | |
| * Samples: |
| | sentence1 | sentence2 | score | |
| |:-----------------------------------------------------------|:----------------------------------------------------------------------|:------------------| |
| | <code>A plane is taking off.</code> | <code>An air plane is taking off.</code> | <code>1.0</code> | |
| | <code>A man is playing a large flute.</code> | <code>A man is playing a flute.</code> | <code>0.76</code> | |
| | <code>A man is spreading shreded cheese on a pizza.</code> | <code>A man is spreading shredded cheese on an uncooked pizza.</code> | <code>0.76</code> | |
| * Loss: [<code>SpladeLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#spladeloss) with these parameters: |
| ```json |
| { |
| "loss": "SparseCosineSimilarityLoss(loss_fct='torch.nn.modules.loss.MSELoss')", |
| "lambda_corpus": 0.003 |
| } |
| ``` |
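|
| In code, this loss configuration corresponds roughly to the following sketch (the `lambda_corpus` keyword mirrors the JSON above and comes from the pre-release API used here; newer sentence-transformers releases may name this parameter differently): |
|
| ```python |
| from sentence_transformers import SparseEncoder |
| from sentence_transformers.sparse_encoder.losses import SpladeLoss, SparseCosineSimilarityLoss |
| |
| model = SparseEncoder("naver/splade-cocondenser-ensembledistil") |
| # SpladeLoss wraps the similarity loss and adds a FLOPS sparsity regularizer |
| loss = SpladeLoss( |
|     model=model, |
|     loss=SparseCosineSimilarityLoss(model),  # MSE on cosine similarity by default |
|     lambda_corpus=0.003,  # weight of the FLOPS regularization term |
| ) |
| ``` |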
| |
| ### Evaluation Dataset |
|
|
| #### stsb |
|
|
| * Dataset: [stsb](https://huggingface.co/datasets/sentence-transformers/stsb) at [ab7a5ac](https://huggingface.co/datasets/sentence-transformers/stsb/tree/ab7a5ac0e35aa22088bdcf23e7fd99b220e53308) |
| * Size: 1,500 evaluation samples |
| * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code> |
| * Approximate statistics based on the first 1000 samples: |
| | | sentence1 | sentence2 | score | |
| |:--------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------| |
| | type | string | string | float | |
| | details | <ul><li>min: 5 tokens</li><li>mean: 15.1 tokens</li><li>max: 45 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 15.11 tokens</li><li>max: 53 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.42</li><li>max: 1.0</li></ul> | |
| * Samples: |
| | sentence1 | sentence2 | score | |
| |:--------------------------------------------------|:------------------------------------------------------|:------------------| |
| | <code>A man with a hard hat is dancing.</code> | <code>A man wearing a hard hat is dancing.</code> | <code>1.0</code> | |
| | <code>A young child is riding a horse.</code> | <code>A child is riding a horse.</code> | <code>0.95</code> | |
| | <code>A man is feeding a mouse to a snake.</code> | <code>The man is feeding a mouse to the snake.</code> | <code>1.0</code> | |
| * Loss: [<code>SpladeLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#spladeloss) with these parameters: |
| ```json |
| { |
| "loss": "SparseCosineSimilarityLoss(loss_fct='torch.nn.modules.loss.MSELoss')", |
| "lambda_corpus": 0.003 |
| } |
| ``` |
|
|
| ### Training Hyperparameters |
| #### Non-Default Hyperparameters |
|
|
| - `eval_strategy`: steps |
| - `per_device_train_batch_size`: 16 |
| - `per_device_eval_batch_size`: 16 |
| - `learning_rate`: 4e-06 |
| - `num_train_epochs`: 1 |
| - `bf16`: True |
| - `batch_sampler`: no_duplicates |
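|
| Put together, a comparable run looks roughly like the sketch below (the trainer classes follow the sentence-transformers sparse-encoder API linked above; `output_dir` is a hypothetical path): |
|
| ```python |
| from datasets import load_dataset |
| from sentence_transformers import ( |
|     SparseEncoder, |
|     SparseEncoderTrainer, |
|     SparseEncoderTrainingArguments, |
| ) |
| from sentence_transformers.sparse_encoder.losses import SpladeLoss, SparseCosineSimilarityLoss |
| |
| model = SparseEncoder("naver/splade-cocondenser-ensembledistil") |
| train_dataset = load_dataset("sentence-transformers/stsb", split="train") |
| eval_dataset = load_dataset("sentence-transformers/stsb", split="validation") |
| loss = SpladeLoss(model=model, loss=SparseCosineSimilarityLoss(model), lambda_corpus=0.003) |
| |
| args = SparseEncoderTrainingArguments( |
|     output_dir="outputs",  # hypothetical |
|     eval_strategy="steps", |
|     per_device_train_batch_size=16, |
|     per_device_eval_batch_size=16, |
|     learning_rate=4e-6, |
|     num_train_epochs=1, |
|     bf16=True, |
|     batch_sampler="no_duplicates", |
| ) |
| trainer = SparseEncoderTrainer( |
|     model=model, |
|     args=args, |
|     train_dataset=train_dataset, |
|     eval_dataset=eval_dataset, |
|     loss=loss, |
| ) |
| trainer.train() |
| ``` |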
| |
| #### All Hyperparameters |
| <details><summary>Click to expand</summary> |
| |
| - `overwrite_output_dir`: False |
| - `do_predict`: False |
| - `eval_strategy`: steps |
| - `prediction_loss_only`: True |
| - `per_device_train_batch_size`: 16 |
| - `per_device_eval_batch_size`: 16 |
| - `per_gpu_train_batch_size`: None |
| - `per_gpu_eval_batch_size`: None |
| - `gradient_accumulation_steps`: 1 |
| - `eval_accumulation_steps`: None |
| - `torch_empty_cache_steps`: None |
| - `learning_rate`: 4e-06 |
| - `weight_decay`: 0.0 |
| - `adam_beta1`: 0.9 |
| - `adam_beta2`: 0.999 |
| - `adam_epsilon`: 1e-08 |
| - `max_grad_norm`: 1.0 |
| - `num_train_epochs`: 1 |
| - `max_steps`: -1 |
| - `lr_scheduler_type`: linear |
| - `lr_scheduler_kwargs`: {} |
| - `warmup_ratio`: 0.0 |
| - `warmup_steps`: 0 |
| - `log_level`: passive |
| - `log_level_replica`: warning |
| - `log_on_each_node`: True |
| - `logging_nan_inf_filter`: True |
| - `save_safetensors`: True |
| - `save_on_each_node`: False |
| - `save_only_model`: False |
| - `restore_callback_states_from_checkpoint`: False |
| - `no_cuda`: False |
| - `use_cpu`: False |
| - `use_mps_device`: False |
| - `seed`: 42 |
| - `data_seed`: None |
| - `jit_mode_eval`: False |
| - `use_ipex`: False |
| - `bf16`: True |
| - `fp16`: False |
| - `fp16_opt_level`: O1 |
| - `half_precision_backend`: auto |
| - `bf16_full_eval`: False |
| - `fp16_full_eval`: False |
| - `tf32`: None |
| - `local_rank`: 0 |
| - `ddp_backend`: None |
| - `tpu_num_cores`: None |
| - `tpu_metrics_debug`: False |
| - `debug`: [] |
| - `dataloader_drop_last`: False |
| - `dataloader_num_workers`: 0 |
| - `dataloader_prefetch_factor`: None |
| - `past_index`: -1 |
| - `disable_tqdm`: False |
| - `remove_unused_columns`: True |
| - `label_names`: None |
| - `load_best_model_at_end`: False |
| - `ignore_data_skip`: False |
| - `fsdp`: [] |
| - `fsdp_min_num_params`: 0 |
| - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} |
| - `tp_size`: 0 |
| - `fsdp_transformer_layer_cls_to_wrap`: None |
| - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} |
| - `deepspeed`: None |
| - `label_smoothing_factor`: 0.0 |
| - `optim`: adamw_torch |
| - `optim_args`: None |
| - `adafactor`: False |
| - `group_by_length`: False |
| - `length_column_name`: length |
| - `ddp_find_unused_parameters`: None |
| - `ddp_bucket_cap_mb`: None |
| - `ddp_broadcast_buffers`: False |
| - `dataloader_pin_memory`: True |
| - `dataloader_persistent_workers`: False |
| - `skip_memory_metrics`: True |
| - `use_legacy_prediction_loop`: False |
| - `push_to_hub`: False |
| - `resume_from_checkpoint`: None |
| - `hub_model_id`: None |
| - `hub_strategy`: every_save |
| - `hub_private_repo`: None |
| - `hub_always_push`: False |
| - `gradient_checkpointing`: False |
| - `gradient_checkpointing_kwargs`: None |
| - `include_inputs_for_metrics`: False |
| - `include_for_metrics`: [] |
| - `eval_do_concat_batches`: True |
| - `fp16_backend`: auto |
| - `push_to_hub_model_id`: None |
| - `push_to_hub_organization`: None |
| - `mp_parameters`: |
| - `auto_find_batch_size`: False |
| - `full_determinism`: False |
| - `torchdynamo`: None |
| - `ray_scope`: last |
| - `ddp_timeout`: 1800 |
| - `torch_compile`: False |
| - `torch_compile_backend`: None |
| - `torch_compile_mode`: None |
| - `dispatch_batches`: None |
| - `split_batches`: None |
| - `include_tokens_per_second`: False |
| - `include_num_input_tokens_seen`: False |
| - `neftune_noise_alpha`: None |
| - `optim_target_modules`: None |
| - `batch_eval_metrics`: False |
| - `eval_on_start`: False |
| - `use_liger_kernel`: False |
| - `eval_use_gather_object`: False |
| - `average_tokens_across_devices`: False |
| - `prompts`: None |
| - `batch_sampler`: no_duplicates |
| - `multi_dataset_batch_sampler`: proportional |
|
|
| </details> |
|
|
| ### Training Logs |
| | Epoch | Step | Training Loss | Validation Loss | sts-dev_spearman_cosine | sts-test_spearman_cosine | |
| |:------:|:----:|:-------------:|:---------------:|:-----------------------:|:------------------------:| |
| | -1 | -1 | - | - | 0.8366 | - | |
| | 0.2778 | 100 | 0.0298 | 0.0267 | 0.8631 | - | |
| | 0.5556 | 200 | 0.0306 | 0.0264 | 0.8686 | - | |
| | 0.8333 | 300 | 0.0289 | 0.0257 | 0.8704 | - | |
| | -1 | -1 | - | - | - | 0.8292 | |
|
|
|
|
| ### Environmental Impact |
| Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon). |
| - **Energy Consumed**: 0.002 kWh |
| - **Carbon Emitted**: ~0.005 g of CO2eq (rounds to 0.000 kg) |
| - **Hours Used**: 0.016 hours |
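|
| These figures come from wrapping the training run in a CodeCarbon tracker, roughly as in the sketch below (the tracker estimates energy use and CO2eq for whatever runs between `start()` and `stop()`): |
|
| ```python |
| from codecarbon import EmissionsTracker |
| |
| tracker = EmissionsTracker() |
| tracker.start() |
| # ... run the fine-tuning here ... |
| emissions_kg = tracker.stop()  # estimated kg of CO2eq |
| print(f"{emissions_kg:.6f} kg CO2eq") |
| ``` |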
|
|
| ### Training Hardware |
| - **On Cloud**: No |
| - **GPU Model**: 1 x NVIDIA GeForce RTX 3070 Ti Laptop GPU |
| - **CPU Model**: AMD Ryzen 9 6900HX with Radeon Graphics |
| - **RAM Size**: 30.61 GB |
|
|
| ### Framework Versions |
| - Python: 3.12.9 |
| - Sentence Transformers: 4.2.0.dev0 |
| - Transformers: 4.50.3 |
| - PyTorch: 2.6.0+cu124 |
| - Accelerate: 1.6.0 |
| - Datasets: 3.5.0 |
| - Tokenizers: 0.21.1 |
|
|
| ## Citation |
|
|
| ### BibTeX |
|
|
| #### Sentence Transformers |
| ```bibtex |
| @inproceedings{reimers-2019-sentence-bert, |
| title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", |
| author = "Reimers, Nils and Gurevych, Iryna", |
| booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", |
| month = "11", |
| year = "2019", |
| publisher = "Association for Computational Linguistics", |
| url = "https://arxiv.org/abs/1908.10084", |
| } |
| ``` |
|
|
| #### SpladeLoss |
| ```bibtex |
| @misc{formal2022distillationhardnegativesampling, |
| title={From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective}, |
| author={Thibault Formal and Carlos Lassance and Benjamin Piwowarski and Stéphane Clinchant}, |
| year={2022}, |
| eprint={2205.04733}, |
| archivePrefix={arXiv}, |
| primaryClass={cs.IR}, |
| url={https://arxiv.org/abs/2205.04733}, |
| } |
| ``` |
|
|
| #### FlopsLoss |
| ```bibtex |
| @article{paria2020minimizing, |
| title={Minimizing flops to learn efficient sparse representations}, |
| author={Paria, Biswajit and Yeh, Chih-Kuan and Yen, Ian EH and Xu, Ning and Ravikumar, Pradeep and P{\'o}czos, Barnab{\'a}s}, |
| journal={arXiv preprint arXiv:2004.05665}, |
| year={2020} |
| } |
| ``` |
|
|
| <!-- |
| ## Glossary |
|
|
| *Clearly define terms in order to be accessible across audiences.* |
| --> |
|
|
| <!-- |
| ## Model Card Authors |
|
|
| *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* |
| --> |
|
|
| <!-- |
| ## Model Card Contact |
|
|
| *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* |
| --> |