Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch
Paper: arXiv:2311.03099
This is a merge of pre-trained language models created using mergekit.
This model was merged using the DARE TIES merge method, with N-Bot-Int/MaidEllaA-1B as the base.
The following models were included in the merge:

- NickyNicky/Llama-1B-base-GRPO-miniThinky_v1
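DARE, the first half of `dare_ties`, keeps only a random fraction of each fine-tuned model's parameter deltas and rescales the survivors so the expected delta is preserved. A minimal NumPy sketch of that step (the function name and toy tensors are illustrative, not mergekit's API; `density=0.11` mirrors the value used in the configuration below):

```python
import numpy as np

def dare_sparsify(base, finetuned, density=0.11, seed=0):
    """DARE (arXiv:2311.03099), sketched: drop a random (1 - density)
    share of the delta weights, then rescale the survivors by 1/density
    so the expected delta is unchanged."""
    rng = np.random.default_rng(seed)
    delta = finetuned - base
    keep = rng.random(delta.shape) < density  # keep ~11% of delta entries
    return base + (delta * keep) / density

base = np.zeros(100_000)
finetuned = base + 1.0                 # uniform delta of 1.0
merged = dare_sparsify(base, finetuned)
print(f"{merged.mean():.2f}")          # mean delta stays close to 1.0
```

Only about 11% of the delta entries survive, but because each survivor is scaled by 1/0.11, the merged model still receives roughly the full behavioral signal in expectation.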
The following YAML configuration was used to produce this model:
```yaml
# =====================================================
# Project: Penetrator-3.2-1B
# Objective:
#   Keep MaidElla coherent while injecting
#   GRPO-miniThinky_v1
# Philosophy:
#   "Tiny incision, not brain damage."
# =====================================================

base_model: N-Bot-Int/MaidEllaA-1B
merge_method: dare_ties
dtype: bfloat16
out_dtype: float16

models:
  # =================================================
  # PRIMARY BRAIN
  # =================================================
  - model: N-Bot-Int/MaidEllaA-1B
    parameters:
      weight:
        # Preserve language structure
        - filter: self_attn
          value: 1.02
        # Slightly reinforce RP behavior
        - filter: mlp
          value: 1.04
        # Global anchor
        - value: 1.0

  # =================================================
  # GRPO MICRO-INJECTION
  # =================================================
  - model: NickyNicky/Llama-1B-base-GRPO-miniThinky_v1
    parameters:
      weight:
        # ONLY touch behavioral MLPs lightly;
        # higher values already proved unstable
        - filter: mlp
          value: 0.045
        # Extremely conservative global influence
        - value: 0.018

# =====================================================
# GLOBAL PARAMETERS
# =====================================================
parameters:
  # Extremely conservative density:
  # enough to influence behavior
  # without liquefying semantics
  density: 0.11
  # Critical for tiny models;
  # normalize=true caused collapse before
  normalize: false
  int8_mask: false

# =====================================================
# TOKENIZER SAFETY
# =====================================================
tokenizer_source: N-Bot-Int/MaidEllaA-1B

# =====================================================
# EMBEDDINGS
# =====================================================
tie_word_embeddings: true
tie_output_embeddings: true
```
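The TIES half of `dare_ties` resolves sign conflicts between the weighted deltas before they are summed. A conceptual sketch (simplified; `ties_sign_merge` is illustrative, not mergekit's actual implementation — the second weight reuses the 0.018 global value from the configuration above):

```python
import numpy as np

def ties_sign_merge(deltas, weights):
    """TIES-style merge, sketched: elect a majority sign per parameter,
    zero out delta components that disagree, then average the rest."""
    stacked = np.stack([w * d for w, d in zip(weights, deltas)])
    elected = np.sign(stacked.sum(axis=0))     # majority sign per parameter
    agree = np.sign(stacked) == elected        # mask disagreeing components
    kept = np.where(agree, stacked, 0.0)
    counts = np.maximum(agree.sum(axis=0), 1)  # avoid divide-by-zero
    return kept.sum(axis=0) / counts

# Two toy deltas: they agree on the first parameter, conflict on the second.
merged = ties_sign_merge(
    [np.array([1.0, -1.0]), np.array([2.0, 1.0])],
    weights=[1.0, 0.018],
)
print(merged)  # conflicting component from the low-weight model is dropped
```

In mergekit these two steps are combined by `merge_method: dare_ties`: `density` controls the DARE drop rate, while the per-`filter` `weight` values scale each model's deltas before the sign election, which is why the GRPO model's tiny weights inject behavior without overriding the base.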