---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- Roman0/Qwen3-4B-Instruct-2507-heretic
- Jackrong/GPT-5-Distill-Qwen3-4B-Instruct
- thinkwee/NOVER1-Qwen3-4B
---

# Qwen3-4B-Exp

Qwen3-4B-Exp is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [Roman0/Qwen3-4B-Instruct-2507-heretic](https://huggingface.co/Roman0/Qwen3-4B-Instruct-2507-heretic)
* [Jackrong/GPT-5-Distill-Qwen3-4B-Instruct](https://huggingface.co/Jackrong/GPT-5-Distill-Qwen3-4B-Instruct)
* [thinkwee/NOVER1-Qwen3-4B](https://huggingface.co/thinkwee/NOVER1-Qwen3-4B)

## 🧩 Configuration

```yaml
models:
  - model: Roman0/Qwen3-4B-Instruct-2507-heretic
    parameters:
      density: 0.3
      weight: 0.3
  - model: Jackrong/GPT-5-Distill-Qwen3-4B-Instruct
    parameters:
      density: 0.4
      weight: 0.4
  - model: thinkwee/NOVER1-Qwen3-4B
    parameters:
      density: 0.4
      weight: 0.4
merge_method: ties
base_model: Roman0/Qwen3-4B-Instruct-2507-heretic
parameters:
  normalize: true
dtype: float16
```
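For intuition on what `merge_method: ties` with `density`, `weight`, and `normalize` does, here is a minimal toy sketch of the TIES idea on plain NumPy arrays: trim each model's task vector (its delta from the base) to the top-`density` fraction of entries by magnitude, elect a per-parameter sign by weighted majority, then average only the deltas that agree with that sign. This is an illustrative simplification, not mergekit's actual implementation, and `ties_merge` is a hypothetical helper name.

```python
import numpy as np

def ties_merge(base, tuned, densities, weights, normalize=True):
    """Toy TIES merge: trim task vectors, elect signs, merge agreeing deltas."""
    deltas = [t - base for t in tuned]
    # Trim: keep only the top `density` fraction of entries by magnitude.
    trimmed = []
    for d, dens in zip(deltas, densities):
        k = max(1, int(round(dens * d.size)))
        thresh = np.sort(np.abs(d).ravel())[-k]
        trimmed.append(np.where(np.abs(d) >= thresh, d, 0.0))
    # Elect: per-parameter sign of the weighted sum of trimmed deltas.
    elected = np.sign(sum(w * t for w, t in zip(weights, trimmed)))
    # Merge: combine only the deltas whose sign agrees with the elected sign.
    num = np.zeros_like(base, dtype=float)
    den = np.zeros_like(base, dtype=float)
    for w, t in zip(weights, trimmed):
        mask = (np.sign(t) == elected) & (t != 0)
        num += np.where(mask, w * t, 0.0)
        den += np.where(mask, w, 0.0)
    if normalize:
        # Divide by the total weight of agreeing models (mergekit's normalize).
        merged = np.where(den > 0, num / np.where(den > 0, den, 1.0), 0.0)
    else:
        merged = num
    return base + merged
```

With two toy "models" that agree on one parameter and disagree on another, the agreeing parameter survives while the conflicting one cancels to the base value; in the real merge the same logic runs tensor-by-tensor over the three checkpoints listed above.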