# Coding Assistant Adapter
This repository provides a LoRA adapter for Qwen/Qwen2.5-Coder-7B-Instruct, developed for Python coding assistance. The adapter is intended to answer code-focused questions in a repository-grounded setting.
## Model Details

- Base model: Qwen/Qwen2.5-Coder-7B-Instruct
- Adapter type: LoRA
- Task type: Causal language modeling
- Primary use case: Code assistance in repository-grounded settings
## Adapter Configuration

- Rank (`r`): 16
- LoRA alpha: 16
- LoRA dropout: 0.05
- Target modules: `q_proj`, `k_proj`, `v_proj`, `o_proj`
## Notes
This artifact contains the adapter weights only and is not a standalone model. It must be loaded together with the corresponding base model.
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "Qwen/Qwen2.5-Coder-7B-Instruct"
adapter_path = "path/to/adapter"  # replace with the local or Hub path to this adapter

# Load the tokenizer from the adapter directory so any tokenizer files
# saved alongside the adapter are picked up.
tokenizer = AutoTokenizer.from_pretrained(adapter_path)

# Load the base model first, then attach the LoRA adapter on top of it.
base_model = AutoModelForCausalLM.from_pretrained(base_model_id)
model = PeftModel.from_pretrained(base_model, adapter_path)
```
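Once the adapter is attached, generation works the same as with the base model. The snippet below is a minimal sketch that continues from the loading code above (it assumes `model` and `tokenizer` are already in scope; the prompt text is illustrative):

```python
# Sketch: one-turn generation with the adapted model.
messages = [
    {"role": "user", "content": "Write a Python function that reverses a string."},
]

# Build the chat-formatted prompt expected by the instruct model.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens, skipping the prompt.
reply = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(reply)
```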