Coding Assistant Adapter

This repository provides a LoRA adapter for Qwen/Qwen2.5-Coder-7B-Instruct, developed for Python coding assistance. The adapter is intended to support code-focused questions in a repository-grounded setting.

Model Details

  • Base model: Qwen/Qwen2.5-Coder-7B-Instruct
  • Adapter type: LoRA
  • Task type: Causal language modeling
  • Primary use case: Code assistance in repository-grounded settings

Adapter Configuration

  • Rank (r): 16
  • LoRA alpha: 16
  • LoRA dropout: 0.05
  • Target modules: q_proj, k_proj, v_proj, o_proj
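With these settings the adapter adds a rank-16 update to each targeted attention projection, scaled by alpha/r = 16/16 = 1.0. The following is a minimal sketch of that arithmetic using small illustrative dimensions (the real q_proj/k_proj/v_proj/o_proj shapes come from the base model's config, not from these numbers):

```python
import numpy as np

# Hypothetical dimensions for illustration only.
d_in, d_out, r, alpha = 64, 64, 16, 16

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))     # frozen base projection weight
A = rng.standard_normal((r, d_in)) * 0.01  # LoRA down-projection (trainable)
B = np.zeros((d_out, r))                   # LoRA up-projection, zero-initialized

scaling = alpha / r                        # 16 / 16 = 1.0
delta_W = scaling * (B @ A)                # low-rank update, rank <= r

x = rng.standard_normal(d_in)
y = W @ x + delta_W @ x                    # adapted forward pass

# With B zero-initialized, the adapter is a no-op before training.
assert np.allclose(y, W @ x)
```

The LoRA dropout (0.05) is applied to the adapter path during training only; at inference the update is deterministic as shown above.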

Notes

This artifact contains the adapter weights only and is not a standalone model. It must be loaded together with the corresponding base model.

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "Qwen/Qwen2.5-Coder-7B-Instruct"
adapter_path = "path/to/adapter"

# Load the tokenizer shipped with the adapter and the base model.
tokenizer = AutoTokenizer.from_pretrained(adapter_path)
base_model = AutoModelForCausalLM.from_pretrained(base_model_id, device_map="auto")

# Attach the LoRA adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(base_model, adapter_path)

# Example: chat-style generation.
messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))