LLM
Collection
25 items • Updated
https://huggingface.co/mradermacher/Qwen3-4B-Thinking-2509-Genius-Coder-AI-Full-GGUF
This qwen3 model was trained 2x faster with Unsloth and Huggingface's TRL library.
import torch
import re
from transformers import AutoTokenizer, AutoModelForCausalLM
# ---------------- Config ----------------
MODEL_NAME = "rahul7star/Qwen3-4B-Thinking-2509-Genius-Coder-AI-Full"
# Patch prompt: explicitly tell the model to provide reasoning AND final answer
PROMPT = (
"write a python code using pytorch for a simple neural network demo?\n\n"
"Please provide step-by-step reasoning inside <think> tags, AND then output the final Python code after </think>."
)
# ---------------- Load Tokenizer & Model ----------------
print("Loading tokenizer...")
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, trust_remote_code=True)
print("Loading model...")
model = AutoModelForCausalLM.from_pretrained(
MODEL_NAME,
torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
device_map="auto",
trust_remote_code=True,
low_cpu_mem_usage=True
)
if tokenizer.pad_token_id is None:
tokenizer.pad_token = tokenizer.eos_token
print("✅ Model loaded on device:", model.device)
# ---------------- Prepare Messages ----------------
messages = [
{"role": "user", "content": PROMPT}
]
# Apply chat template
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
input_length = inputs.input_ids.shape[-1]
# ---------------- Generate ----------------
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True, # required for Thinking models
temperature=0.9, # encourage variation
top_p=0.95,
repetition_penalty=1.05,
use_cache=True
)
# ---------------- Decode ----------------
# Raw output
raw_output = tokenizer.decode(outputs[0])
print("\n--- RAW OUTPUT ---")
print(raw_output)
# Only new tokens
generated_tokens = outputs[0][input_length:]
sliced_output = tokenizer.decode(generated_tokens, skip_special_tokens=False)
print("\n--- SLICED OUTPUT ---")
print(sliced_output)
# Remove <think> for clarity
clean_output = re.sub(r"<think>.*?</think>", "", sliced_output, flags=re.DOTALL).strip()
print("\n--- CLEANED OUTPUT (after <think> removed) ---")
print(clean_output)
--- RAW OUTPUT ---
<|im_start|>user
write a python code using pytorch for a simple neural network demo?
Please provide step-by-step reasoning inside <think> tags, AND then output the final Python code after </think>.<|im_end|>
<|im_start|>assistant
<think>
**Planning the step-by-step**
I need to create a simple PyTorch neural network. The plan is: import torch and torch.nn; define a neural network with one input layer, one hidden layer (ReLU activation), one output layer, set up random data, train the network using gradient descent and a loss function (MSE), evaluate its performance on new data. Code will be written in a single script following this logic.
</think>
```python
import torch
import torch.nn as nn
import numpy as np
from torch.utils.data import Dataset, DataLoader
# Define a simple dataset
class SimpleDataset(Dataset):
def __init__(self, n_samples=100):
self.n = n_samples
# random x between -1 and 1
self.x = np.linspace(-1, 1, self.n).reshape((-1, 1))
self.y = 2 * self.x + np.random.normal(0, 0.1, self.n)
def __len__(self):
return len(self.x)
def __getitem__(self, idx):
return torch.tensor(self.x[idx], dtype=torch.float32), \
torch.tensor(self.y[idx], dtype=torch.float32)
# Set up data
n_train = 60
n_val = 20
n_test = 20
train_data = SimpleDataset(n_samples=n_train)
val_data = SimpleDataset(n_samples=n_val)
test_data = SimpleDataset(n_samples=n_test)
batch_size = 4
train_loader = DataLoader(train_data, batch_size=batch_size, shuffle=True)
val_loader = DataLoader(val_data, batch_size=batch_size)
test_loader = DataLoader(test_data, batch_size=batch_size)
# Define the neural net
class Net(nn.Module):
def __init__(self):
super().__init__()
self.linear1 = nn.Linear(1, 64) # input size=1
self.relu = nn.ReLU()
self.linear2 = nn.Linear(64, 32)
self.linear3 = nn.Linear(32, 1)
def forward(self, x):
x = self.linear1(x)
x = self.relu(x)
x = self.linear2(x)
x = self.relu(x)
x = self.linear3(x)
return x
net = Net()
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(net.parameters(), lr=0.01)
--- CLEANED OUTPUT (after <think> removed) ---
https://huggingface.co/spaces/rahul7star/Qwen3-4B-Thinking-2507-Genius-Coder-GGUF-UD-Q4_K_M
Samle question asked
explain what are these
Qwen3.5-27B-Q2_K.gguf
10.5 GB
xet
Upload folder using huggingface_hub
1 day ago
Qwen3.5-27B-Q3_K_M.gguf
13.5 GB
xet
Upload folder using huggingface_hub
1 day ago
Qwen3.5-27B-Q3_K_S.gguf
12.3 GB
xet
Upload folder using huggingface_hub
1 day ago
Qwen3.5-27B-Q4_0.gguf
15.7 GB
xet
Upload folder using huggingface_hub
1 day ago
Qwen3.5-27B-Q4_1.gguf
17.2 GB
xet
Upload folder using huggingface_hub
1 day ago
Qwen3.5-27B-Q4_K_M.gguf
16.7 GB
xet
Upload folder using huggingface_hub
1 day ago
Qwen3.5-27B-Q4_K_S.gguf
15.8 GB
xet
Upload folder using huggingface_hub
1 day ago
Qwen3.5-27B-Q5_K_M.gguf
19.6 GB
xet
Upload folder using huggingface_hub
1 day ago
Qwen3.5-27B-Q5_K_S.gguf
18.9 GB
xet
Upload folder using huggingface_hub
1 day ago
Qwen3.5-27B-Q6_K.gguf
22.5 GB
xet
Upload folder using huggingface_hub
1 day ago
Qwen3.5-27B-Q8_0.gguf
28.6 GB
xet
Upload folder using huggingface_hub
1 day ago
Qwen3.5-27B-UD-Q4_K_XL.gguf
16.7 GB
xet
Upload folder using huggingface_hub
1 day ago
Qwen3.5-27B-UD-Q5_K_XL.gguf
19.6 GB
xet
Upload folder using huggingface_hub
1 day ago
Qwen3.5-27B-UD-Q6_K_XL.gguf
23.1 GB
xet
Upload folder using huggingface_hub
1 day ago
Qwen3.5-27B-UD-Q8_K_XL.gguf
<<<reasoning_content_start>>>**Explaining Qwen3.5-27B variants**
These are quantized versions of Qwen3.5-27B, a 27 billion parameter language model, with different precision levels (Q2 to Q8 and "UD") that reduce size and speed. Q2/Q3/K variants offer smaller files (10–12 GB) for fast inference but lower accuracy, while Q5/Q6/Q8/XL variants balance size/accuracy. "UD" indicates mixed-precision quantization for better quality. Each ends with `.gguf`, a format used by tools like llama.cpp. The "K" or "S" suffix denotes quantization strategy (K for K-Means, S for Sparse), and "XL" applies to quantized models with additional optimization. These files enable efficient deployment on limited hardware.<<<reasoning_content_end>>>In the Qwen3.5-27B GitHub repository, those are quantized weights for Qwen3.5-27B, the 27B parameter model from Qwen3.5, packaged in GGUF format for use with tools like llama.cpp.
- **Q2–Q8**: Quantization precision levels: lower numbers mean less precision, smaller model size, and faster inference.
- **K**: K-Means quantization for better accuracy and smoothness.
- **S**: Sparse quantization (some weights are zero).
- **XL**: Mixed-precision quantization ("UD" = Unstructured, Q4_K_XL, Q5_K_XL, Q6_K_XL, Q8_K_XL) has higher accuracy and lower precision than Q4_0/K, Q5_0/K, Q6_K, Q8_0.
- **GGUF**: A standard quantized format used by llama.cpp.
The numbers (Q2, Q3, Q4, Q5, Q6, Q8) refer to the bits per weight; Q2=2, Q3=3, Q4=4, Q5=5, Q6=6, Q8=8. The model can be run at different precisions (e.g., Q4_K_M gives a smaller file than Q4_K, Q4_S is even smaller but less accurate). The “-UD” suffix indicates a quantization variant with higher accuracy than the basic Qn_K. For example:
- Q2_K: 10.5 GB – minimal precision, very fast but lowest accuracy
- Q4_K: 16–17 GB – good balance for most use cases
- Q8_K: 28 GB – near full precision
- Q4_K_XL: 16–19 GB – optimized accuracy at Q4 level
These files can be used with `llama.cpp` or other GGUF-aware frameworks to run Qwen3.5 on devices with limited RAM.
<<<reasoning_content_start>>>**Finalizing a clean table for readability**
The final table provides a clean, structured overview of quantized Qwen3.5-27B GGUF files, showing quantization levels, precision, accuracy trade-offs, and approximate RAM usage per precision level for quick comparison. It clearly distinguishes between Qn_K variants and the “UD”-prefixed variants with improved accuracy, and orders entries by size and accuracy, highlighting the best balance points for different needs.<<<reasoning_content_end>>>Here's a clean, tabular breakdown of the quantized Qwen3.5-27B GGUF files for quick comparison:
| Quantization Format | Description | Size | RAM Usage | Accuracy Notes | Best For |
|------------------------|---------------------------------|----------|---------------|--------------------------------------------|-----------------------------|
| `Q2_K` | 2-bit, dense quantization | 10.5 GB | Very low | Lowest accuracy, fastest inference | Minimal RAM, speed focus |
| `Q3_K_M` | 3-bit, dense | 13.5 GB | Low | Good speed/accuracy balance | Lightweight use cases |
| `Q3_K_S` | 3-bit, sparse | 12.3 GB | Low | Slightly lower accuracy than Q3_K_M | Lower memory than Q3_K_M |
| `Q4_0` | 4-bit, dense | 15.7 GB | Medium | Standard precision | General use |
| `Q4_K_M` | 4-bit, dense + K-means | 16.7 GB | Medium | Balanced accuracy/speed | Most common trade-off |
| `Q4_K_S` | 4-bit, sparse | 15.8 GB | Medium | Slightly lower accuracy than Q4_K_M | Lower RAM than Q4_K_M |
| `Q4_K_XL` | 4-bit, dense + mixed precision | 16.7 GB | Medium | Higher accuracy than Q4_0/K, faster than Q4_K_XL | Better quality at Q4 level |
| `Q5_K_M` | 5-bit, dense | 19.6 GB | High | High accuracy, larger file | High quality tasks |
| `Q5_K_S` | 5-bit, sparse | 18.9 GB | High | Slightly lower accuracy than Q5_K_M | Memory-limited tasks |
| `Q5_K_XL` | 5-bit, mixed precision (UD) | 19.6 GB | High | Improved accuracy over Q5_0/K | Quality-focused use |
| `Q6_K` | 6-bit, dense | 22.5 GB | High | Near full precision | Balanced quality/size |
| `Q6_K_XL` | 6-bit, mixed precision (UD) | 23.1 GB | High | Best Q6 accuracy | High quality |
| `Q8_0` | 8-bit, dense | 28.6 GB | Full precision | Near full model precision | Full accuracy |
| `Q8_K_XL` | 8-bit, mixed precision (UD) | 28.6 GB | Full precision | Highest accuracy for 8-bit | Best 8-bit quality |
### Notes
- **Qn_K**: `K` = K-means clustering for smoother weights, better accuracy than basic `Qn_0`.
- **Qn_S**: Sparse quantization (some zeros), smaller size but lower accuracy than `Qn_K`.
- **Qn_K_XL/UD**: "Unstructured" quantization variants (`-UD`) have better accuracy than standard `Qn_K`.
- **RAM**: Numbers are rough estimates; actual usage depends on your system and inference setup.
- **Trade-offs**: Lower `Qn` = smaller file, faster inference; higher `Qn` = better accuracy, more RAM.
This format highlights the best options for speed vs. quality trade-offs without technical overload.
Base model
rikunarita/Qwen3-4B-Thinking-2507-Genius