⚠️ **REQUIRED — the `jangtq_runtime.safetensors` sidecar must be downloaded**

Osaurus uses the native Swift JANGTQ runtime. Every JANGTQ bundle on OsaurusAI ships a small `jangtq_runtime.safetensors` sidecar (10 KB–165 KB) alongside the weight shards. If the file is absent, the Swift loader will refuse to start with the error:

```
Error: Model '<name>' declares JANGTQ (weight_format: "mxtq") but is missing required sidecar file 'jangtq_runtime.safetensors'. Re-download the full model or obtain the sidecar from the original publisher.
```
If your local copy doesn't have it (older download, partial sync, etc.):
```
hf download OsaurusAI/Laguna-XS.2-JANGTQ jangtq_runtime.safetensors --local-dir <your-dir>
```

The file holds the deterministic codebooks + Hadamard rotation signs the Swift loader uses to decode the `*.tq_packed` weights. It must match the seed the bundle was quantized with (`mxtq_seed=42`).
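If you want to check a local copy programmatically, here is a minimal Python sketch. The `mxtq_seed` metadata key is an assumption (the sidecar's internal layout isn't documented here), so treat it as illustrative.

```python
# Minimal sketch: verify a local JANGTQ bundle before launching the runtime.
# ASSUMPTION: the sidecar records its seed in the safetensors header metadata
# under a key named "mxtq_seed"; the real key name may differ.
from pathlib import Path
from safetensors import safe_open

def check_sidecar(model_dir: str, expected_seed: int = 42) -> None:
    sidecar = Path(model_dir) / "jangtq_runtime.safetensors"
    if not sidecar.is_file():
        raise FileNotFoundError(
            f"{sidecar} is missing; re-download it with:\n"
            f"  hf download OsaurusAI/Laguna-XS.2-JANGTQ "
            f"jangtq_runtime.safetensors --local-dir {model_dir}"
        )
    with safe_open(str(sidecar), framework="np") as f:
        meta = f.metadata() or {}
        seed = int(meta.get("mxtq_seed", -1))
        if seed != expected_seed:
            raise ValueError(f"sidecar seed {seed} != expected {expected_seed}")
```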
# OsaurusAI/Laguna-XS.2-JANGTQ
Quantized poolside Laguna-XS.2 for Apple Silicon (MLX): an agentic-coding Mixture-of-Experts with 33B total / 3B active parameters.
|                     |                                                                                      |
| ------------------- | ------------------------------------------------------------------------------------ |
| Source              | poolside/Laguna-XS.2                                                                  |
| Architecture        | laguna (40 layers, 256 routed experts top-8 + 1 shared, hybrid SWA + full attention)  |
| Quant format        | JANGTQ (TurboQuant 2-bit, Hadamard pre-rotation, group_size=64)                       |
| Bundle size on disk | 10.10 GB (10 safetensors shards)                                                      |
| License             | Apache-2.0 (inherits from upstream)                                                   |
| Modalities          | Text in / text out (no vision, no audio, no video)                                    |
## What's quantized
- Routed-expert linears (39 layers × {gate_up_proj, down_proj} stacked across all 256 experts) → TurboQuant 2-bit with Hadamard rotation (see the sketch after this list)
- Attention projections (q/k/v/o/g_proj), shared-expert FFN, layer-0 dense FFN, embed_tokens, lm_head → affine 8-bit (`mx.quantize`)
- All RMSNorms (input/post/q_norm/k_norm) + router gate + `e_score_correction_bias` → fp16 passthrough
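For intuition about the 2-bit path, here is a toy NumPy sketch of group-wise 2-bit quantization behind a Hadamard pre-rotation. It matches the published shape (group_size=64, four levels per weight) but is not the production JANGTQ/TurboQuant kernel, which uses the seeded codebooks and rotation signs stored in the sidecar.

```python
# Toy sketch of 2-bit group quantization with a Hadamard pre-rotation.
# Illustrative only: the real JANGTQ kernel uses seeded codebooks, not
# the uniform 4-level grid below.
import numpy as np

def hadamard(n: int) -> np.ndarray:
    """Sylvester construction; n must be a power of two. Symmetric, orthonormal."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(n)

def quantize_2bit(w: np.ndarray, group_size: int = 64):
    """Flat weights -> 2-bit codes in {0..3} plus one fp scale per group."""
    H = hadamard(group_size)
    g = w.reshape(-1, group_size) @ H              # rotation spreads outliers
    scale = np.abs(g).max(axis=1, keepdims=True) / 1.5 + 1e-12
    codes = np.clip(np.round(g / scale + 1.5), 0, 3).astype(np.uint8)
    return codes, scale                            # levels: {-1.5,-0.5,0.5,1.5}*scale

def dequantize_2bit(codes, scale, group_size: int = 64):
    H = hadamard(group_size)
    g = (codes.astype(np.float32) - 1.5) * scale
    return (g @ H).reshape(-1)                     # symmetric orthonormal H is its own inverse

w = np.random.randn(4096).astype(np.float32)
codes, scale = quantize_2bit(w)
print("mean abs error:", np.abs(w - dequantize_2bit(codes, scale)).mean())
```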
## Architecture notes (preserved verbatim from upstream)
- 40 layers; per-layer attention head count alternates 48 (full-attn) / 64 (SWA), with 8 shared KV heads (GQA)
- 1:3 ratio of full-attn ↔ sliding-window attention (window = 512), explicit `layer_types` list
- Dual RoPE: full-attn = YaRN (base 500K, factor 32, original 4096, β_fast 64, β_slow 1, partial_rotary 0.5); SWA = default (base 10K, full rotary)
- 256 routed experts (top-8) + 1 shared expert; sigmoid + per-head gating (`g_proj`); `q_norm`/`k_norm` in attention (see the toy routing sketch after this list)
- 131K context window
- Layer 0 dense MLP; layers 1-39 sparse MoE
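To make the routing concrete, here is a toy sketch of sigmoid top-8 selection with an always-on shared expert. It follows the common convention that `e_score_correction_bias` steers which experts are selected but not the gate values; whether Laguna uses exactly this convention is an assumption.

```python
# Toy sketch of sigmoid top-k routing with a bias-corrected selection and one
# always-on shared expert. Shapes follow the card (256 routed experts, top-8);
# the selection convention is an assumption, not Laguna's actual source.
import numpy as np

def route(x, router_w, e_score_correction_bias, k=8):
    """x: (d,) token; router_w: (256, d); bias: (256,). Returns ids and gates."""
    scores = 1.0 / (1.0 + np.exp(-(router_w @ x)))            # sigmoid, not softmax
    topk = np.argsort(scores + e_score_correction_bias)[-k:]  # bias affects selection only
    gates = scores[topk] / scores[topk].sum()                 # gates from raw scores
    return topk, gates

def moe_forward(x, experts, shared_expert, router_w, bias, k=8):
    topk, gates = route(x, router_w, bias, k)
    y = shared_expert(x)                                      # shared expert always runs
    for e, g in zip(topk, gates):
        y = y + g * experts[e](x)
    return y
```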
## Run on Apple Silicon
```bash
pip install mlx safetensors transformers

python -m jang_tools.laguna.runtime \
  --src ~/.mlxstudio/models/OsaurusAI/Laguna-XS.2-JANGTQ \
  --prompt "def fibonacci(n):" --max-new 64
```
The runtime auto-detects `weight_format` (`mxtq` / `mxfp4` / `bf16`) and loads the matching code path (e.g. `jang_tools/laguna/weight_loader_bf16.py` for bf16).
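Roughly, that dispatch amounts to the sketch below, assuming the bundle declares `weight_format` in its `config.json`. Only `weight_loader_bf16.py` is named by this card; the other loader paths are hypothetical.

```python
# Sketch of the weight_format dispatch. ASSUMPTIONS: weight_format lives in
# config.json, and the mxtq/mxfp4 loader filenames below are hypothetical.
import json
from pathlib import Path

LOADERS = {
    "mxtq": "jang_tools/laguna/weight_loader_mxtq.py",    # hypothetical name
    "mxfp4": "jang_tools/laguna/weight_loader_mxfp4.py",  # hypothetical name
    "bf16": "jang_tools/laguna/weight_loader_bf16.py",    # named by this card
}

def pick_loader(model_dir: str) -> str:
    cfg = json.loads((Path(model_dir) / "config.json").read_text())
    return LOADERS[cfg.get("weight_format", "bf16")]
```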
## Build
Reproduce locally from the bf16 source:
```bash
python -m jang_tools.convert_laguna_jangtq \
  ~/.mlxstudio/models/_sources/Laguna-XS.2 \
  ~/.mlxstudio/models/JANGQ-AI/Laguna-XS.2-JANGTQ JANGTQ2
```
## Credits
Quantized by Jinho Jang (eric@osaurus.ai). MLX-native pipeline, runs on M-series Macs.