Can Large Language Models Reinvent Foundational Algorithms? Paper • 2604.05716 • Published 28 days ago • 8
Cut Your Losses! Learning to Prune Paths Early for Efficient Parallel Reasoning Paper • 2604.16029 • Published 18 days ago • 23
Elucidating the SNR-t Bias of Diffusion Probabilistic Models Paper • 2604.16044 • Published 18 days ago • 74
HLWQ Unified (Weights Q5 + KV Cache Q3) Collection Full-stack HLWQ: Q5 weights + torchao INT4 + Q3 KV cache · formerly PolarQuant Unified • 16 items • Updated 17 days ago • 3
HLWQ Models Collection Hadamard-Lloyd Weight Quantization · arXiv:2603.29078 · formerly PolarQuant • 26 items • Updated 17 days ago • 1
HLWQ Gemma Models Collection Google Gemma family quantized with HLWQ (Hadamard-Lloyd) · formerly PolarQuant Gemma • 5 items • Updated 22 days ago • 5
Gemma 4 Collection Gemma 4 is Google's new model family including E2B, E4B, 26B-A4B, and 31B. • 28 items • Updated 13 days ago • 171
Qwen3.5-27B HLWQ Collection Qwen3.5-27B · HLWQ Q5 weight quantization · formerly PolarQuant • 1 item • Updated 22 days ago • 1
MSA: Memory Sparse Attention for Efficient End-to-End Memory Model Scaling to 100M Tokens Paper • 2603.23516 • Published Mar 6 • 49
Calibri: Enhancing Diffusion Transformers via Parameter-Efficient Calibration Paper • 2603.24800 • Published Mar 25 • 68
Intern-S1-Pro: Scientific Multimodal Foundation Model at Trillion Scale Paper • 2603.25040 • Published Mar 26 • 131
Generation Models Know Space: Unleashing Implicit 3D Priors for Scene Understanding Paper • 2603.19235 • Published Mar 19 • 95