• 2502.14855 • 7
• Make LoRA Great Again: Boosting LoRA with Adaptive Singular Values and Mixture-of-Experts Optimization Alignment • 2502.16894 • 32
• Generating Skyline Datasets for Data Science Models • 2502.11262 • 7
• Crowd Comparative Reasoning: Unlocking Comprehensive Evaluations for LLM-as-a-Judge • 2502.12501 • 6
• Large Language Diffusion Models • 2502.09992 • 126
• Region-Adaptive Sampling for Diffusion Transformers • 2502.10389 • 53
• Train Small, Infer Large: Memory-Efficient LoRA Training for Large Language Models • 2502.13533 • 13
• PAFT: Prompt-Agnostic Fine-Tuning • 2502.12859 • 15
• Injecting Domain-Specific Knowledge into Large Language Models: A Comprehensive Survey • 2502.10708 • 4
• FinMTEB: Finance Massive Text Embedding Benchmark • 2502.10990 • 6
• SWE-Lancer: Can Frontier LLMs Earn $1 Million from Real-World Freelance Software Engineering? • 2502.12115 • 46
• I Think, Therefore I Diffuse: Enabling Multimodal In-Context Reasoning in Diffusion Models • 2502.10458 • 38
• Fine-Tuning Small Language Models for Domain-Specific AI: An Edge AI Perspective • 2503.01933 • 13
• Chain of Draft: Thinking Faster by Writing Less • 2502.18600 • 50
• Gemini Embedding: Generalizable Embeddings from Gemini • 2503.07891 • 45