Using "IQuest-Coder-V1-40B-Loop-Instruct" on Cursor
#14 opened about 14 hours ago by Maub69
What vLLM version should I use to deploy this model?
#13 opened about 15 hours ago by yyg201708
Benchmaxxed
#12 opened 2 days ago by Tom-Neverwinter
Availability of 7B and 14B models mentioned in the paper
#11 opened 2 days ago by Sopelllka
Question about K2/V2 cache computation in prefill vs generation
#10 opened 3 days ago by kernelpool
typo in readme
#9 opened 3 days ago by madsheepPL
Need official FP8 weights
#7 opened 3 days ago by wangruiai2023
LM Studio support with Q4_K_S, please?
#6 opened 3 days ago by Cagannn
Whose expert work is this now?? (translated from Chinese)
#3 opened 4 days ago by shangyue2333
Smaller + thinking models
#2 opened 4 days ago by Fizzarolli
Can you share benchmarks for all the models you released?
#1 opened 4 days ago by Narutoouz