Model Card for e5-R-mistral-7b
Model Description
e5-R-mistral-7b is an LLM-based retriever fine-tuned from mistralai/Mistral-7B-v0.1. It encodes queries and passages into dense embeddings for retrieval.
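Below is a minimal usage sketch, assuming this checkpoint follows the common E5-Mistral-style embedding convention (last-token pooling over the final hidden states, an instruction-prefixed query, and cosine similarity between normalized embeddings). The instruction template, max length, and example texts are illustrative placeholders, not values confirmed by this card; check the repository for the exact prompt format.

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel


def last_token_pool(last_hidden_states, attention_mask):
    # Take the hidden state of the last non-padding token as the sequence embedding.
    left_padding = attention_mask[:, -1].sum() == attention_mask.shape[0]
    if left_padding:
        return last_hidden_states[:, -1]
    sequence_lengths = attention_mask.sum(dim=1) - 1
    batch_size = last_hidden_states.shape[0]
    return last_hidden_states[
        torch.arange(batch_size, device=last_hidden_states.device), sequence_lengths
    ]


tokenizer = AutoTokenizer.from_pretrained("BeastyZ/e5-R-mistral-7b")
model = AutoModel.from_pretrained("BeastyZ/e5-R-mistral-7b", torch_dtype=torch.float16)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

# Assumed convention: queries carry a task instruction, passages are encoded as-is.
queries = [
    "Instruct: Given a query, retrieve relevant passages.\nQuery: how do solar panels work"
]
passages = [
    "Solar panels convert sunlight into electricity using photovoltaic cells."
]

batch = tokenizer(
    queries + passages, padding=True, truncation=True, max_length=512, return_tensors="pt"
)
with torch.no_grad():
    outputs = model(**batch)

embeddings = last_token_pool(outputs.last_hidden_state, batch["attention_mask"])
embeddings = F.normalize(embeddings, p=2, dim=1)

# Cosine similarity between the query and each passage.
scores = embeddings[:1] @ embeddings[1:].T
print(scores)
```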
Evaluation results
Self-reported results on the MTEB ArguAna test set:

| Metric | Value |
|---|---|
| map_at_1 | 33.570 |
| map_at_10 | 49.952 |
| map_at_100 | 50.673 |
| map_at_1000 | 50.674 |
| map_at_3 | 44.915 |
| map_at_5 | 47.877 |
| mrr_at_1 | 34.211 |
| mrr_at_10 | 50.190 |
| mrr_at_100 | 50.905 |
| mrr_at_1000 | 50.906 |
| mrr_at_3 | 45.128 |
| mrr_at_5 | 48.097 |
| ndcg_at_1 | 33.570 |
| ndcg_at_10 | 58.994 |
| ndcg_at_100 | 61.806 |
| ndcg_at_1000 | 61.825 |
| ndcg_at_3 | 48.681 |
| ndcg_at_5 | 54.001 |
| precision_at_1 | 33.570 |
| precision_at_10 | 8.784 |