Model Card for Villanova-2B-2512-Preview

Villanova is a family of multilingual and multimodal Large Language Models (LLMs).

This repository contains GGUF-format model files for the VillanovaAI/Villanova-2B-2512-Preview model.

See all the models in the collection here.

DISCLAIMER: This model is a preview.

About GGUF

GGUF is a file format introduced by the llama.cpp project.

It is designed for storing and distributing LLMs, with an emphasis on portability and efficient inference on edge devices.
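
Because a GGUF model is a single self-contained file, it can also be downloaded ahead of time and used offline. A minimal sketch using the huggingface_hub CLI; the --include pattern and target directory are illustrative, and this assumes a Q8_0 file is present in this repository:

# install the Hugging Face CLI
pip install -U "huggingface_hub[cli]"

# fetch only the Q8_0 GGUF file into ./models
huggingface-cli download VillanovaAI/Villanova-2B-2512-Preview-GGUF --include "*Q8_0*" --local-dir ./models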

Quick Usage with llama.cpp

You can run this model directly using the llama-cli tool (part of llama.cpp).

To run the model with the Q8_0 quantization directly from Hugging Face:

llama-cli -hf VillanovaAI/Villanova-2B-2512-Preview-GGUF:Q8_0 
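
llama.cpp also ships llama-server, which exposes an OpenAI-compatible HTTP API. A minimal sketch, assuming your llama.cpp build supports the same -hf repository:quantization syntax; the port and prompt are arbitrary examples:

# start a local server with the Q8_0 quantization
llama-server -hf VillanovaAI/Villanova-2B-2512-Preview-GGUF:Q8_0 --port 8080

# query it from another terminal
curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{"messages": [{"role": "user", "content": "Hello"}]}'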

Model details

Format: GGUF
Model size: 3B params
Architecture: llama
Quantizations available: 8-bit, 16-bit