Instructions for using MDDDDR/Llama-3.2-1B-Instruct-FFT-coder-python with libraries and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use MDDDDR/Llama-3.2-1B-Instruct-FFT-coder-python with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="MDDDDR/Llama-3.2-1B-Instruct-FFT-coder-python")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("MDDDDR/Llama-3.2-1B-Instruct-FFT-coder-python")
model = AutoModelForCausalLM.from_pretrained("MDDDDR/Llama-3.2-1B-Instruct-FFT-coder-python")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
- Local Apps
- vLLM
How to use MDDDDR/Llama-3.2-1B-Instruct-FFT-coder-python with vLLM:
Install from pip and serve the model:
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "MDDDDR/Llama-3.2-1B-Instruct-FFT-coder-python"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "MDDDDR/Llama-3.2-1B-Instruct-FFT-coder-python",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
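The server started with `vllm serve` exposes an OpenAI-compatible API, so it can also be queried from Python instead of curl. A minimal sketch, assuming the default port 8000, the `openai` Python package installed, and a placeholder API key (a local server does not check it); the prompt is illustrative only:

# Query the local vLLM server through its OpenAI-compatible endpoint.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # key is a placeholder for a local server

response = client.chat.completions.create(
    model="MDDDDR/Llama-3.2-1B-Instruct-FFT-coder-python",
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}],
    max_tokens=256,
)
print(response.choices[0].message.content)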
Use Docker
docker model run hf.co/MDDDDR/Llama-3.2-1B-Instruct-FFT-coder-python
- SGLang
How to use MDDDDR/Llama-3.2-1B-Instruct-FFT-coder-python with SGLang:
Install from pip and serve the model:
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "MDDDDR/Llama-3.2-1B-Instruct-FFT-coder-python" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "MDDDDR/Llama-3.2-1B-Instruct-FFT-coder-python",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
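Because the SGLang server also speaks the OpenAI-compatible protocol, the same request can be made from Python. A minimal sketch using the `requests` package, assuming the server above is listening on port 30000; the prompt is illustrative only:

# Call the local SGLang server's OpenAI-compatible chat completions endpoint.
import requests

payload = {
    "model": "MDDDDR/Llama-3.2-1B-Instruct-FFT-coder-python",
    "messages": [{"role": "user", "content": "Write a Python one-liner that sums a list of integers."}],
    "max_tokens": 256,
}
resp = requests.post("http://localhost:30000/v1/chat/completions", json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])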
Use Docker images
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "MDDDDR/Llama-3.2-1B-Instruct-FFT-coder-python" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "MDDDDR/Llama-3.2-1B-Instruct-FFT-coder-python",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
- Docker Model Runner
How to use MDDDDR/Llama-3.2-1B-Instruct-FFT-coder-python with Docker Model Runner:
docker model run hf.co/MDDDDR/Llama-3.2-1B-Instruct-FFT-coder-python
metadata
license: apache-2.0
language:
- ko
base_model:
- meta-llama/Llama-3.2-1B-Instruct
pipeline_tag: text-generation
library_name: transformers
datasets:
- nayohan/CodeFeedback-Filtered-Instruction-ko
Model Card for MDDDDR/Llama-3.2-1B-Instruct-FFT-coder-python
- base_model : meta-llama/Llama-3.2-1B-Instruct
Training dataset
- data_set : nayohan/CodeFeedback-Filtered-Instruction-ko
- The source dataset was not used in full: Python-language samples were extracted first, the data was then reworked, and only samples that could share a common preprocessing step were re-extracted and used for training. (A loading/filtering sketch is shown below.)
- Total training data: 49,859 samples
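The Python-only filtering step described above can be sketched with the `datasets` library. This is a rough illustration, not the exact pipeline: the `train` split and the `lang` column (present in the original CodeFeedback-Filtered-Instruction release) are assumptions about this Korean dataset's schema, and the later reworking and re-extraction steps are not reproduced here.

# Hypothetical sketch of the first filtering step: keep only Python-language samples.
# The "train" split and "lang" column are assumed, not confirmed by the card.
from datasets import load_dataset

ds = load_dataset("nayohan/CodeFeedback-Filtered-Instruction-ko", split="train")
python_only = ds.filter(lambda row: str(row.get("lang", "")).lower() == "python")
print(len(python_only))  # the card's final training set, after further reworking, is 49,859 samples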
Basic usage
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = 'MDDDDR/Llama-3.2-1B-Instruct-FFT-coder-python'
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id,
                                             device_map="cuda:0",
                                             torch_dtype=torch.bfloat16)
# Korean LCS problem statement; in English: given two uppercase strings of at most
# 1000 characters, print the length of their longest common subsequence
# (e.g. the LCS of ACAYKP and CAPCAK is ACAK, so the answer is 4).
instruction = '''LCS(Longest Common Subsequence, 최장 공통 부분 수열)문제는 두 수열이 주어졌을 때, 모두의 부분 수열이 되는 수열 중 가장 긴 것을 찾는 문제이다.
예를 들어, ACAYKP와 CAPCAK의 LCS는 ACAK가 된다.
###입력 : 첫째 줄과 둘째 줄에 두 문자열이 주어진다. 문자열은 알파벳 대문자로만 이루어져 있으며, 최대 1000글자로 이루어져 있다.
###출력 : 첫째 줄에 입력으로 주어진 두 문자열의 LCS의 길이를 출력한다.
###입력 예시 :
ACAYKP
CAPCAK
###출력 예시 : 4
'''
# Korean prompt wrapper used in this example; in English:
# "Below is an instruction describing a problem. Please answer the request appropriately.\n###Instruction:{instruction}\n###Answer:"
messages = [
    {
        "role": "user",
        "content": "아래는 문제를 설명하는 지시사항입니다. 이 요청에 대해 적절하게 답변해주세요.\n###지시사항:{instruction}\n###답변:".format(instruction=instruction)
    }
]
# Llama 3.2 tokenizers ship without a dedicated pad token; fall back to EOS if needed
if tokenizer.pad_token_id is None:
    tokenizer.pad_token_id = tokenizer.eos_token_id

with torch.no_grad():
    prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=False)
    inputs = tokenizer(prompt, return_tensors="pt", padding=False).to('cuda')
    outputs = model.generate(**inputs,
                             use_cache=False,
                             max_new_tokens=256,   # budget for generated tokens only, not the long prompt
                             do_sample=True,       # required for top_p / temperature to take effect
                             top_p=0.9,
                             temperature=0.7,
                             repetition_penalty=1.0,
                             pad_token_id=tokenizer.pad_token_id)

output_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
final_output = output_text.split('답변:')[-1].strip()  # keep only the text after the "###답변:" (Answer) marker
print(final_output)
# ```python
# def longest_common_subsequence(str1, str2):
# m = len(str1)
# n = len(str2)
# dp = [[0] * (n+1) for _ in range(m+1)]
#
# for i in range(m+1):
# for j in range(n+1):
# if i == 0 or j == 0:
# dp[i][j] = 0
# elif str1[i-1] == str2[j-1]:
# dp[i][j] = dp[i-1][j-1] + 1
# else:
# dp[i][j] = max(dp[i-1][j], dp[i][j-1])
#
# return dp[m][n]
#
# print(longest_common_subsequence("ACAYKP", "CAPCAK")) # Output: 4
# ```
Hardware
- A100 40GB x 1
- Training Time : 1 hour 45 minutes