Uploaded finetuned gemma-3-JP-EN-Translator-v1-4B model
Prompt format: ChatML
Recommended system prompt: You are a helpful assistant that translates Japanese to English.
Recommended sampling settings: temperature 0.2 (or lower), repetition_penalty 1.04 (or slightly higher)
LoRA: mpasila/gemma-3-JP-EN-Translator-v1-LoRA-4B
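The recommended setup above can be sketched as a small helper that builds the ChatML prompt. This is an illustrative sketch, not an official usage snippet: the `translate_prompt` helper and the commented-out `transformers` call are assumptions; the special tokens, system prompt, and sampling values come from the recommendations above.

```python
def translate_prompt(japanese_text: str) -> str:
    """Build a ChatML prompt using the recommended system prompt."""
    system = "You are a helpful assistant that translates Japanese to English."
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{japanese_text}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = translate_prompt("吾輩は猫である。")

# Sketch of generation with the recommended sampling settings
# (requires transformers; model name taken from this card):
# from transformers import pipeline
# pipe = pipeline("text-generation", model="mpasila/gemma-3-JP-EN-Translator-v1-4B")
# out = pipe(prompt, do_sample=True, temperature=0.2, repetition_penalty=1.04)
```

If your runtime applies a chat template automatically (e.g. `tokenizer.apply_chat_template`), pass the system and user messages as a message list instead of formatting the string by hand.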
It performs better when translating from Japanese to English; English to Japanese works to some extent but is less stable and accurate.
Training used LoRA rank 128 with alpha set to 32, and the context length was set to 16384. However, more of the training data fits within an 8k context, so using an 8k context length will likely perform better.
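The LoRA hyperparameters above could be expressed with a PEFT-style config. This is a hedged sketch, not the actual training script: only `r=128` and `lora_alpha=32` come from this card; the `target_modules` and dropout are assumptions.

```python
from peft import LoraConfig

# Sketch of the LoRA setup described above; r and lora_alpha are from the
# card, the remaining fields are illustrative assumptions.
lora_config = LoraConfig(
    r=128,                # LoRA rank used in training
    lora_alpha=32,        # alpha used in training
    lora_dropout=0.0,     # assumption
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
    task_type="CAUSAL_LM",
)
```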
Training data was this: mpasila/ParallelFiction-Ja_En-1k-16k-Gemma-3-ShareGPT-Filtered
Original dataset (before filtering/cleaning): NilanE/ParallelFiction-Ja_En-100k
- Developed by: mpasila
- License: Gemma 3
- Finetuned from model: unsloth/gemma-3-4b-pt-unsloth-bnb-4bit
This Gemma 3 model was trained 2x faster with Unsloth and Hugging Face's TRL library.
