Jahanzaib
Models & Training

Fine-Tuning

Continuing training on your own data to adjust the base model's behavior or knowledge.

Last updated: April 26, 2026

Definition

Fine-tuning takes a pre-trained model and continues training it on your own dataset to shift its behavior. Common use cases: enforcing a specific output format, teaching domain vocabulary, replicating a brand voice. What fine-tuning is bad at: adding new knowledge (RAG is better), improving reasoning ability (model size matters more), or fixing a one-off bug. The cost has dropped dramatically: Claude Haiku and GPT-5 mini fine-tuning is now under $20 per million training tokens.
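In practice, the training data is a set of prompt/response pairs showing the model exactly the behavior you want. A minimal sketch of preparing such a dataset in the JSONL chat format that most hosted fine-tuning APIs accept (the exact schema varies by provider; the field names and the invoice-extraction task here are illustrative):

```python
import json

# Illustrative training examples: each record pairs a prompt with the
# exact output we want the model to learn (here, a fixed JSON schema).
# For format/style tasks like this, a few hundred examples often suffice.
examples = [
    {
        "messages": [
            {"role": "system", "content": "Extract invoice fields as JSON."},
            {"role": "user", "content": "Invoice #841 from Acme, total $129.00"},
            {"role": "assistant", "content": '{"id": "841", "vendor": "Acme", "total": 129.0}'},
        ]
    },
    {
        "messages": [
            {"role": "system", "content": "Extract invoice fields as JSON."},
            {"role": "user", "content": "Bill no. 17, Globex Corp, amount due: $4,250"},
            {"role": "assistant", "content": '{"id": "17", "vendor": "Globex Corp", "total": 4250.0}'},
        ]
    },
]

def write_jsonl(records, path):
    """Serialize records one JSON object per line, the usual upload format."""
    with open(path, "w") as f:
        for record in records:
            f.write(json.dumps(record) + "\n")

write_jsonl(examples, "train.jsonl")
```

The file is then uploaded to the provider's fine-tuning endpoint; the point is that every example demonstrates the target format, not new facts.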

When To Use

Use it when you need consistently formatted or styled output, or to bake in domain conventions. Try prompt engineering and RAG first; fine-tuning is the last resort, not the first.

Building with Fine-Tuning?

I've shipped this pattern in real production systems. If you want a second pair of eyes on your architecture, that's what I do.