Fine-Tuning
The process of further training a pre-trained language model on domain-specific data to adapt its behaviour, terminology, or output format to a particular use case or organisation.
How it works
Fine-tuning takes a foundation model and continues training it on a smaller, task-specific dataset, typically a few thousand to a few hundred thousand examples of inputs paired with desired outputs. Common techniques include full fine-tuning (updating all parameters), LoRA (low-rank adaptation, which freezes the base weights and trains a small set of added low-rank matrices), and instruction tuning. Practical use cases include adapting model output to a specific tone or terminology, embedding firm-specific knowledge that cannot be passed in context, and improving accuracy on a narrow class of tasks. For most UK enterprise deployments, RAG outperforms fine-tuning because the firm's data changes faster than a model can be retrained. Fine-tuning is the right choice when the task structure itself is the thing being learned, not the data.
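To illustrate why LoRA is so much cheaper than full fine-tuning, here is a minimal NumPy sketch (with hypothetical dimensions, not tied to any particular model). It compares trainable parameter counts for a single weight matrix and shows the adapted forward pass, where the frozen weight W is augmented by a scaled low-rank product (alpha / r) · B·A and only A and B are trained:

```python
import numpy as np

def lora_param_counts(d_in, d_out, rank):
    """Trainable parameters for one d_out x d_in weight matrix:
    full fine-tuning updates every weight; LoRA trains only the
    low-rank factors A (rank x d_in) and B (d_out x rank)."""
    full = d_in * d_out
    lora = rank * (d_in + d_out)
    return full, lora

def lora_forward(x, W, A, B, alpha, rank):
    """Adapted forward pass: W stays frozen, the effective weight
    is W + (alpha / rank) * B @ A."""
    return x @ (W + (alpha / rank) * (B @ A)).T

# Example: a 4096 x 4096 projection adapted at rank 8
full, lora = lora_param_counts(4096, 4096, 8)
print(full, lora)   # 16777216 vs 65536 trainable parameters (~0.4%)
```

Because one of the two factors is initialised to zero in standard LoRA, the adapted model starts out producing exactly the base model's outputs, and training nudges it away from there. This sketch omits everything a real fine-tune needs (gradients, optimiser, tokenisation); libraries such as Hugging Face PEFT wrap this pattern for production use.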
Related terms
Large Language Model (LLM)
A neural network trained on large text corpora to predict the next token given context, used for text generation, summarisation, classification, and reasoning tasks across enterprise software.
Retrieval-Augmented Generation (RAG)
An architecture pattern that grounds language model outputs in retrieved documents from a private corpus, reducing hallucination and enabling answers based on the firm's own data rather than the model's training set.
Private AI
AI deployed on infrastructure the client controls (on-premise, in the client's cloud tenancy, or air-gapped), with no third-party LLM provider in the data path and no inference-time data export.
Want to see this technology in action?
Book a Discovery Call