Full (Training Type)

In full fine-tuning mode (--train-type full), all model parameters are updated and no adapters are added. Use this when the model is small enough to train in full, or when you need maximum accuracy. Note that it requires much more memory and compute than adapter training, because gradients and optimizer state are kept for every weight rather than only for a small set of adapter matrices.

  • Typical use cases: Small models (e.g. 100M-1B parameters), or when you have abundant resources and want to fully adapt the model.

Example:

mlx_lm_lora.train \
  --model mlx-community/Josiefied-Qwen3-8B-abliterated-v1-4bit \
  --train \
  --train-type full \
  --data mlx-community/wikisql \
  --iters 300 \
  --lr 5e-6

This fully fine-tunes my Josiefied-Qwen3-8B-abliterated-v1-4bit model. (Runs without --train-type default to adapter training with LoRA or DoRA; adding --train-type full switches to full fine-tuning.)
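
For comparison, here is the same run with --train-type left out, which falls back to the default adapter training mentioned above. This is only a sketch reusing the flags from the example; hyperparameters such as the learning rate usually need retuning for adapter runs.

# identical to the full run above, minus --train-type full
mlx_lm_lora.train \
  --model mlx-community/Josiefied-Qwen3-8B-abliterated-v1-4bit \
  --train \
  --data mlx-community/wikisql \
  --iters 300 \
  --lr 5e-6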