LLM fine-tuning made easy

Axolotl streamlines your environments, model architectures, datasets, configurations, GPU access, and much more!
100+ contributors

Supported LLMs

  • Mistral AI
  • llama
  • Eleuther AI
  • Falcon
  • MPT
  • HuggingFace
  • Cerebras
  • XGen
  • Qwen
  • RWKV
  • Gemma
  • MS Phi

Deploy optimized LLMs in a few clicks

Train

Train a wide range of Hugging Face models such as Llama, Pythia, Falcon, and MPT.
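
For example, swapping the base model is a one-line change in the training config. A minimal sketch, where the model ID and type values are illustrative and follow the pattern of Axolotl's example configs:

```yaml
# Any supported Hugging Face checkpoint can be used as the base model
# (the model ID below is just an illustrative example).
base_model: NousResearch/Llama-2-7b-hf
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
```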

Full Finetune

Supports full finetuning as well as LoRA, QLoRA, ReLoRA, and GPTQ.
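
As a rough sketch (key names follow Axolotl's example configs and may differ between versions), the training method is selected via the `adapter` key, with a full finetune as the fallback when no adapter is set:

```yaml
# QLoRA: load the base model in 4-bit and train low-rank adapters on top.
load_in_4bit: true
adapter: qlora            # or "lora"; leave unset for a full finetune
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true  # apply LoRA to all linear layers
```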

Configuration

Customize configurations and hyperparameters with a simple YAML file or CLI overrides.
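
A minimal sketch of the hyperparameter portion of such a YAML file (all values are placeholders):

```yaml
sequence_len: 2048
micro_batch_size: 2
gradient_accumulation_steps: 4
num_epochs: 3
learning_rate: 0.0002
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
warmup_steps: 10
bf16: auto
gradient_checkpointing: true
output_dir: ./qlora-out
```

Most keys can typically also be overridden at launch time from the CLI (e.g. appending `--learning_rate 0.0001` to the training command), so small experiments don't require editing the file.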

Datasets

Load different dataset formats, use custom formats, or bring your own tokenized datasets.
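
For instance, each entry under `datasets` pairs a source (a Hugging Face Hub path or a local file) with a format type; the sample dataset below is the one used in several of Axolotl's example configs:

```yaml
datasets:
  - path: mhenrichsen/alpaca_2k_test   # HF Hub dataset or a local JSONL file
    type: alpaca                       # built-in prompt format; custom formats also work
dataset_prepared_path: ./last_run_prepared
val_set_size: 0.05
```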

Integrations

Integrates with xFormers, FlashAttention, RoPE scaling, and multipacking.
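
These are typically switched on with a few config flags. A sketch, assuming the flag names match Axolotl's example configs (they can vary by version):

```yaml
flash_attention: true       # FlashAttention kernels
# xformers_attention: true  # alternative memory-efficient attention backend
sample_packing: true        # multipacking: pack several short samples per sequence
pad_to_sequence_len: true
# rope_scaling:             # extend the context window via RoPE scaling
#   type: linear
#   factor: 2.0
```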

Multi GPU

Works on a single GPU or multiple GPUs via FSDP or DeepSpeed.
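
A hedged sketch of the relevant config keys; the DeepSpeed JSON path assumes the presets shipped in the Axolotl repo, and the FSDP keys follow its FSDP examples:

```yaml
# Option A: DeepSpeed ZeRO (preset configs ship with the repo)
deepspeed: deepspeed_configs/zero2.json

# Option B: PyTorch FSDP
# fsdp:
#   - full_shard
#   - auto_wrap
# fsdp_config:
#   fsdp_offload_params: true
#   fsdp_state_dict_type: FULL_STATE_DICT
#   fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer
```

Runs are typically launched with `accelerate launch -m axolotl.cli.train config.yml`, which covers both the single-GPU and multi-GPU cases.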

Built with Axolotl

Community showcase

Cloud & GPU Partners

  • Deepspeed
  • Vllm
  • Skypilot
  • Runpod
  • Wandb.ai
  • Jarvislabs.ai
  • LambdaLabs
  • Latitude.sh