
LoRA Training

What is LoRA?

Low-Rank Adaptation (LoRA) freezes pre-trained model weights and injects trainable rank-decomposition matrices into each layer. This drastically reduces the number of trainable parameters.
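The idea can be sketched for a single linear layer: the frozen weight W0 is left untouched, and a trainable update B @ A of rank r is added to its output, scaled by alpha / r. All shapes and values below are illustrative, not taken from any particular model.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 64, 64, 8, 16

W0 = rng.standard_normal((d_out, d_in))    # frozen pre-trained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable, small random init
B = np.zeros((d_out, r))                   # trainable, zero init

def lora_forward(x):
    # Base path plus the low-rank update, scaled by alpha / r.
    return W0 @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# Because B starts at zero, the adapted layer initially matches the
# frozen base layer exactly; training only moves A and B.
assert np.allclose(lora_forward(x), W0 @ x)
```

Only A and B receive gradients, so the trainable parameter count is r * (d_in + d_out) per adapted layer instead of d_in * d_out.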

Configuration

config = TrainingConfig(
    model="openvla-7b",
    method="lora",
    lora_rank=16,         # Rank of decomposition matrices
    lora_alpha=32,        # Scaling factor; updates are scaled by alpha / rank
    lora_dropout=0.05,    # Dropout probability
    lora_target_modules=["q_proj", "v_proj"],  # Which layers to adapt
)
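To see why this drastically shrinks the trainable parameter count, consider a single square projection in a hypothetical model with hidden size 4096 (the exact size varies by model; the arithmetic is what matters):

```python
d = 4096      # hypothetical hidden size
rank = 16     # matches lora_rank above

full_params = d * d                    # one full d x d projection
lora_params = rank * d + d * rank      # the A (rank x d) and B (d x rank) factors

print(full_params, lora_params, full_params / lora_params)
# 16777216 131072 128.0 -- a 128x reduction for this layer
```

The reduction compounds across every module listed in lora_target_modules, while the frozen base weights still dominate memory at inference time.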

QLoRA

Quantized LoRA (QLoRA) stores the frozen base model weights in 4-bit precision while training the LoRA adapters in higher precision, cutting VRAM usage even further:

config = TrainingConfig(
    model="openvla-7b",
    method="qlora",
    lora_rank=16,
)
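The 4-bit storage can be illustrated with blockwise absmax quantization: each block of weights is scaled by its largest absolute value and rounded to a small set of signed levels. This is a simplified sketch; the actual QLoRA scheme (NF4) uses non-uniform quantization levels, and the helper names below are illustrative.

```python
import numpy as np

def quantize_4bit(w, block=64):
    # Blockwise absmax quantization to 15 signed levels in [-7, 7]
    # (a simplified stand-in for QLoRA's NF4, which is non-uniform).
    w = w.reshape(-1, block)
    scales = np.abs(w).max(axis=1, keepdims=True)
    q = np.round(w / scales * 7).astype(np.int8)
    return q, scales

def dequantize_4bit(q, scales):
    # Map integer levels back to floats using the per-block scale.
    return (q / 7.0) * scales

rng = np.random.default_rng(0)
w = rng.standard_normal(256)
q, scales = quantize_4bit(w)
w_hat = dequantize_4bit(q, scales).reshape(-1)

# Per-element error is at most half a quantization step of its block.
assert np.abs(w - w_hat).max() <= np.abs(w).max() / 7
```

During training, these quantized base weights are dequantized on the fly for the forward pass, while gradients flow only through the (higher-precision) LoRA adapters.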