
Distributed Training

Multi-GPU training

VLAROBOT uses Hugging Face Accelerate for distributed training. Launch a multi-GPU run with accelerate launch, setting --num_processes to the number of GPUs:

accelerate launch --num_processes=4 -m vlarobot.cli train \
    --model openvla-7b \
    --dataset ./data/demos.hdf5 \
    --method lora
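
Under the hood this follows the standard Accelerate pattern: each process drives one GPU, prepare() wraps the model in DistributedDataParallel and shards the dataloader, and gradients are all-reduced during backward(). A minimal standalone sketch of that pattern with toy tensors (plain Accelerate, not VLAROBOT's actual training loop):

import torch
from torch import nn
from accelerate import Accelerator

accelerator = Accelerator()  # picks up settings from `accelerate launch`

model = nn.Linear(8, 2)      # stand-in for the policy model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(torch.randn(64, 8), torch.randn(64, 2)),
    batch_size=4,
)

# prepare() moves everything to the right device, wraps the model in DDP,
# and shards the dataloader across processes.
model, optimizer, loader = accelerator.prepare(model, optimizer, loader)

for inputs, targets in loader:
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(inputs), targets)
    accelerator.backward(loss)  # DDP all-reduces gradients here
    optimizer.step()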

Gradient accumulation

Gradient accumulation simulates a batch larger than fits in GPU memory by summing gradients over several forward/backward passes before each optimizer step:

config = TrainingConfig(
    model="openvla-7b",
    gradient_accumulation_steps=4,  # Effective batch = batch_size * 4 (* num_processes when distributed)
    batch_size=4,
)
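
With Accelerate, accumulation boundaries can be handled by the accumulate() context manager. A minimal sketch, assuming VLAROBOT delegates to Accelerate as the launch command suggests (the toy model and data are placeholders, not VLAROBOT code):

import torch
from torch import nn
from accelerate import Accelerator

accelerator = Accelerator(gradient_accumulation_steps=4)

model = nn.Linear(8, 2)  # stand-in for the policy model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
model, optimizer = accelerator.prepare(model, optimizer)

for _ in range(16):
    inputs = torch.randn(4, 8, device=accelerator.device)
    targets = torch.randn(4, 2, device=accelerator.device)
    # Inside accumulate(), backward() scales the loss by 1/4 and
    # optimizer.step()/zero_grad() only take effect every 4th batch.
    with accelerator.accumulate(model):
        loss = nn.functional.mse_loss(model(inputs), targets)
        accelerator.backward(loss)
        optimizer.step()
        optimizer.zero_grad()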

Mixed precision

Mixed precision keeps most activations and gradients in 16-bit floats, cutting memory use and speeding up training on modern GPUs. Prefer "bf16" on Ampere or newer hardware; "fp16" works on older GPUs but relies on loss scaling, and "no" trains in full fp32:

config = TrainingConfig(
    mixed_precision="bf16",  # or "fp16" or "no"
)
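
Which mode is usable depends on the GPU. A hedged helper for picking one at runtime (pick_precision and the import path are illustrative, not part of VLAROBOT; the hardware checks are standard torch calls):

import torch

from vlarobot import TrainingConfig  # assumed import path

def pick_precision() -> str:
    # Illustrative helper, not a VLAROBOT API.
    if torch.cuda.is_available() and torch.cuda.is_bf16_supported():
        return "bf16"  # Ampere (e.g. A100, RTX 30xx) and newer
    if torch.cuda.is_available():
        return "fp16"  # older GPUs; relies on gradient/loss scaling
    return "no"        # CPU: full fp32

config = TrainingConfig(mixed_precision=pick_precision())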