# API Reference

## VLAModel

The main model class for loading and running VLA models.
### `VLAModel.from_preset(name)`

Load a model by preset name.

```python
model = VLAModel.from_preset("openvla-7b")
```

Parameters:

- `name` (str) — Preset name: `openvla-7b`, `smolvla-450m`, `dream-vla-7b`

Returns: `VLAModel` instance
### `VLAModel.from_checkpoint(path)`

Load a model from a local checkpoint.

```python
model = VLAModel.from_checkpoint("./checkpoints/my-model")
```

Parameters:

- `path` (str) — Path to the checkpoint directory

Returns: `VLAModel` instance
### `model.predict(image, instruction)`

Run inference on a single image-instruction pair.

Parameters:

- `image` (PIL.Image) — Camera image
- `instruction` (str) — Natural language instruction

Returns: `Action` object with fields `x`, `y`, `z`, `roll`, `pitch`, `yaw`, `gripper`
## Action

```python
@dataclass
class Action:
    x: float        # Position delta x
    y: float        # Position delta y
    z: float        # Position delta z
    roll: float     # Orientation delta roll
    pitch: float    # Orientation delta pitch
    yaw: float      # Orientation delta yaw
    gripper: float  # Gripper state (0=closed, 1=open)
```
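Because `Action` is a plain dataclass, it flattens directly into the 7-element command vector `[dx, dy, dz, droll, dpitch, dyaw, gripper]` that robot interfaces typically expect. The sketch below redefines the dataclass locally so it runs standalone; the example delta values are arbitrary.

```python
from dataclasses import dataclass, astuple

@dataclass
class Action:
    x: float        # Position delta x
    y: float        # Position delta y
    z: float        # Position delta z
    roll: float     # Orientation delta roll
    pitch: float    # Orientation delta pitch
    yaw: float      # Orientation delta yaw
    gripper: float  # Gripper state (0=closed, 1=open)

# Flatten to a 7-DoF command vector; field order matches declaration order.
action = Action(0.01, -0.02, 0.0, 0.0, 0.0, 0.1, 1.0)
command = list(astuple(action))
print(command)  # [0.01, -0.02, 0.0, 0.0, 0.0, 0.1, 1.0]
```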
## VLATrainer

Training interface for fine-tuning VLA models.

### `VLATrainer(config)`

Parameters:

- `config` (TrainingConfig) — Training configuration
## TrainingConfig
| Field | Type | Default | Description |
|---|---|---|---|
| model | str | required | Preset name or checkpoint path |
| dataset | str | required | Path to HDF5 dataset |
| method | str | "lora" | Training method: `lora`, `qlora`, `full` |
| num_epochs | int | 10 | Number of training epochs |
| learning_rate | float | 2e-5 | Learning rate |
| batch_size | int | 16 | Batch size |
| lora_rank | int | 16 | LoRA rank |
| lora_alpha | int | 32 | LoRA scaling factor |
| lora_dropout | float | 0.05 | LoRA dropout |
| wandb_project | str | None | W&B project name (logging disabled when unset) |
| mixed_precision | str | "bf16" | Mixed precision mode |
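The table maps naturally onto a dataclass. The sketch below is a hypothetical local mirror of `TrainingConfig` (the real class may be defined differently) showing how the two required fields and the defaults combine:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical mirror of TrainingConfig, built from the field table above.
@dataclass
class TrainingConfig:
    model: str                           # Preset name or checkpoint path (required)
    dataset: str                         # Path to HDF5 dataset (required)
    method: str = "lora"                 # "lora", "qlora", or "full"
    num_epochs: int = 10
    learning_rate: float = 2e-5
    batch_size: int = 16
    lora_rank: int = 16
    lora_alpha: int = 32
    lora_dropout: float = 0.05
    wandb_project: Optional[str] = None  # W&B logging off when unset
    mixed_precision: str = "bf16"

# Only the required fields must be supplied; everything else defaults.
config = TrainingConfig(model="openvla-7b", dataset="./data/demos.h5")
print(config.method, config.lora_rank)  # lora 16
```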
## RobotController

Deploy a model to a real or simulated robot.

### `RobotController(model, robot, control_hz)`

Parameters:

- `model` (VLAModel) — Trained model
- `robot` (str) — Robot name: `widowx`, `franka_panda`, `ur5`
- `control_hz` (float) — Control loop frequency (Hz)

Methods:

- `start()` — Initialize robot connection
- `execute(instruction, max_steps=50)` — Run task
- `stop()` — Clean shutdown
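Deployment follows a start/execute/stop lifecycle. This sketch uses a hypothetical `DummyController` stand-in with the same method names so the pattern runs without hardware; a real `RobotController(model, "widowx", control_hz=5.0)` would slot in its place. Wrapping `execute` in `try`/`finally` ensures `stop()` releases the connection even if a task fails.

```python
# Stand-in with the RobotController method names; records calls for illustration.
class DummyController:
    def __init__(self):
        self.log = []

    def start(self):
        self.log.append("start")

    def execute(self, instruction, max_steps=50):
        self.log.append(f"execute:{instruction}:{max_steps}")

    def stop(self):
        self.log.append("stop")

controller = DummyController()  # swap in a real RobotController here
controller.start()
try:
    controller.execute("pick up the red block", max_steps=100)
finally:
    controller.stop()  # always runs, even if execute() raises
print(controller.log)  # ['start', 'execute:pick up the red block:100', 'stop']
```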
## CLI

```shell
vlarobot presets                  # List available presets
vlarobot train --model <name> --dataset <path> --method <method>
vlarobot evaluate --model <name> --benchmark <name>
vlarobot predict --model <name> --image <path> --instruction "<text>"
vlarobot --version                # Show version
```