# Workspaces
Wafer workspaces provide on-demand cloud GPU access for development, profiling, and kernel evaluation. Launch GPU instances, sync your code, and run commands without managing infrastructure.
## Quick Start
```bash
# List available workspaces
wafer workspaces list

# Create a new workspace
wafer workspaces create --gpu H100 --name my-workspace

# SSH into the workspace
wafer workspaces ssh my-workspace

# Run a command
wafer workspaces exec my-workspace "nvidia-smi"

# Delete when done
wafer workspaces delete my-workspace
```
## Commands
### `wafer workspaces list`

List all workspaces and their status:

```bash
wafer workspaces list
```

Output shows:

- Workspace name and ID
- GPU type and count
- Status (`running`, `stopped`, `provisioning`)
- Runtime
### `wafer workspaces create`

Create a new cloud GPU workspace:

```bash
wafer workspaces create [OPTIONS]
```

Options:

| Option | Description |
|---|---|
| `--name` | Workspace name (auto-generated if not specified) |
| `--gpu` | GPU type: `H100`, `A100`, `MI300X`, etc. |
| `--count` | Number of GPUs (default: 1) |
| `--image` | Docker image to use |
| `--disk` | Disk size in GB |
Examples:

```bash
# Create an H100 workspace
wafer workspaces create --gpu H100 --name dev-h100

# Create a multi-GPU workspace
wafer workspaces create --gpu A100 --count 4 --name training

# Create with a specific image
wafer workspaces create --gpu H100 --image pytorch/pytorch:latest
```
### `wafer workspaces delete`

Delete a workspace:

```bash
wafer workspaces delete <workspace-name>
```

This permanently deletes the workspace and any data not synced back. Make sure to pull important files first.
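Because deletion is irreversible, a safe habit is to pull anything you still need before deleting. A minimal sketch, using only the commands documented on this page (the remote path is an example):

```bash
# Save results locally first (remote path is illustrative)
wafer workspaces pull my-workspace --remote /workspace/results --local ./results

# Then delete the workspace
wafer workspaces delete my-workspace
```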
### `wafer workspaces show`

Show detailed workspace information:

```bash
wafer workspaces show <workspace-name>
```
### `wafer workspaces ssh`

SSH into a running workspace:

```bash
wafer workspaces ssh <workspace-name>
```

This opens an interactive SSH session. Your SSH keys are automatically configured.
### `wafer workspaces exec`

Execute a command in the workspace:

```bash
wafer workspaces exec <workspace-name> "<command>"
```
Examples:

```bash
# Check GPU status
wafer workspaces exec my-workspace "nvidia-smi"

# Run a Python script
wafer workspaces exec my-workspace "python train.py"

# Install packages
wafer workspaces exec my-workspace "pip install torch"
```
### `wafer workspaces sync`

Sync files between your local machine and the workspace:

```bash
# Sync current directory to workspace
wafer workspaces sync <workspace-name>

# Sync a specific directory
wafer workspaces sync <workspace-name> --local ./src --remote /workspace/src
```
Options:

| Option | Description |
|---|---|
| `--local` | Local directory path |
| `--remote` | Remote directory path |
| `--exclude` | Patterns to exclude |
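Excluding build artifacts and virtual environments can keep syncs fast. The patterns below are illustrative, and repeating `--exclude` once per pattern is an assumption about how the flag accepts multiple values:

```bash
# Sync source while skipping typical local-only directories (illustrative patterns)
wafer workspaces sync my-workspace \
  --local ./src --remote /workspace/src \
  --exclude "__pycache__" --exclude ".venv" --exclude "*.o"
```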
### `wafer workspaces pull`

Pull files from the workspace to your local machine:

```bash
# Pull results directory
wafer workspaces pull my-workspace --remote /workspace/results --local ./results
```
## Workflow Example

A typical development workflow:

1. **Create Workspace**

   ```bash
   wafer workspaces create --gpu H100 --name kernel-dev
   ```

2. **Sync Code**

   ```bash
   wafer workspaces sync kernel-dev
   ```

3. **Run Evaluation**

   ```bash
   wafer workspaces exec kernel-dev "wafer evaluate gpumode --impl kernel.py"
   ```

4. **Pull Results**

   ```bash
   wafer workspaces pull kernel-dev --remote /workspace/results --local ./results
   ```

5. **Cleanup**

   ```bash
   wafer workspaces delete kernel-dev
   ```
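The workflow above can be wrapped in a single shell script so a full evaluation run is one command. This is a sketch using only the commands documented on this page; the workspace name and paths are examples, and the `trap` ensures cleanup (and its billing impact) even if evaluation fails:

```bash
#!/usr/bin/env bash
set -euo pipefail

WS=kernel-dev

# Provision the workspace, then guarantee deletion on exit
wafer workspaces create --gpu H100 --name "$WS"
trap 'wafer workspaces delete "$WS"' EXIT

# Sync code, run the evaluation, and collect results
wafer workspaces sync "$WS"
wafer workspaces exec "$WS" "wafer evaluate gpumode --impl kernel.py"
wafer workspaces pull "$WS" --remote /workspace/results --local ./results
```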
## SSH Key Management

Add your SSH public key for workspace access:

```bash
# List registered keys
wafer config ssh-keys list

# Add a new key
wafer config ssh-keys add ~/.ssh/id_rsa.pub

# Remove a key
wafer config ssh-keys remove <key-id>
```
## Billing

Workspaces are billed for the time they are running. Check your usage:

```bash
# Open billing portal
wafer config billing portal

# Add credits
wafer config billing topup
```
## Next Steps