Workspaces

Wafer workspaces provide on-demand cloud GPU access for development, profiling, and kernel evaluation. Launch GPU instances, sync your code, and run commands without managing infrastructure.

Quick Start

# List available workspaces
wafer workspaces list

# Create a new workspace
wafer workspaces create --gpu H100 --name my-workspace

# SSH into the workspace
wafer workspaces ssh my-workspace

# Run a command
wafer workspaces exec my-workspace "nvidia-smi"

# Delete when done
wafer workspaces delete my-workspace

Commands

wafer workspaces list

List all workspaces and their status:
wafer workspaces list
Output shows:
  • Workspace name and ID
  • GPU type and count
  • Status (running, stopped, provisioning)
  • Runtime

wafer workspaces create

Create a new cloud GPU workspace:
wafer workspaces create [OPTIONS]
Options:
  --name     Workspace name (auto-generated if not specified)
  --gpu      GPU type: H100, A100, MI300X, etc.
  --count    Number of GPUs (default: 1)
  --image    Docker image to use
  --disk     Disk size in GB
Examples:
# Create an H100 workspace
wafer workspaces create --gpu H100 --name dev-h100

# Create a multi-GPU workspace
wafer workspaces create --gpu A100 --count 4 --name training

# Create with specific image
wafer workspaces create --gpu H100 --image pytorch/pytorch:latest

wafer workspaces delete

Delete a workspace:
wafer workspaces delete <workspace-name>
This permanently deletes the workspace and any data that has not been synced back. Pull important files with wafer workspaces pull first.

wafer workspaces show

Show detailed workspace information:
wafer workspaces show <workspace-name>

wafer workspaces ssh

SSH into a running workspace:
wafer workspaces ssh <workspace-name>
This opens an interactive SSH session. Your SSH keys are automatically configured.

wafer workspaces exec

Execute a command in the workspace:
wafer workspaces exec <workspace-name> "<command>"
Examples:
# Check GPU status
wafer workspaces exec my-workspace "nvidia-smi"

# Run a Python script
wafer workspaces exec my-workspace "python train.py"

# Install packages
wafer workspaces exec my-workspace "pip install torch"

wafer workspaces sync

Sync files between local machine and workspace:
# Sync current directory to workspace
wafer workspaces sync <workspace-name>

# Sync specific directory
wafer workspaces sync <workspace-name> --local ./src --remote /workspace/src
Options:
  --local      Local directory path
  --remote     Remote directory path
  --exclude    Patterns to exclude
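As a sketch, a sync that skips build artifacts and VCS metadata could combine the options above as follows. The glob-style --exclude pattern syntax is an assumption (check wafer workspaces sync --help); the leading echo makes this a preview, so remove it to actually run the sync.

```shell
# Preview of a sync that excludes object files and the .git directory.
# Glob-style --exclude patterns are an assumption here; the leading
# `echo` prints the command instead of running it.
echo wafer workspaces sync my-workspace \
  --local ./src --remote /workspace/src \
  --exclude "*.o" --exclude ".git"
```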

wafer workspaces pull

Pull files from workspace to local machine:
# Pull results directory
wafer workspaces pull my-workspace --remote /workspace/results --local ./results

Workflow Example

A typical development workflow:
1. Create a workspace:
   wafer workspaces create --gpu H100 --name kernel-dev

2. Sync your code:
   wafer workspaces sync kernel-dev

3. Run the evaluation:
   wafer workspaces exec kernel-dev "wafer evaluate gpumode --impl kernel.py"

4. Pull the results:
   wafer workspaces pull kernel-dev --remote /workspace/results --local ./results

5. Clean up:
   wafer workspaces delete kernel-dev
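The steps above can be chained in a single script. This is a dry-run sketch: the run() helper only prints each command so you can review the sequence first; change echo "+ $*" to "$@" to execute for real (assumes the wafer CLI is installed and authenticated).

```shell
#!/usr/bin/env sh
# Dry-run sketch of the workflow above. run() prints each command
# instead of executing it; swap `echo "+ $*"` for `"$@"` to run the
# commands for real (assumes the wafer CLI is installed and logged in).
set -eu
WS=kernel-dev

run() { echo "+ $*"; }

run wafer workspaces create --gpu H100 --name "$WS"
run wafer workspaces sync "$WS"
run wafer workspaces exec "$WS" "wafer evaluate gpumode --impl kernel.py"
run wafer workspaces pull "$WS" --remote /workspace/results --local ./results
run wafer workspaces delete "$WS"
```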

SSH Key Management

Manage the SSH public keys used for workspace access:
# List registered keys
wafer config ssh-keys list

# Add a new key
wafer config ssh-keys add ~/.ssh/id_rsa.pub

# Remove a key
wafer config ssh-keys remove <key-id>

Billing

Workspaces are billed by runtime. Check your usage:
# Open billing portal
wafer config billing portal

# Add credits
wafer config billing topup
