# Deployment

## Deploy
```console
$ threading deploy --target gcp-a100
[deploy] Containerizing workspace...
[deploy] Provisioning cluster [4x A100-80GB]
[deploy] MCP Server live: https://api.threading.cloud/v1/exp-92a
```
## Targets

| Target | GPUs | Best for |
|---|---|---|
| `gcp-a100` | 4× A100-80GB | Large models |
| `gcp-v100` | 4× V100-16GB | Standard ML |
| `aws-p4d` | 8× A100-40GB | Largest scale |
| `aws-p3` | 4× V100-16GB | Standard ML |
| `azure-a100` | 4× A100-80GB | Large models |
| `local-gpu` | Your GPU(s) | Development |
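For iterating before paying for cloud GPUs, the same command can point at the `local-gpu` target from the table above (usage inferred from the deploy example; not shown elsewhere on this page):

```console
$ threading deploy --target local-gpu
```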
## Load data

```console
$ threading ld s3://bio-data/microbiome_full_v2
[dataset] Mounting to /mnt/data...
[dataset] Sharding across 4 nodes
```
| Scheme | Example |
|---|---|
| `s3://` | `s3://bucket/path/` |
| `gs://` | `gs://bucket/path/` |
| `az://` | `az://container/path/` |
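The `ld` command should accept any of the schemes above. For example, with a Google Cloud Storage path (illustrative bucket name, mirroring the S3 example):

```console
$ threading ld gs://bio-data/microbiome_full_v2
```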
## Start jobs
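Launching a job presumably follows the same CLI pattern as `deploy` and `ld`. The `run` subcommand, the script name, and the `--nodes` flag below are illustrative assumptions, not documented commands:

```console
$ threading run train.py --nodes 4
```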
## Parameter sweeps
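A sweep might be expressed as repeated comma-separated flag values, launching one job per combination. The `sweep` subcommand and `--param` syntax here are assumptions for illustration only:

```console
$ threading sweep train.py --param lr=0.001,0.01,0.1 --param batch_size=32,64
```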
## Monitor
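Monitoring would plausibly key off the experiment ID reported by `deploy` (`exp-92a` above). The `status` and `logs` subcommands and the `--follow` flag are assumptions:

```console
$ threading status exp-92a
$ threading logs exp-92a --follow
```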
## Teardown
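To stop paying for the cluster once work is done, a teardown command in the same style might look like the following; the subcommand name is an assumption:

```console
$ threading teardown exp-92a
```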
## Cost optimization
Use spot instances:
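A sketch of what requesting spot capacity might look like, assuming a hypothetical `--spot` flag on `deploy`:

```console
$ threading deploy --target gcp-a100 --spot
```

Spot instances are typically billed at a steep discount but can be preempted by the cloud provider, so they suit fault-tolerant, checkpointed training jobs.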
Auto-shutdown:
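An idle-shutdown timer might be set at deploy time so a forgotten cluster tears itself down; the `--auto-shutdown` flag and its duration syntax are assumptions:

```console
$ threading deploy --target gcp-a100 --auto-shutdown 30m
```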