SSH Integration
SSH + Ollama via Grip OS
Distribute local AI inference across fleet machines for parallel processing.
What You Can Do
Distributed inference
Run Ollama models on different fleet machines to parallelize large processing tasks.
Model-per-machine allocation
Assign different models to different machines based on their hardware capabilities.
Fleet-wide model management
Pull, update, and manage Ollama models across all fleet machines from a single command.
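The fleet-wide capabilities above can be sketched as a simple SSH fan-out. This is a minimal illustration, not the Grip OS command itself: the hostnames and model name are placeholders, and it assumes passwordless SSH access and the `ollama` CLI on each machine. The `echo` prints each command as a dry run; remove it to actually execute over SSH.

```shell
#!/bin/sh
# Dry-run sketch: fan out "ollama pull" to every fleet host over SSH.
# HOSTS and MODEL are placeholders for your fleet configuration.
HOSTS="gpu-node-1 gpu-node-2"
MODEL="llama3"

for host in $HOSTS; do
  # Remove the leading 'echo' to run the pull for real.
  echo ssh "$host" "ollama pull $MODEL"
done
```

Backgrounding each `ssh` invocation with `&` and following the loop with `wait` would run the pulls on all machines in parallel rather than sequentially.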
How to Set Up
1. Install Ollama on each fleet machine via SSH.
2. Configure SSH keys for fleet access.
3. Pull models on each machine that match its hardware capability.
4. Use fleet batch execution to manage Ollama across machines.
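The steps above can be combined into one fan-out script. This is a hedged sketch, not the Grip OS batch command: the host-to-model mapping is a placeholder, it assumes SSH keys are already configured (step 2), and it uses Ollama's published install script URL. As in the previous example, the `echo` makes it a dry run.

```shell
#!/bin/sh
# Dry-run sketch of steps 1, 3, and 4: install Ollama and pull a model
# sized to each machine's hardware, per host, over SSH.
# FLEET holds host:model pairs; replace with your own fleet layout.
FLEET="gpu-node-1:llama3:70b cpu-node-1:llama3:8b"

for entry in $FLEET; do
  host=${entry%%:*}    # text before the first ':'  -> hostname
  model=${entry#*:}    # text after the first ':'   -> model tag
  # Step 1: install Ollama; step 3: pull the per-machine model.
  # Remove the leading 'echo' to execute for real.
  echo ssh "$host" "curl -fsSL https://ollama.com/install.sh | sh && ollama pull $model"
done
```

Mapping larger models to GPU-equipped hosts and smaller quantized models to CPU-only hosts is the per-machine allocation described earlier; the mapping lives in one place, so re-running the script updates the whole fleet.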