Get a node online in under 5 minutes. This walkthrough installs the CLI, runs setup, and verifies the node is heartbeating to the control plane.
Prerequisites:

- curl available on your PATH
- On a GPU machine, GPU drivers installed before starting. The setup wizard will detect the GPU, but it can't install drivers for you.
```bash
curl -sSL https://infernetprotocol.com/install | bash
```

This script installs the `infernet` binary to `~/.local/bin` (or `/usr/local/bin` if you have write access) and adds it to your PATH. It takes about 30 seconds.
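The installer's PATH update won't take effect in a shell that was already open. A quick POSIX-sh check for the default install directory (this helper is illustrative, not part of the installer):

```shell
#!/bin/sh
# Check whether a directory is already on PATH. The installer writes to
# ~/.local/bin or /usr/local/bin, per the note above.
on_path() {
  case ":$PATH:" in
    *":$1:"*) return 0 ;;
    *) return 1 ;;
  esac
}

if on_path "$HOME/.local/bin"; then
  echo "~/.local/bin is on PATH"
else
  echo "add ~/.local/bin to your PATH (e.g. in ~/.profile), or open a new shell"
fi
```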
Verify the install:
```bash
infernet --version
# infernet 0.9.2
```

Open the Infernet Dashboard, sign in, and navigate to Nodes → Add Node. Copy the one-time registration token shown on that page. It looks like:

```
inft_reg_7x9k2mNpQwRsLvTbY4cJ
```
Run the setup wizard:

```bash
infernet setup
```

The setup wizard will:

- Ask for your registration token
- Detect your hardware and assign a tier (>=48gb, >=24gb, >=12gb, >=8gb, or cpu)
- Install an inference backend if one isn't found
- Pull a default model for your tier (for example, qwen2.5:7b for >=8gb, qwen2.5:1.5b for cpu)
- Generate a node keypair and write the config to ~/.infernet/config.json

Example output:
```
Infernet Node Setup
===================
Registration token: inft_reg_7x9k2mNpQwRsLvTbY4cJ
Detecting hardware...
GPU: NVIDIA RTX 4090 (24 GB VRAM)
Tier: >=24gb
RAM: 64 GB
Checking inference backends...
Ollama: not found
Installing Ollama... done
Default model [qwen2.5:14b]:
Pulling qwen2.5:14b... ████████████████████ 100% (8.1 GB)
Generating node keypair...
Public key: npub1abc123...
Config written to ~/.infernet/config.json
Registering node...
Node ID: node_8f3a2c1d
Status: registered
Setup complete. Run `infernet start` to bring your node online.
```
Start the node:

```bash
infernet start
```

The daemon starts in the foreground by default. You'll see heartbeat logs:
```
[2026-04-30 14:23:01] Node node_8f3a2c1d starting...
[2026-04-30 14:23:01] Inference backend: ollama (localhost:11434)
[2026-04-30 14:23:01] Loaded models: qwen2.5:14b
[2026-04-30 14:23:02] Heartbeat OK (latency: 42ms)
[2026-04-30 14:23:32] Heartbeat OK (latency: 39ms)
```
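If you're watching a node over time, the heartbeat latencies are easy to pull out of the logs with standard tools. A small sketch, assuming the `Heartbeat OK (latency: NNms)` format shown above:

```shell
#!/bin/sh
# Extract heartbeat latencies (in ms) from daemon log lines,
# matching the "Heartbeat OK (latency: NNms)" format shown above.
latencies() {
  sed -n 's/.*latency: \([0-9]*\)ms.*/\1/p'
}

# Example, fed the two heartbeat lines from the log above:
printf '%s\n' \
  '[2026-04-30 14:23:02] Heartbeat OK (latency: 42ms)' \
  '[2026-04-30 14:23:32] Heartbeat OK (latency: 39ms)' | latencies
# prints:
# 42
# 39
```

Pipe the daemon's log file (or `infernet start` output) through `latencies` to get one number per heartbeat.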
To run it in the background:
```bash
infernet start --detach
```

Or install it as a system service that starts on boot:
```bash
infernet service install
```

Check that the node is online from the CLI:
```bash
infernet status
```

```
Node: node_8f3a2c1d
Status: online
Uptime: 3 minutes
Backend: ollama
Models: qwen2.5:14b
Jobs: 0 completed, 0 pending
Earnings: 0.00 USDC
```
You should also see the node appear as Online in the dashboard within 30 seconds of starting the daemon.
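If you're bringing nodes up from a provisioning script, you can poll the status output until the node reports online. This is a sketch, assuming `infernet status` keeps printing a `Status:` line in the format shown above; the retry count and interval are arbitrary choices:

```shell
#!/bin/sh
# Block until `infernet status` reports online, or give up after N attempts.
# Assumes the "Status: online" line format shown above.
wait_online() {
  attempts=${1:-12}
  while [ "$attempts" -gt 0 ]; do
    if infernet status 2>/dev/null | grep -q '^Status: online'; then
      echo "node is online"
      return 0
    fi
    attempts=$((attempts - 1))
    sleep 5
  done
  echo "timed out waiting for node" >&2
  return 1
}

# Usage: wait_online 24   # poll for up to ~2 minutes
```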
You can send a test inference job directly from the CLI:
```bash
infernet chat "What is the capital of France?"
```

```
Paris.
```
Or with streaming visible:
```bash
infernet chat --stream "Explain how neural networks learn."
```

Tokens will appear as they're generated.
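The same command works in scripts. A minimal batch sketch, assuming `infernet chat` prints the completion to stdout as shown above (the function name is illustrative):

```shell
#!/bin/sh
# Run one chat job per line of prompts read from stdin.
# Assumes `infernet chat` prints the completion to stdout, as shown above.
chat_batch() {
  while IFS= read -r prompt; do
    printf '>>> %s\n' "$prompt"
    infernet chat "$prompt" < /dev/null  # don't let the CLI consume our prompt list
  done
}

# Usage: printf 'prompt one\nprompt two\n' | chat_batch
```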