Quick Start Β· The Infernet Book

Quick Start

Get a node online in under five minutes. This walkthrough installs the CLI, runs the setup wizard, and verifies that the node is sending heartbeats to the control plane.

Prerequisites

If you’re on a GPU machine, make sure your GPU drivers are installed before starting. The setup wizard will detect the GPU, but it can’t install drivers for you.
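
A quick way to confirm the driver is working on an NVIDIA machine is the standard `nvidia-smi` tool that ships with NVIDIA drivers (AMD users would check `rocm-smi` instead):

```shell
# Confirm the NVIDIA driver is installed and can see the GPU.
# If nvidia-smi is missing, install the driver before running setup.
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-gpu=name,memory.total --format=csv
else
  echo "nvidia-smi not found: install your GPU driver first"
fi
```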

Step 1: Install the CLI

curl -sSL https://infernetprotocol.com/install | bash

This script installs the infernet binary to ~/.local/bin (or /usr/local/bin if you have write access) and adds it to your PATH. It takes about 30 seconds.
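
If `infernet` is not found after the script finishes, the install directory (usually ~/.local/bin) may not be on your PATH yet. A quick fix, assuming a bash shell (use ~/.zshrc on zsh):

```shell
# Make the install directory visible in the current shell...
export PATH="$HOME/.local/bin:$PATH"
# ...and persist it for future sessions.
echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.bashrc
```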

Verify the install:

infernet --version
# infernet 0.9.2

Step 2: Get Your Registration Token

Open the Infernet Dashboard, sign in, and navigate to Nodes β†’ Add Node. Copy the one-time registration token shown on that page. It looks like:

inft_reg_7x9k2mNpQwRsLvTbY4cJ

Step 3: Run Setup

infernet setup

The setup wizard will:

  1. Ask for your registration token
  2. Detect your GPU (NVIDIA, AMD, Apple Silicon, or CPU)
  3. Determine your VRAM tier (>=48gb, >=24gb, >=12gb, >=8gb, or cpu)
  4. Install Ollama if no inference backend is detected
  5. Ask which default model to load (the default scales with your VRAM tier: qwen2.5:7b for >=8gb, qwen2.5:1.5b for cpu, and larger models for higher tiers)
  6. Generate a secp256k1 keypair for this node
  7. Register the node with the control plane
  8. Write the config to ~/.infernet/config.json

Example output:

Infernet Node Setup
===================
Registration token: inft_reg_7x9k2mNpQwRsLvTbY4cJ

Detecting hardware...
  GPU: NVIDIA RTX 4090 (24 GB VRAM)
  Tier: >=24gb
  RAM: 64 GB

Checking inference backends...
  Ollama: not found
  Installing Ollama...  done

Default model [qwen2.5:14b]: 
Pulling qwen2.5:14b... β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ 100% (8.1 GB)

Generating node keypair...
  Public key: npub1abc123...
  Config written to ~/.infernet/config.json

Registering node...
  Node ID: node_8f3a2c1d
  Status: registered

Setup complete. Run `infernet start` to bring your node online.
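
The last wizard step writes the node's configuration to ~/.infernet/config.json. A sketch of what that file might contain for the run above (field names are illustrative, not the actual schema; check the file on your machine):

```json
{
  "node_id": "node_8f3a2c1d",
  "tier": ">=24gb",
  "public_key": "npub1abc123...",
  "backend": {
    "type": "ollama",
    "url": "http://localhost:11434"
  },
  "models": ["qwen2.5:14b"]
}
```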

Step 4: Start the Daemon

infernet start

The daemon starts in the foreground by default. You’ll see heartbeat logs:

[2026-04-30 14:23:01] Node node_8f3a2c1d starting...
[2026-04-30 14:23:01] Inference backend: ollama (localhost:11434)
[2026-04-30 14:23:01] Loaded models: qwen2.5:14b
[2026-04-30 14:23:02] Heartbeat OK (latency: 42ms)
[2026-04-30 14:23:32] Heartbeat OK (latency: 39ms)

To run it in the background:

infernet start --detach

Or install it as a system service that starts on boot:

infernet service install
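
On Linux, the service subcommand typically registers the daemon with systemd. A sketch of what such a unit might look like (unit name, paths, and location are illustrative; inspect what `infernet service install` actually writes on your system):

```ini
# ~/.config/systemd/user/infernet.service (hypothetical location)
[Unit]
Description=Infernet node daemon
After=network-online.target

[Service]
# Adjust the path if the binary was installed to /usr/local/bin.
ExecStart=%h/.local/bin/infernet start
Restart=on-failure

[Install]
WantedBy=default.target
```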

Step 5: Verify

Check the node is online from the CLI:

infernet status
Node:     node_8f3a2c1d
Status:   online
Uptime:   3 minutes
Backend:  ollama
Models:   qwen2.5:14b
Jobs:     0 completed, 0 pending
Earnings: 0.00 USDC

You should also see the node appear as Online in the dashboard within 30 seconds of starting the daemon.
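
In provisioning scripts you may want to block until the node reports online before moving on. A small sketch that polls `infernet status` (the `Status:` line format is taken from the output above; the 60-second timeout and 5-second interval are arbitrary choices):

```shell
# Poll `infernet status` until it reports "online", up to ~60 seconds.
wait_online() {
  i=0
  while [ "$i" -lt 12 ]; do
    if infernet status 2>/dev/null | grep -q '^Status:[[:space:]]*online'; then
      echo "node is online"
      return 0
    fi
    sleep 5
    i=$((i + 1))
  done
  echo "timed out waiting for node to come online" >&2
  return 1
}
```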

Step 6: Send a Test Job

You can send a test inference job directly from the CLI:

infernet chat "What is the capital of France?"
Paris.

Or with streaming visible:

infernet chat --stream "Explain how neural networks learn."

Tokens will appear as they’re generated.
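
The daemon logs above show the inference backend at localhost:11434, Ollama's default port. If a chat job misbehaves, one way to isolate the problem is to query Ollama directly, bypassing the node entirely (this uses Ollama's standard /api/tags and /api/chat endpoints; the guard skips the call when nothing is listening):

```shell
# Sanity-check the Ollama backend directly, bypassing the Infernet node.
# Skips quietly if nothing is listening on localhost:11434.
if curl -sf http://localhost:11434/api/tags >/dev/null 2>&1; then
  curl -s http://localhost:11434/api/chat -d '{
    "model": "qwen2.5:14b",
    "messages": [{"role": "user", "content": "What is the capital of France?"}],
    "stream": false
  }'
else
  echo "ollama not reachable on localhost:11434"
fi
```

If Ollama answers here but `infernet chat` does not, the problem is in the node rather than the backend.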


What’s Next