This chapter explains what Infernet Protocol is, why it exists, and how the pieces fit together. By the end you’ll have a mental model of the whole system and a working node (or API call) to prove it.
Anyone with a GPU can run a node. Anyone who needs LLM inference submits a job. The network routes the job to an available node with the requested model loaded; the node runs inference and streams back the result, and the client's payment is settled on-chain. No single company controls routing, pricing, or which models are available. The cryptographic authentication layer means nodes never have to trust the control plane with their private keys.
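The lifecycle above (submit, route, infer, sign) can be sketched in a few dozen lines. Everything here is illustrative: the message shapes, field names, and the HMAC stand-in for signing are assumptions for exposition, not the real Infernet Protocol wire format or crypto scheme.

```python
import hashlib
import hmac
import json
from dataclasses import dataclass

@dataclass
class InferenceJob:
    job_id: str
    model: str       # model the client wants; the network routes on this
    prompt: str
    max_price: int   # most the client will pay, in smallest token units

@dataclass
class SignedResult:
    job_id: str
    output: str
    signature: str   # proves which node produced the result

def route(job: InferenceJob, nodes: dict[str, set[str]]) -> str:
    """Pick any available node that has the requested model loaded."""
    for node_id, loaded_models in nodes.items():
        if job.model in loaded_models:
            return node_id
    raise LookupError(f"no node has {job.model} loaded")

def run_and_sign(job: InferenceJob, node_key: bytes) -> SignedResult:
    """The node runs inference locally and signs the result with a key
    the control plane never sees (HMAC stands in for a real signature)."""
    output = f"(inference output for {job.prompt!r})"  # stubbed inference
    payload = json.dumps({"job_id": job.job_id, "output": output}).encode()
    sig = hmac.new(node_key, payload, hashlib.sha256).hexdigest()
    return SignedResult(job.job_id, output, sig)

# Client submits a job; the network routes it; the node executes and signs.
nodes = {"node-a": {"llama-3-70b"}, "node-b": {"mixtral-8x7b"}}
job = InferenceJob("job-1", "mixtral-8x7b", "Hello", max_price=1000)
chosen = route(job, nodes)
result = run_and_sign(job, node_key=b"node-b-secret")
print(chosen, result.job_id)
```

The key architectural point the sketch preserves: the routing step only needs to know which models each node advertises, while the signing key stays on the node, so the control plane can schedule work without ever being able to impersonate a node.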