NeuraLinker Whitepaper

This document details NeuraLinker's vision, technical architecture, token economics, and future development roadmap.

1. Abstract & Vision

1.1 Abstract

NeuraLinker is a blockchain-native marketplace that aggregates idle GPU resources into a single, verifiable cloud for artificial-intelligence workloads. By combining smart-contract scheduling, zero-knowledge performance proofs, and per-second micro-payments, the network turns fragmented hardware—whether in enterprise data centres or consumer gaming rigs—into a censorship-resistant super-computer. Developers gain deterministic pricing, cryptographic auditability, and immediate global reach; resource providers monetise excess capacity without platform lock-in or credit risk. NeuraLinker therefore addresses the growing imbalance between AI demand and the highly centralised supply of compute, unlocking a scalable, trust-less alternative to traditional hyperscale clouds.

1.2 One-Sentence Value Proposition

NeuraLinker converts idle GPUs into a verifiable, permission-less pool of AI compute that anyone can access, audit, and monetise in real time.

1.3 Vision

Our long-term vision is to make AI computation verifiable, permission-less, and globally accessible:

Verifiable – Every inference or training step is accompanied by machine-generated, zero-knowledge attestations recorded on-chain. This creates an immutable audit trail that guarantees the work was executed on the claimed hardware, within the declared service-level agreement, and without exposing proprietary data or models.

Permission-less – Any GPU owner can join the network by running a lightweight node and accepting jobs through open-source tooling. Likewise, any developer can submit a containerised workload without signing legal contracts, undergoing KYC, or relying on a single cloud vendor. The protocol's economic incentives—auction pricing, automatic settlement, and reputation NFTs—replace human gatekeepers with transparent code.

Globally Accessible – Layer-2 payment channels, cross-chain asset support, and auto-scaling resource auctions ensure that compute flows to where it is needed, when it is needed, at a market-clearing price denominated in the user's preferred token. This lowers the barrier for start-ups, academic labs, and emerging-market innovators to harness state-of-the-art AI without capital-intensive infrastructure.

By executing on this vision, NeuraLinker aims to become the foundational layer where machine learning, cryptographic verification, and open markets converge—democratising access to the most critical input of the intelligence era: compute.

2. Market Problem & Opportunity

2.1 Centralised GPU Supply & Cost Volatility

A handful of hyperscale clouds and hardware vendors now control the bulk of professional-grade GPUs. Spot prices for an NVIDIA H100 hover around US $2.49 per hour on the major clouds, while decentralised marketplaces list the same card at US $0.73–1.61, a 35–70 % discount. Severe retail mark-ups persist as well: as of April 2025 the consumer RTX 5080 was trading 50 % above MSRP, with the flagship RTX 5090 virtually unobtainable even at a US $3 000 street price. Such scarcity and price swings make cost planning for AI projects nearly impossible.

2.2 Runaway Demand for Large-Scale Training & Inference

The addressable market for AI compute has ballooned from niche academic clusters to a global race to build and serve trillion-parameter models. Industry reports peg the GPU market at US $61.6 billion in 2024, with forecasts exceeding US $460 billion by 2032. Meanwhile, the large-language-model segment alone is on a >30 % CAGR trajectory through 2030. Enterprises, start-ups, and public-sector labs increasingly compete for the same finite pool of accelerator cards, driving queue times up and compute budgets sky-high.

2.3 Limitations of Current Centralised Clouds

Central clouds offer convenience but introduce structural risks:

These inefficiencies translate directly into slower model iteration cycles and higher barriers to entry for smaller innovators.

2.4 Blockchain-Based Marketplaces Unlock Idle Capacity

A permission-less, on-chain marketplace can invert the equation:

Early networks such as Akash and Render have validated this thesis, already onboarding professional H100 clusters well below central-cloud rates and scaling toward thousands of facilities. NeuraLinker extends the paradigm to AI-specific workloads with model sharding, zk-proof validation, and a fine-tuning bounty exchange—unlocking the "dark GPU" economy at planetary scale.

3. Technology Architecture

3.1 Network Layers

Client SDK – A TypeScript/Go toolkit and CLI that package workloads as signed, OCI-compatible containers, encrypt input data, and broadcast bids. The SDK also opens a NeuraLinker streaming-payment channel and tracks zk-proof receipts in real time.
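
A minimal sketch of how the SDK might derive a deterministic job identifier before broadcasting a bid. The `JobSpec` fields mirror the parameters named in this document, but the interface, the placeholder CIDs, and the use of sha256 in place of the real signature scheme are all illustrative assumptions, not the shipped SDK API.

```typescript
import { createHash } from "node:crypto";

// Hypothetical job spec mirroring the fields this whitepaper describes.
interface JobSpec {
  containerCid: string;   // signed OCI container pinned to IPFS
  dataCid: string;        // encrypted input data
  maxBudget: number;      // escrow ceiling in protocol tokens
  slaMs: number;          // maximum tolerated telemetry gap
  hardwareClass: string;  // e.g. "H100-80GB"
}

// The SDK would sign a canonical encoding of the spec; sha256 over a
// key-sorted JSON encoding stands in for the real signature here.
function jobHash(spec: JobSpec): string {
  const canonical = JSON.stringify(spec, Object.keys(spec).sort());
  return createHash("sha256").update(canonical).digest("hex");
}

const spec: JobSpec = {
  containerCid: "bafy-example-container", // placeholder CID
  dataCid: "bafy-example-inputs",         // placeholder CID
  maxBudget: 500,
  slaMs: 10_000,
  hardwareClass: "H100-80GB",
};

console.log(jobHash(spec)); // deterministic 64-character hex digest
```

Sorting the keys before hashing makes the digest independent of property order, so client and contract can agree on the same job hash.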

Resource-Auction Layer – A continuous double-auction smart contract pools all incoming bids and matches them to idle GPU offers. Matching logic considers token price, VRAM size, TEE availability, renewable-energy score, and historical reputation. Clearing happens in 30-second blocks; the winning provider locks the job hash and starts execution.
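
The composite-score matching described above can be sketched as a weighted sum over the listed factors. The weights and normalisation choices below are invented for illustration; the protocol's actual scoring constants are not specified in this document.

```typescript
// Illustrative composite scoring for sealed GPU offers.
interface Offer {
  node: string;
  pricePerSec: number;    // token price per second (lower is better)
  vramGb: number;
  hasTee: boolean;
  renewableScore: number; // 0..1 renewable-energy score
  reputation: number;     // 0..1 historical reputation
}

// Assumed weights: price 40 %, VRAM 20 %, TEE 15 %, energy 10 %,
// reputation 15 %. VRAM is normalised against an 80 GB reference card.
function compositeScore(o: Offer, maxPrice: number): number {
  const priceScore = 1 - o.pricePerSec / maxPrice; // cheaper → closer to 1
  return (
    0.4 * priceScore +
    0.2 * Math.min(o.vramGb / 80, 1) +
    0.15 * (o.hasTee ? 1 : 0) +
    0.1 * o.renewableScore +
    0.15 * o.reputation
  );
}

// The 30-second clearing block would then lock in the top-scoring offer.
function matchJob(offers: Offer[], maxPrice: number): Offer {
  return offers.reduce((best, o) =>
    compositeScore(o, maxPrice) > compositeScore(best, maxPrice) ? o : best
  );
}
```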

Settlement Layer – Per-second NeuraLinker pay-streams are settled on an Optimistic-rollup L2. The stream throttles or pauses automatically if telemetry drops below the contracted SLA and finalises after a 60-second dispute window. Payments flow to the provider's wallet, while a 1 % protocol fee accrues to the treasury.
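
The per-second stream with SLA throttling and the 1 % protocol fee can be reduced to a small settlement function. This is a worked sketch of the accounting only; the actual rollup contract's interface is not shown in this document.

```typescript
// 1 % protocol fee, as stated in the text.
const PROTOCOL_FEE = 0.01;

// Each entry in slaMet records whether telemetry satisfied the SLA for
// that one-second tick; ticks that failed accrue no payment.
function settleStream(
  ratePerSec: number,
  slaMet: boolean[]
): { provider: number; treasury: number } {
  const paidTicks = slaMet.filter(Boolean).length;
  const gross = paidTicks * ratePerSec;
  const treasury = gross * PROTOCOL_FEE;
  return { provider: gross - treasury, treasury };
}
```

For example, four ticks at 2 tokens/second with one SLA miss settles 6 tokens gross, of which 1 % accrues to the treasury.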

3.2 On-Chain Scheduler & Job Lifecycle

  1. Submit – Developer calls submitJob(), uploading the container CID, encrypted data CID, max budget, SLA, and desired hardware class.
  2. Pre-flight Checks – Contract validates signatures, confirms the CID exists on IPFS/Filecoin, and escrows the max NeuraLinker budget.
  3. Auction Match – Scheduler emits a MatchRequest event; GPU nodes respond with sealed bids that include enclave-measurement hashes.
  4. Lock-In – Highest composite-score node receives a JobBegin message; container mounts within its TEE and pulls encrypted inputs.
  5. Run & Stream – Node emits TelemetryCommit every 10 s (hashed metrics + zk proof). Payment micro-stream ticks each second.
  6. Complete – Final model/artifacts are pushed to IPFS/Filecoin; contract verifies the output hash, closes the stream, and releases any budget remainder to the user.
  7. Dispute (Optional) – If output hash or telemetry proof fails, user calls openDispute(); automated verifier replays the proofs and, if invalid, cancels payment from the last valid tick forward.
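
The seven steps above form a linear state machine with an optional dispute branch. The state names and transition table below are a reading of this lifecycle, not the deployed contract's interface.

```typescript
// Job lifecycle states inferred from the seven steps above.
type JobState =
  | "Submitted" | "Escrowed" | "Matched" | "Locked"
  | "Running" | "Complete" | "Disputed";

// Legal transitions; "Complete" can still enter "Disputed" during the
// 60-second dispute window described in the settlement layer.
const transitions: Record<JobState, JobState[]> = {
  Submitted: ["Escrowed"],
  Escrowed: ["Matched"],
  Matched: ["Locked"],
  Locked: ["Running"],
  Running: ["Complete", "Disputed"],
  Complete: ["Disputed"],
  Disputed: [],
};

function advance(from: JobState, to: JobState): JobState {
  if (!transitions[from].includes(to)) {
    throw new Error(`illegal transition ${from} -> ${to}`);
  }
  return to;
}
```

Encoding the lifecycle as an explicit table makes illegal shortcuts (e.g. skipping the auction match) detectable at the contract boundary.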

3.3 Security Primitives

Together, these components form a layered, cryptographically-enforced pipeline that makes every AI workload provable, tamper-resistant, and pay-per-second composable.

4. Advanced Features

4.1 Sharded Training

NeuraLinker's scheduler can decompose a multi-billion-parameter network into smaller, cryptographically-addressed shards.

Outcome – Sharding lets many modest GPUs act as one super-computer while preserving auditability and resistance to censorship or hardware monopolies.
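
One way to read "cryptographically-addressed shards" is content addressing: each parameter shard is identified by its own hash, and a root hash over the shard addresses binds the whole model together. The sketch below assumes sha256 content addressing with a flat root; the production scheme (e.g. a Merkle tree) is not specified here.

```typescript
import { createHash } from "node:crypto";

// Split a flat parameter buffer into fixed-length shards, address each
// shard by the sha256 of its bytes, and derive a root hash over the
// ordered shard addresses.
function shardParams(params: Float32Array, shardLen: number) {
  const shards: { addr: string; data: Float32Array }[] = [];
  for (let i = 0; i < params.length; i += shardLen) {
    const data = params.slice(i, i + shardLen); // copy with its own buffer
    const addr = createHash("sha256")
      .update(Buffer.from(data.buffer, data.byteOffset, data.byteLength))
      .digest("hex");
    shards.push({ addr, data });
  }
  const root = createHash("sha256")
    .update(shards.map(s => s.addr).join(""))
    .digest("hex");
  return { shards, root };
}
```

Any node can verify it received the correct shard by rehashing the bytes, and the scheduler can verify reassembly against the root without holding the full model.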

4.2 zk Telemetry

Every job streams performance evidence through a zero-knowledge performance oracle that proves, without revealing raw numbers:

Implementation details: a Groth16 circuit takes hashed Prometheus metrics as public inputs and outputs a succinct, constant-size proof (a few hundred bytes) every ten seconds. The contract accepts or throttles the micro-payment stream based on these proofs, eliminating the need for reputational staking or manual disputes.
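
The contract-side gating reduces to: hash the metrics into the public input, verify the proof, and pay or throttle accordingly. Real Groth16 verification is out of scope for a sketch, so a pluggable verifier callback stands in for it below; only the public-input hashing is concrete.

```typescript
import { createHash } from "node:crypto";

interface TelemetryCommit {
  metricsHash: string; // sha256 of the raw Prometheus scrape (public input)
  proof: Uint8Array;   // opaque zk-proof bytes, verified elsewhere
}

function hashMetrics(rawScrape: string): string {
  return createHash("sha256").update(rawScrape).digest("hex");
}

// A stand-in for the on-chain Groth16 verifier: if the proof checks out
// against the public input, the micro-payment stream keeps ticking.
function gateStream(
  commit: TelemetryCommit,
  verify: (publicInput: string, proof: Uint8Array) => boolean
): "pay" | "throttle" {
  return verify(commit.metricsHash, commit.proof) ? "pay" : "throttle";
}
```

Because only the hash is public, the provider's raw utilisation numbers never leave the node, which is the point of the zero-knowledge oracle.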

4.3 Fine-Tune DEX

NeuraLinker embeds a miniature exchange for micro-finetuning jobs:

This DEX unlocks new revenue streams for data curators and model builders while keeping compute completely trust-less.

5. Tokenomics & Incentive Design

5.1 Token Overview

5.2 Allocation & Unlock Plan

| Allocation Category | Token Amount | % of Total Supply | Unlock % at TGE | Cliff (months) | Vesting (months) | TGE % of Total Supply |
| --- | --- | --- | --- | --- | --- | --- |
| AI-Compute Ecosystem Incentives | 4,000,000,000,000 | 40 % | 27 % | 0 | 24 | 6.00 % |
| Team & Core Contributors | 2,000,000,000,000 | 20 % | 15 % | 12 | 36 | 5.00 % |
| Ecosystem Development & Partners | 1,500,000,000,000 | 15 % | 20 % | 12 | 24 | 3.00 % |
| Community & Market Growth | 1,500,000,000,000 | 15 % | 25 % | 0 | 18 | 3.75 % |
| Foundation Reserves | 1,000,000,000,000 | 10 % | 10 % | 0 | 12 | 1.00 % |
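
Taking the "TGE % of Total Supply" column at face value, the circulating float at launch works out as below. This is a worked arithmetic check against a 10-trillion total supply (the sum of the allocation amounts), not protocol code.

```typescript
// Total supply implied by the allocation table: 10 trillion tokens.
const TOTAL_SUPPLY = 10_000_000_000_000;

// "TGE % of Total Supply" column, one entry per allocation row.
const tgePctRows = [6.0, 5.0, 3.0, 3.75, 1.0];

const tgePctSum = tgePctRows.reduce((a, b) => a + b, 0); // 18.75 %
const tgeFloat = (tgePctSum / 100) * TOTAL_SUPPLY;

console.log(tgeFloat); // 1875000000000 tokens liquid at TGE
```

So roughly 1.875 trillion tokens (18.75 % of supply) are liquid at the Token Generation Event before any vesting streams begin.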

Release Logic
The initial unlock is liquid at the Token Generation Event; the remainder streams block by block over the stated vesting horizon. Vesting contracts are non-upgradeable, and allocations are revocable only if a recipient breaches their contributor agreement.
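
The release logic above can be expressed as a small vesting function. The sketch assumes linear streaming that begins only after the cliff ends; whether the cliff period counts toward the vesting horizon is not stated in the table, so that choice is an assumption.

```typescript
// Vested amount after a given number of months, per the release logic:
// the TGE slice is liquid immediately; the rest streams linearly once
// the cliff has passed.
function vestedAmount(
  total: number,
  tgeUnlockPct: number,   // e.g. 0.15 for a 15 % TGE unlock
  cliffMonths: number,
  vestingMonths: number,
  monthsElapsed: number
): number {
  const tge = total * tgeUnlockPct;
  if (monthsElapsed < cliffMonths) return tge; // only the TGE slice is liquid
  const vestable = total - tge;
  const progress = Math.min((monthsElapsed - cliffMonths) / vestingMonths, 1);
  return tge + vestable * progress;
}
```

Under this reading, a team allocation (15 % TGE unlock, 12-month cliff, 36-month vest) is 15 % liquid until month 12 and fully vested at month 48.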

5.3 Economic Flow

5.4 Governance Framework

5.5 Token Value Accrual

6. Roadmap & Milestones

| Phase | Timeframe | Milestone | Target KPIs (end-phase) |
| --- | --- | --- | --- |
| 0. Private Alpha | 2025 Q3 | Internal cluster + early node-operator SDK | 20 GPU nodes; 0.5 PFLOPS sustained; emissions-audit baseline |
| 1. Public Testnet | 2025 Q4 | Open RPC endpoint, faucet, on-chain scheduler V1 | 200 nodes; 5 PFLOPS; 5 t CO₂ offset |
| 2. Mainnet Launch | 2026 Q1 | Payment channels, zk-Telemetry, security audit | 2,000 nodes; 50 PFLOPS; 50 t CO₂ offset |
| 3. Multi-Chain Liquidity Gateway | 2026 Q2 | Atomic NeuraLinker swaps (ETH, SOL, BSC) | 3,000 nodes; 80 PFLOPS; 100 t CO₂ offset |
| 4. DAO Transition | 2026 Q3 | Governance contracts, treasury on-chain | 5,000 nodes; 120 PFLOPS; 200 t CO₂ offset |

7. Team

Dr. Elena Novak — Chief Technology Officer

PhD in distributed systems; former core developer at the IOTA Foundation. Leads the design of NeuraLinker's on-chain scheduler and network architecture.

Gabriel Ortiz — Lead AI Scientist

Ex-OpenAI researcher in reinforcement learning. Architects the model-sharding and large-scale training algorithms that power the platform's parallel compute layer.

Priya Ramanathan — Protocol Engineer

Former Ethereum Foundation fellow. Implements zero-knowledge telemetry circuits and performs formal security audits of all smart-contract modules.

Sophia Martinez — Product & UX Lead

Previously design manager at Figma. Owns developer-tooling UX and creates the dashboard workflows that make job submission friction-less.

Team Snapshot

This compact, four-person squad has shipped open-source GPU-orchestration libraries adopted by Hugging Face and published peer-reviewed research on zk-proof systems. Their combined expertise in blockchain infrastructure, advanced AI research, and developer-centric product design powers NeuraLinker's mission to deliver verifiable, decentralized compute.