NeuraLinker is a blockchain-native marketplace that aggregates idle GPU resources into a single, verifiable cloud for artificial-intelligence workloads.
This document details NeuraLinker's vision, technical architecture, token economics, and future development roadmap.
By combining smart-contract scheduling, zero-knowledge performance proofs, and per-second micro-payments, the network turns fragmented hardware—whether in enterprise data centres or consumer gaming rigs—into a censorship-resistant super-computer. Developers gain deterministic pricing, cryptographic auditability, and immediate global reach; resource providers monetise excess capacity without platform lock-in or credit risk. NeuraLinker therefore addresses the growing imbalance between AI demand and the highly centralised supply of compute, unlocking a scalable, trust-less alternative to traditional hyperscale clouds.
NeuraLinker converts idle GPUs into a verifiable, permission-less pool of AI compute that anyone can access, audit, and monetise in real time.
Our long-term vision is to make AI computation verifiable, permission-less, and globally accessible:
Verifiable – Every inference or training step is accompanied by machine-generated, zero-knowledge attestations recorded on-chain. This creates an immutable audit trail that guarantees the work was executed on the claimed hardware, within the declared service-level agreement, and without exposing proprietary data or models.
Permission-less – Any GPU owner can join the network by running a lightweight node and accepting jobs through open-source tooling. Likewise, any developer can submit a containerised workload without signing legal contracts, undergoing KYC, or relying on a single cloud vendor. The protocol's economic incentives—auction pricing, automatic settlement, and reputation NFTs—replace human gatekeepers with transparent code.
Globally Accessible – Layer-2 payment channels, cross-chain asset support, and auto-scaling resource auctions ensure that compute flows to where it is needed, when it is needed, at a market-clearing price denominated in the user's preferred token. This lowers the barrier for start-ups, academic labs, and emerging-market innovators to harness state-of-the-art AI without capital-intensive infrastructure.
By executing on this vision, NeuraLinker aims to become the foundational layer where machine learning, cryptographic verification, and open markets converge—democratising access to the most critical input of the intelligence era: compute.
A handful of hyperscale clouds and hardware vendors now control the bulk of professional-grade GPUs. Spot prices for an NVIDIA H100 hover around US $2.49 per hour on the major clouds, while decentralised marketplaces list the same card between US $0.73–1.61—a 35-70 % discount. Severe retail mark-ups persist as well: the consumer RTX 5080 is trading 50 % above MSRP in April 2025, with the flagship RTX 5090 virtually unobtainable even at a US $3 000 street price. Such scarcity and price swings make cost planning for AI projects nearly impossible.
The addressable market for AI compute has ballooned from niche academic clusters to a global race to build and serve trillion-parameter models. Industry reports peg the GPU market at US $61.6 billion in 2024, with forecasts exceeding US $460 billion by 2032. Meanwhile, the large-language-model segment alone is on a >30 % CAGR trajectory through 2030. Enterprises, start-ups, and public-sector labs increasingly compete for the same finite pool of accelerator cards, driving queue times up and compute budgets sky-high.
Central clouds offer convenience but introduce structural risks: opaque spot pricing that swings with scarcity, vendor lock-in, long queues for scarce accelerator capacity, and a single point of control over which workloads may run.
These inefficiencies translate directly into slower model iteration cycles and higher barriers to entry for smaller innovators.
A permission-less, on-chain marketplace can invert the equation: idle hardware becomes open supply, continuous auctions set transparent market-clearing prices, and settlement is enforced by code rather than by contracts and credit checks.
Early networks such as Akash and Render have validated this thesis, already onboarding professional H100 clusters well below central-cloud rates and scaling toward thousands of facilities. NeuraLinker extends the paradigm to AI-specific workloads with model sharding, zk-proof validation, and a fine-tuning bounty exchange—unlocking the "dark GPU" economy at planetary scale.
Client SDK – A TypeScript/Go toolkit and CLI that packages workloads as signed OCI-compatible containers, encrypts input data, and broadcasts bids. The SDK also opens a NeuraLinker streaming-payment channel and tracks zk-proof receipts in real time.
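The hash-then-sign pattern the SDK uses before bidding can be sketched as follows. The manifest field layout and function names are illustrative assumptions, not the published SDK API; an Ed25519 keypair stands in for the developer's wallet key.

```typescript
import { createHash, generateKeyPairSync, sign, verify } from "crypto";

// Hypothetical job manifest: what the CLI would hash and sign before
// broadcasting a bid. Field names are assumptions for illustration.
interface JobManifest {
  containerDigest: string; // OCI image digest of the packaged workload
  inputHash: string;       // hash of the encrypted input payload
  maxPricePerSec: number;  // bid ceiling, in tokens per second
}

function jobHash(m: JobManifest): Buffer {
  return createHash("sha256").update(JSON.stringify(m)).digest();
}

// Ed25519 keypair standing in for the developer's wallet key.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

function signJob(m: JobManifest): Buffer {
  return sign(null, jobHash(m), privateKey);
}

function verifyJob(m: JobManifest, signature: Buffer): boolean {
  return verify(null, jobHash(m), publicKey, signature);
}
```

Because the signature covers the full manifest hash, any later change to the container digest, input, or price ceiling invalidates the bid.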
Resource-Auction Layer – A continuous double-auction smart contract pools all incoming bids and matches them to idle GPU offers. Matching logic considers token price, VRAM size, TEE availability, renewable-energy score, and historical reputation. Clearing happens in 30-second blocks; the winning provider locks the job hash and starts execution.
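The matching criteria above can be sketched as a filter-then-score pass over open offers. The weights, field names, and scoring formula here are assumptions for illustration, not the deployed contract logic:

```typescript
// Illustrative matching for the resource auction: hard constraints filter
// offers, then a weighted score ranks the survivors. All weights are assumed.

interface GpuOffer {
  provider: string;
  pricePerSec: number;    // asking price, tokens per second
  vramGb: number;
  hasTee: boolean;        // trusted execution environment available
  renewableScore: number; // 0..1 renewable-energy score
  reputation: number;     // 0..1, derived from historical zk-proof receipts
}

interface Bid {
  maxPricePerSec: number;
  minVramGb: number;
  requireTee: boolean;
}

function matchBid(bid: Bid, offers: GpuOffer[]): GpuOffer | undefined {
  // Hard constraints: price ceiling, VRAM floor, TEE requirement.
  const eligible = offers.filter(
    (o) =>
      o.pricePerSec <= bid.maxPricePerSec &&
      o.vramGb >= bid.minVramGb &&
      (!bid.requireTee || o.hasTee)
  );
  // Soft ranking: cheaper, better-reputed, greener offers score higher.
  const score = (o: GpuOffer) =>
    0.5 * (1 - o.pricePerSec / bid.maxPricePerSec) +
    0.3 * o.reputation +
    0.2 * o.renewableScore;
  return eligible.sort((a, b) => score(b) - score(a))[0];
}
```

In the real contract this pass would run once per 30-second clearing block over all pooled bids and offers.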
Settlement Layer – Per-second NeuraLinker pay-streams are settled on an Optimistic-rollup L2. The stream throttles or pauses automatically if telemetry drops below the contracted SLA and finalises after a 60-second dispute window. Payments flow to the provider's wallet, while a 1 % protocol fee accrues to the treasury.
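A toy model of this settlement logic, ignoring the rollup and dispute window: each telemetry tick either streams one second of pay or pauses, and the 1 % protocol fee is split out of the gross. The SLA check and field names are illustrative assumptions.

```typescript
// Simplified per-second pay-stream settlement with the 1 % protocol fee.

const PROTOCOL_FEE = 0.01;

interface Tick {
  gpuUtilisation: number; // 0..1, reported by telemetry each second
}

function settleStream(
  ratePerSec: number,        // contracted tokens per second
  slaMinUtilisation: number, // SLA floor; below this the stream pauses
  ticks: Tick[]
): { provider: number; treasury: number } {
  // Pay only for seconds where telemetry met the contracted SLA.
  const paidSeconds = ticks.filter(
    (t) => t.gpuUtilisation >= slaMinUtilisation
  ).length;
  const gross = paidSeconds * ratePerSec;
  const treasury = gross * PROTOCOL_FEE; // 1 % accrues to the treasury
  return { provider: gross - treasury, treasury };
}
```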
Together, these components form a layered, cryptographically-enforced pipeline that makes every AI workload provable, tamper-resistant, and pay-per-second composable.
NeuraLinker's scheduler can decompose a multi-billion-parameter network into smaller, cryptographically-addressed shards.
Outcome – Sharding lets many modest GPUs act as one super-computer while preserving auditability and resistance to censorship or hardware monopolies.
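The decomposition above can be sketched as content-addressed slices of a flat parameter buffer. The shard size and hash choice (SHA-256) are illustrative assumptions; the protocol's real addressing scheme may differ.

```typescript
import { createHash } from "crypto";

// Sketch: split a flat parameter buffer into fixed-size shards, each
// addressed by the SHA-256 of its contents.

interface Shard {
  index: number;
  address: string;   // hex content hash
  data: Float32Array;
}

function shardParameters(params: Float32Array, shardSize: number): Shard[] {
  const shards: Shard[] = [];
  for (let i = 0; i * shardSize < params.length; i++) {
    const data = params.slice(i * shardSize, (i + 1) * shardSize);
    const address = createHash("sha256")
      .update(Buffer.from(data.buffer, data.byteOffset, data.byteLength))
      .digest("hex");
    shards.push({ index: i, address, data });
  }
  return shards;
}
```

Content addressing means identical shards hash to the same address, so any node can verify it received the shard the scheduler assigned.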
Every job streams performance evidence through a zero-knowledge performance oracle that proves, without revealing raw numbers:
Implementation details: a Groth16 circuit takes hashed Prometheus metrics as public inputs and outputs a succinct proof (~10 kB) every ten seconds. The contract accepts or throttles the micro-payment stream based on these proofs, eliminating the need for reputational staking or manual disputes.
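The accept-or-throttle behaviour can be modelled as a stream that advances one epoch (ten seconds here) only when a valid proof arrives. `verifyProof` below is a placeholder for an actual Groth16 verifier and the `Proof` type is an assumption.

```typescript
// Toy model of the proof-gated payment stream.

type Proof = { valid: boolean }; // stand-in for a ~10 kB Groth16 proof

const verifyProof = (p: Proof): boolean => p.valid; // placeholder verifier

// Returns the number of seconds the stream actually paid out,
// given one proof per ten-second epoch.
function paidSeconds(proofs: Proof[], epochSeconds = 10): number {
  let paid = 0;
  for (const p of proofs) {
    if (verifyProof(p)) paid += epochSeconds; // stream advances
    // otherwise the stream is throttled for this epoch
  }
  return paid;
}
```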
NeuraLinker embeds a miniature exchange for micro-finetuning jobs: data curators post bounties alongside curated datasets, model builders bid to execute the fine-tune, and results are validated through the same zk-proof pipeline before payout.
This DEX unlocks new revenue streams for data curators and model builders while keeping compute completely trust-less.
| Allocation Category | Token Amount | % of Total Supply | Unlock % at TGE | Cliff (months) | Vesting (months) | TGE % of Total Supply |
|---|---|---|---|---|---|---|
| AI-Compute Ecosystem Incentives | 4,000,000,000,000 | 40 % | 27 % | 0 | 24 | 10.80 % |
| Team & Core Contributors | 2,000,000,000,000 | 20 % | 15 % | 12 | 36 | 3.00 % |
| Ecosystem Development & Partners | 1,500,000,000,000 | 15 % | 20 % | 12 | 24 | 3.00 % |
| Community & Market Growth | 1,500,000,000,000 | 15 % | 25 % | 0 | 18 | 3.75 % |
| Foundation Reserves | 1,000,000,000,000 | 10 % | 10 % | 0 | 12 | 1.00 % |
Release Logic
The "Unlock % at TGE" tranche is liquid at the Token Generation Event; the remainder streams block-by-block over the stated vesting horizon. Vesting contracts are immutable; allocations can be revoked only if recipients breach contributor agreements.
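The release logic above can be expressed as a small schedule function: an initial unlock at TGE, nothing further during the cliff, then linear vesting over the stated horizon. This is month-granular for readability (the contracts stream block-by-block), and the assumption that vesting begins after the cliff is an interpretation, not a contract quote.

```typescript
// Unlocked balance for one allocation at a given month after TGE.
function unlockedTokens(
  allocation: number,
  tgeUnlockPct: number,  // e.g. 0.25 for a 25 % TGE unlock
  cliffMonths: number,
  vestingMonths: number,
  monthsSinceTge: number
): number {
  const initial = allocation * tgeUnlockPct; // liquid at TGE
  if (monthsSinceTge < cliffMonths) return initial; // cliff: nothing streams
  // Linear streaming of the remainder after the cliff, capped at 100 %.
  const vested = Math.min(1, (monthsSinceTge - cliffMonths) / vestingMonths);
  return initial + (allocation - initial) * vested;
}
```

For example, the Community & Market Growth allocation (1.5 T tokens, 25 % TGE unlock, no cliff, 18-month vesting) is half-streamed at month nine and fully liquid at month eighteen.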
| Phase | Timeframe | Milestone | Target KPIs (end-phase) |
|---|---|---|---|
| 0. Private Alpha | 2025 Q3 | Internal cluster + early node-operator SDK | 20 GPU nodes · 0.5 PFLOPS sustained · Emissions-audit baseline |
| 1. Public Testnet | 2025 Q4 | Open RPC endpoint, faucet, on-chain scheduler V1 | 200 nodes · 5 PFLOPS · 5 t CO₂ offset |
| 2. Mainnet Launch | 2026 Q1 | Payment channels, zk-telemetry, security audit | 2 000 nodes · 50 PFLOPS · 50 t CO₂ offset |
| 3. Multi-Chain Liquidity Gateway | 2026 Q2 | Atomic NeuraLinker swaps (ETH, SOL, BSC) | 3 000 nodes · 80 PFLOPS · 100 t CO₂ offset |
| 4. DAO Transition | 2026 Q3 | Governance contracts, treasury on-chain | 5 000 nodes · 120 PFLOPS · 200 t CO₂ offset |
PhD in distributed systems; former core developer at the IOTA Foundation. Leads the design of NeuraLinker's on-chain scheduler and network architecture.
Ex-OpenAI researcher in reinforcement learning. Architects the model-sharding and large-scale training algorithms that power the platform's parallel compute layer.
Former Ethereum Foundation fellow. Implements zero-knowledge telemetry circuits and performs formal security audits of all smart-contract modules.
Previously design manager at Figma. Owns developer-tooling UX and creates the dashboard workflows that make job submission frictionless.
This compact, four-person squad has shipped open-source GPU-orchestration libraries adopted by Hugging Face and published peer-reviewed research on zk-proof systems. Their combined expertise in blockchain infrastructure, advanced AI research, and developer-centric product design powers NeuraLinker's mission to deliver verifiable, decentralized compute.