A decentralized network just trained a 72B-parameter AI model that beats Meta's LLaMA. Most people can't even figure out how to buy the token. That's the edge.
Templar (SN3) on Bittensor just completed the largest permissionless decentralized LLM training run in history — a 72B parameter model trained across 70+ nodes on commodity internet. The model beats LLaMA-2-70B on MMLU. The token trades on a DEX with extreme friction to buy. Market cap is ~$50M. The team has NeurIPS-accepted research, a full-stack AI pipeline, and upcoming catalysts. This is a friction trade: buy the friction, sell the news.
On March 10, 2026, a team called Covenant AI announced the completion of Covenant-72B — a 72-billion parameter large language model pre-trained entirely on decentralized infrastructure.
No data center. No central company. No permission needed. Just 70+ independent participants around the world, connected over standard 500 Mb/s internet, coordinating through a blockchain protocol on Bittensor's Subnet 3.
The model was trained on approximately 1.1 trillion tokens and scored 67.1 on MMLU zero-shot, beating Meta's LLaMA-2-70B (65.6) under identical test conditions. The weights are open source on Hugging Face under an Apache license. Anyone can verify it.
This isn't a whitepaper promise. It's not a roadmap slide. It's a downloadable model with verifiable benchmarks that you can run yourself right now.
For years, the AI × crypto narrative has been built on potential. Decentralized compute projects raised hundreds of millions promising to challenge Big Tech's monopoly on AI training. None of them delivered a model you could actually use.
Covenant-72B changes that. It's the first credible proof that permissionless, trustless, decentralized AI training works at scale. No whitelisting. No KYC. No approval process. Anyone with GPUs could join or leave the training run freely.
The technical innovation underneath is real. The headline number: roughly 6% communication overhead across the public internet. That's insanely efficient for a globally distributed training run.
Here's the data point that made me pay attention. According to Epoch AI, decentralized training compute has grown 600,000x since 2020 — roughly a 20x per year growth rate. Centralized frontier AI training compute grows at approximately 5x per year.
The gap is still massive. Templar is roughly 300x smaller than frontier data centers in effective throughput today. But the trajectory is what matters. If decentralized training continues growing 4x faster than centralized infrastructure, the gap narrows significantly within 2-3 years.
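The gap-closing arithmetic is worth doing explicitly. A minimal sketch, using only the article's own estimates (a 300x gap, ~20x/yr decentralized growth, ~5x/yr centralized growth):

```python
# Back-of-the-envelope: how fast a 300x compute gap closes if
# decentralized training grows ~20x/yr vs ~5x/yr for centralized.
# All inputs are the article's estimates, not measured data.
import math

gap = 300                    # current throughput gap (centralized / decentralized)
decentralized_growth = 20    # per year
centralized_growth = 5       # per year
relative_growth = decentralized_growth / centralized_growth  # 4x per year

# Years until parity: solve gap / relative_growth**t == 1
years_to_parity = math.log(gap) / math.log(relative_growth)
print(f"years to parity: {years_to_parity:.1f}")  # ≈ 4.1

for t in (1, 2, 3):
    print(f"after {t} yr: gap ≈ {gap / relative_growth**t:.0f}x")
```

Under these assumptions the gap shrinks to roughly 19x after two years and ~5x after three, which is what "narrows significantly within 2-3 years" cashes out to.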
SparseLoCo's compression techniques could theoretically enable training models 8x larger than current decentralized runs. Streaming DiLoCo could add another 10x reduction in bandwidth requirements. The ceiling for what's possible is much higher than what's been demonstrated so far.
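To see why compression matters at all, here is a rough bandwidth sketch in the DiLoCo/SparseLoCo style: sync infrequently, and transmit only a sparse top-k slice of the update when you do. Every constant below (sync interval, top-k fraction, index overhead) is an assumption for illustration, not Templar's published configuration:

```python
# Illustrative bandwidth math for decentralized training of a 72B model.
# Constants are assumed for illustration only.
params = 72e9
bytes_per_value = 2                              # fp16
naive_sync_gb = params * bytes_per_value / 1e9   # full gradient exchange
print(f"full sync: {naive_sync_gb:.0f} GB")      # 144 GB per exchange

# DiLoCo-style training syncs pseudo-gradients every H inner steps;
# SparseLoCo-style compression transmits only a top-k fraction of entries.
H = 50                 # inner steps between syncs (assumed)
topk = 0.01            # fraction of entries transmitted (assumed)
index_overhead = 2.0   # rough factor for indices / quantization metadata
per_sync_gb = naive_sync_gb * topk * index_overhead
effective_per_step_gb = per_sync_gb / H
print(f"per sync: {per_sync_gb:.2f} GB")
print(f"amortized: {effective_per_step_gb * 1000:.1f} MB per step")
```

With these made-up but plausible constants, 144 GB of raw gradient traffic per exchange collapses to a few GB per sync, amortized to tens of MB per training step, which is the regime where 500 Mb/s home internet stops being a blocker.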
And the broader AI crypto narrative is on fire. TAO is up 64% in 7 days. Grayscale expanded its TAO Trust. The first Bittensor halving in December 2025 cut emissions in half. NVIDIA posted record Q1 2026 revenue. AI tokens are leading the entire crypto market.
What most people miss is that Templar isn't a standalone project. It's one piece of a three-part decentralized AI pipeline built by the same team:
| Subnet | Function | Status |
|---|---|---|
| SN3 — Templar | Pre-training | Covenant-72B shipped |
| SN39 — Basilica | Decentralized compute | Live, infrastructure layer |
| SN81 — Grail | Post-training / RL | Inference-only, RL coming |
This is full vertical integration. Pre-training builds the base model. Basilica provides the GPU compute. Grail handles reinforcement learning to turn a base model into something useful — think the difference between GPT-4 base and ChatGPT.
No other decentralized AI project has this kind of coverage. And all three are built by the same team with proven execution — they just shipped the largest permissionless training run in history.
Here's where the trade gets interesting.
SN3 (Templar) is a Bittensor subnet token. To buy it, you need to:

1. Buy TAO on a centralized exchange.
2. Withdraw it to a self-custodied Bittensor wallet.
3. Stake that TAO into the SN3 pool to receive the subnet's alpha token.
This filters out 95% of potential buyers. The people hearing about Covenant-72B on Twitter aren't going to figure out this process. That friction is your edge.
The playbook is simple: buy the friction, sell the news. When the right catalyst hits — a major tech voice discussing Templar on a mainstream podcast, a Grayscale expansion, or a second training run — the flood of buyers who couldn't be bothered with the wallet setup suddenly have a reason to figure it out. That's your liquidity event.
SN3 market cap: ~$50M. ATH: ~$44. Current price: ~$14. TAO FDV: ~$6B. OpenAI valuation: $500B. If decentralized AI captures even a fraction of what centralized labs are valued at, the re-rating potential is enormous.
Templar isn't the only team working on decentralized training. But it's the only one that's fully permissionless and has shipped at this scale.
| Project | Model Scale | Participation | Funding |
|---|---|---|---|
| Templar / Covenant | 72B (shipped) | Permissionless | TAO emissions |
| Prime Intellect | 100B+ MoE (in progress) | Whitelisted | VC-backed |
| Nous / Psyche | 40B (Consilience) | Permissioned testnet | $65M raised |
| Pluralis Research | 8B | Curated | VC-backed |
Nous and Prime Intellect have significantly more venture capital. But they also gate participation. Templar's permissionless design is the whole point — it's what makes the Bittensor thesis work. Anyone can contribute if they deliver value. That's the crypto ethos actually applied to AI infrastructure.
I'm not going to pretend this is risk-free. Here's what could go wrong:
The model beats LLaMA-2-70B, which is a 2.5-year-old model. It's not competing with LLaMA-3, Qwen-72B, or anything frontier. The comparison is encouraging but needs context.
Liquidity is thin. The SN3/TAO pool has roughly $28.7M in total liquidity. Big moves in either direction happen fast, and exiting a large position cleanly isn't guaranteed.
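A rough illustration of what thin liquidity means in practice, assuming the pool behaves like a standard constant-product (x·y=k) AMM with the ~$28.7M split evenly between both sides. The actual SN3/TAO pool mechanics may differ, so treat this as a sketch:

```python
# Price-impact sketch for a thin constant-product pool (x * y = k).
# Assumes ~$28.7M total pool value split evenly across both sides;
# actual SN3/TAO pool mechanics may differ.
pool_usd = 28.7e6
reserve_usd = pool_usd / 2  # one side of the pool, in USD terms

def price_impact(buy_usd: float, reserve: float) -> float:
    """Fractional move in spot price after buying with buy_usd of quote.

    Under x*y=k, new spot price / old spot price = ((y + dy) / y)**2,
    where y is the quote-side reserve and dy the quote spent.
    """
    return ((reserve + buy_usd) / reserve) ** 2 - 1

for size in (100_000, 500_000, 2_000_000):
    print(f"${size:,} buy -> ~{price_impact(size, reserve_usd):.1%} price move")
```

Under these assumptions a $100k buy moves price ~1.4%, $500k moves it ~7%, and a $2M buy moves it ~30%. The same math runs in reverse on the way out, which is why exiting a large position cleanly isn't guaranteed.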
Templar is currently in a 100% burn period with no emissions being distributed. The transition to Crusades creates uncertainty about what drives token value in the interim.
The token already ran from ~$5 to ~$21 in a month before pulling back to ~$14. You're not buying the bottom — you're buying momentum with a thesis.
And the entire Bittensor ecosystem is experimental. Subnet economics are new, untested at scale, and could break in ways nobody anticipates.
Covenant-72B is the most significant technical achievement in crypto × AI to date. Not a roadmap. Not a promise. A downloadable, verifiable, benchmark-beating model trained by strangers on the internet.
The team has NeurIPS-accepted research, a full-stack AI pipeline across three subnets, and a track record of shipping. The token is friction-gated on a subnet DEX that most people can't navigate. The market cap is $50M in a world where OpenAI is worth $500B.
The thesis is simple: decentralized AI training just proved it works. The market hasn't fully priced that in yet because most people literally can't buy the token. When they can — or more likely, when the right voices force the conversation — re-rating happens.
Buy the friction. Sell the news.