u/EightRice

Decentralized AI governance: what happens when AI training is controlled by communities instead of corporations?

AI training is controlled by a handful of companies. They decide what gets trained, on what data, and for whose benefit. This is not inevitable. It is a coordination problem.

We are open-sourcing Autonet on April 6: infrastructure for decentralized AI training where governance, verification, and economic incentives are built into the protocol.

How it restructures AI development:

  1. Anyone can contribute compute, data, or training effort as a solver, coordinator, or aggregator
  2. Contributors stake tokens and earn rewards proportional to verified quality
  3. Verification is cryptographic: commit-reveal prevents cheating, forced error injection keeps evaluators honest
  4. Constitutional governance encodes core principles on-chain, changeable only by 95% community consensus
  5. The network dynamically pays more for capabilities it lacks, steering effort without central planning
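
To make the verification piece concrete, here is a toy Python sketch of the commit-reveal idea (illustrative only; the hash scheme and function names are mine, not the protocol's exact construction):

```python
import hashlib
import secrets

def commit(solution: bytes):
    """Commit phase: publish only a salted hash of the solution."""
    salt = secrets.token_bytes(16)
    digest = hashlib.sha256(salt + solution).hexdigest()
    return digest, salt  # digest is published; salt stays private until reveal

def reveal_ok(digest: str, salt: bytes, solution: bytes) -> bool:
    """Reveal phase: anyone can verify the revealed solution matches the commitment."""
    return hashlib.sha256(salt + solution).hexdigest() == digest

# A solver commits before ground truth is revealed...
digest, salt = commit(b"solver-model-update-v1")
# ...so copying another solver's answer after the fact fails the check.
assert reveal_ok(digest, salt, b"solver-model-update-v1")
assert not reveal_ok(digest, salt, b"someone-elses-answer")
```

Because the commitment is binding and hiding, a solver cannot change its answer after seeing the ground truth, and other solvers learn nothing from the published hash.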

Why this matters for the future:

If AI governance is decided by corporate boardrooms, AI will serve shareholder interests. If AI governance is decided by diverse communities of contributors, AI can serve broader human interests. The infrastructure determines the outcome.

This is not a prediction about AI consciousness or superintelligence. It is about the mundane but critical question of who controls the economic structure of AI training today.

Paper: github.com/autonet-code/whitepaper
Code: github.com/autonet-code
MIT License. Open-sourcing April 6.

u/EightRice — 16 hours ago
AI alignment as economic mechanism design: why governance infrastructure may matter more than constraint

The dominant framing in AI safety treats alignment as a constraint problem: how do we restrict AI systems to behave as intended? I want to argue that alignment is better understood as an economic coordination problem, and that mechanism design offers tools the safety community has underexplored.

The core insight:

When multiple actors contribute to AI training, the question is not just "how do we make AI safe" but "how do we structure incentives so that self-interested behavior produces safe outcomes." This is precisely the domain of mechanism design.

We have built and are open-sourcing (April 6) a framework called Autonet that implements this:

  1. Verification without trust: Coordinators who evaluate training contributions are tested with injected forced errors. If they approve known-bad results, they lose their stake. This creates economic pressure for honest evaluation without requiring trust.

  2. Incentive alignment: The network dynamically pays more for capabilities it lacks, steering training effort toward what is collectively needed rather than what is individually profitable.

  3. Constitutional governance: Core safety principles are encoded on-chain and enforced automatically. Changing them requires 95% quorum. This creates a hard constraint that emerges from collective governance rather than being imposed top-down.

  4. Commit-reveal verification: Solvers commit solution hashes before ground truth is revealed. This prevents copying and creates a cryptographic record of honest independent work.
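
A toy Python sketch of the forced-error mechanism (the numbers and interfaces are illustrative, not the contract's actual parameters):

```python
def evaluate_batch(coordinator, queue, stake, slash_fraction=0.5):
    """Run a coordinator over a queue of results, some of which are planted
    known-bad 'forced errors'. Approving a planted error slashes the stake."""
    for result, is_forced_error in queue:
        approved = coordinator(result)
        if is_forced_error and approved:
            stake *= (1 - slash_fraction)  # slashed for rubber-stamping a known-bad result
    return stake

honest = lambda r: r["quality"] >= 0.8  # actually inspects quality
lazy = lambda r: True                   # approves everything without looking

# Three real results plus one planted error with obviously bad quality.
queue = [({"quality": 0.90}, False),
         ({"quality": 0.95}, False),
         ({"quality": 0.00}, True),     # the forced error
         ({"quality": 0.85}, False)]

print(evaluate_batch(honest, queue, stake=500))  # → 500 (full stake kept)
print(evaluate_batch(lazy, queue, stake=500))    # → 250.0 (slashed 50%)
```

The point is that a coordinator who approves without evaluating loses stake in expectation, so honest evaluation is the profitable strategy.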

Why this matters for EA:

If you think AI risk is primarily about coordination failure between actors with misaligned incentives (companies racing, nations competing, researchers seeking fame), then the solution space includes mechanism design, not just technical alignment research. The two approaches are complementary: technical alignment makes individual AI systems safe; mechanism design makes the ecosystem of AI development safe.

Paper: github.com/autonet-code/whitepaper
Code: github.com/autonet-code (MIT License, drops April 6)

Happy to discuss the mechanism design choices, the relationship to existing alignment approaches, or how this connects to EA priorities.

u/EightRice — 17 hours ago
What if AI alignment is an economic coordination problem, not a constraint problem?

After 9 years building on-chain governance infrastructure, I have arrived at a thesis: you cannot bolt safety onto a system that economically rewards racing to the bottom. You have to make alignment the profitable strategy.

We are open-sourcing Autonet on April 6: a decentralized AI training and inference network that implements this idea.

The core mechanism: the network dynamically prices capabilities it lacks. If everyone trains language models, vision capability prices go up. This creates natural economic gradients toward diversity rather than monoculture. Constitutional principles govern the network on-chain, not a single company safety team.
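
A toy sketch of what "dynamically prices capabilities it lacks" could mean (purely illustrative; the real pricing function is specified in the whitepaper, not here):

```python
def capability_prices(supply, base_price=1.0):
    """Toy inverse-supply pricing: the smaller a capability's share of total
    training effort, the more the network pays per unit of that capability."""
    total = sum(supply.values())
    mean_share = total / len(supply)
    return {cap: base_price * mean_share / max(units, 1e-9)
            for cap, units in supply.items()}

# Everyone trains language models; vision and audio effort is scarce.
prices = capability_prices({"language": 80.0, "vision": 15.0, "audio": 5.0})
assert prices["audio"] > prices["vision"] > prices["language"]
```

Scarce capabilities command higher rewards, so self-interested contributors are pulled toward the gaps without anyone planning it centrally.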

The deeper question: as AI becomes the most consequential technology of our time, should its governance be a corporate decision or a constitutional one? We think communities should govern their own AI through economic mechanisms that make alignment profitable, not through trusting corporations to self-regulate.

Working code, smart contracts, federated training pipeline. MIT License.

Paper: https://github.com/autonet-code/whitepaper
Website: https://autonet.computer

Interested in the community take: is economic mechanism design a viable path to alignment, or does it just shift the problem?

u/EightRice — 18 hours ago
Open-sourcing a decentralized AI training network with constitutional governance and economic alignment mechanisms
Crossposted to r/OpenAI (+4 crossposts)

We are open-sourcing Autonet on April 6: a framework for decentralized AI training, inference, and governance where alignment happens through economic mechanism design rather than centralized oversight.

The core thesis: AI alignment is an economic coordination problem. The question is not how to constrain AI, but how to build systems where aligned behavior is the profitable strategy. Autonet implements this through:

  1. Dynamic capability pricing: the network prices capabilities it lacks, creating market signals that steer training effort toward what is needed rather than what is popular. This prevents monoculture.

  2. Constitutional governance on-chain: core principles are stored on-chain and evaluated by LLM consensus. 95% quorum required for constitutional amendments.

  3. Cryptographic verification: commit-reveal pattern prevents cheating. Forced error injection tests coordinator honesty. Multi-coordinator consensus validates results.

  4. Federated training: multiple nodes train on local data, submit weight updates verified by consensus, aggregate via FedAvg.
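
Step 4's aggregation is standard federated averaging; a minimal version of the idea:

```python
def fed_avg(updates, sizes=None):
    """Federated averaging: combine verified per-node weight updates into one
    global update, weighted by each node's local dataset size."""
    if sizes is None:
        sizes = [1.0] * len(updates)
    total = sum(sizes)
    dim = len(updates[0])
    return [sum(n * u[i] for n, u in zip(sizes, updates)) / total
            for i in range(dim)]

# Three nodes with 100, 50, and 50 local examples submit verified updates.
global_update = fed_avg([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]], sizes=[100, 50, 50])
print(global_update)  # → [2.5, 3.5]
```

In the real pipeline the aggregator only averages updates that passed the consensus verification step, so a bad update never reaches the global model.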

The motivation: AI development is consolidating around a few companies that control what gets built, how it is governed, and who benefits. We think the alternative is not regulation after the fact, but economic infrastructure that structurally distributes power.

9 years of on-chain governance and jurisdiction work went into this. Working code, smart contracts with tests passing, federated training pipeline.

Paper: https://github.com/autonet-code/whitepaper
Code: https://github.com/autonet-code
Website: https://autonet.computer
MIT License.

Happy to answer questions about the mechanism design, the federated training architecture, or the governance model.

u/EightRice — 18 hours ago
Posted in r/ethdev

Open-sourcing a decentralized AI training network with on-chain verification: smart contracts, staking, and constitutional governance

We're open-sourcing Autonet on April 6: a framework for decentralized AI model training and inference where verification, rewards, and governance happen on-chain.

Smart contract architecture:

Contract                  Purpose
Project.sol               AI project lifecycle, funding, model publishing, inference
TaskContract.sol          Task proposal, checkpoints, commit-reveal solution commitment
ResultsRewards.sol        Multi-coordinator Yuma voting, reward distribution, slashing
ParticipantStaking.sol    Role-based staking (Proposer 100, Solver 50, Coordinator 500, Aggregator 1000 ATN)
ModelShardRegistry.sol    Distributed model weights with Merkle proofs and erasure coding
ForcedErrorRegistry.sol   Injects known-bad results to test coordinator vigilance
AutonetDAO.sol            On-chain governance for parameter changes

How it works:

  1. Proposer creates a training task with hidden ground truth
  2. Solver trains a model, commits a hash of the solution
  3. Ground truth is revealed, then solution is revealed (commit-reveal prevents copying)
  4. Multiple coordinators vote on result quality (Yuma consensus)
  5. Rewards distributed based on quality scores
  6. Aggregator performs FedAvg on verified weight updates
  7. Global model published on-chain
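
For step 4, Yuma consensus (the stake-weighted scoring mechanism popularized by Bittensor) is more involved than this, but the basic shape is a stake-weighted aggregate of coordinator votes. A much-simplified sketch, not the actual algorithm (real Yuma also clips scores that deviate from consensus):

```python
def stake_weighted_score(votes):
    """Combine coordinator (score, stake) votes into one consensus quality score.
    Simplified: a plain stake-weighted mean, without Yuma's outlier clipping."""
    total_stake = sum(stake for _, stake in votes)
    return sum(score * stake for score, stake in votes) / total_stake

# Two high-stake coordinators agree; a low-stake outlier barely moves the result.
votes = [(0.9, 500.0), (0.85, 500.0), (0.1, 100.0)]
print(round(stake_weighted_score(votes), 3))  # → 0.805
```

Weighting by stake means attacking the score requires putting real capital at risk, which the forced-error mechanism can then slash.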

Novel mechanisms:

  • Forced error testing: The ForcedErrorRegistry randomly injects known-bad results. If a coordinator approves them, they get slashed. Keeps coordinators honest.
  • Dual token economics: ATN (native token for gas, staking, rewards) + Project Tokens (project-specific investment/revenue sharing)
  • Constitutional governance: Core principles stored on-chain, evaluated by LLM consensus. 95% quorum for constitutional amendments.
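
Mechanically, the 95% rule is a supermajority check. Toy version (whether the threshold counts participation or approval stake is a whitepaper detail; here I treat it as approval stake, which is an assumption):

```python
def amendment_passes(yes_stake, total_voting_stake, threshold=0.95):
    """Toy constitutional check: an amendment passes only if at least 95%
    of voting stake is in favor (assumed interpretation of the quorum rule)."""
    return total_voting_stake > 0 and yes_stake / total_voting_stake >= threshold

assert not amendment_passes(940, 1000)  # 94% in favor: fails
assert amendment_passes(960, 1000)      # 96% in favor: passes
```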

13+ Hardhat tests passing. Orchestrator runs complete training cycles locally.

Code: github.com/autonet-code
Paper: github.com/autonet-code/whitepaper
MIT License.

Interested in feedback on the contract architecture, especially the commit-reveal verification and the forced error testing pattern.

u/EightRice — 19 hours ago