r/EffectiveAltruism

"I indexed 383 hours of AI safety podcasts — here's what the Christiano-Yudkowsky debate actually looks like from inside the corpus"

Spotify searches episode titles. Listen Notes searches descriptions. Neither searches what's actually being said inside the conversation.

So I built something that does.

382 episodes. 74,566 searchable moments. Covers Dwarkesh Patel, Lex Fridman, 80,000 Hours, AXRP, The Inside View, Future of Life Institute, and more.

You type an idea, a name, a concept — it finds every moment across the entire corpus where that comes up, with a transcript snippet and a direct link to that exact timestamp on YouTube.
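The core idea — timestamped transcript segments plus a deep link — can be sketched like this (a minimal illustration with made-up data; the actual tool's index and matching method aren't described in the post, and likely use embedding search rather than substring match):

```python
# Hypothetical sketch: search timestamped transcript segments and build
# a YouTube link that jumps to the exact moment (?t=<seconds>).

def search_transcripts(segments, query):
    """Return every segment mentioning the query, with snippet + deep link."""
    hits = []
    for seg in segments:
        if query.lower() in seg["text"].lower():
            url = f"https://youtu.be/{seg['video_id']}?t={seg['start']}"
            hits.append({"snippet": seg["text"], "url": url})
    return hits

# Toy corpus: two segments from one (fictional) episode
segments = [
    {"video_id": "abc123", "start": 845,
     "text": "The scaling hypothesis says capabilities follow compute."},
    {"video_id": "abc123", "start": 1960,
     "text": "Timelines are a separate question entirely."},
]

results = search_transcripts(segments, "scaling hypothesis")
```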

Some searches worth trying:

- [scaling hypothesis](https://bardoonii-podsearch-alignment.hf.space?q=scaling+hypothesis) — Demis Hassabis and Dario Amodei both address this directly

- [AGI timelines](https://bardoonii-podsearch-alignment.hf.space?q=AGI+timelines) — Victoria Krakovna, Dwarkesh himself, Ege Erdil

- [deceptive alignment](https://bardoonii-podsearch-alignment.hf.space?q=deceptive+alignment) — Evan Hubinger across multiple lectures

- [Christiano Yudkowsky](https://bardoonii-podsearch-alignment.hf.space?q=Christiano+Yudkowsky) — every moment their disagreement comes up across 3 podcasts

Built by one person with zero programming background using AI tools. Free, no login required.

https://bardoonii-podsearch-alignment.hf.space

Curious what searches you'd try.

u/Downtown-Bowler5373 — 2 hours ago
Open-sourcing a decentralized AI training network with constitutional governance and economic alignment mechanisms


We are open-sourcing Autonet on April 6: a framework for decentralized AI training, inference, and governance where alignment happens through economic mechanism design rather than centralized oversight.

The core thesis: AI alignment is an economic coordination problem. The question is not how to constrain AI, but how to build systems where aligned behavior is the profitable strategy. Autonet implements this through:

  1. Dynamic capability pricing: the network prices capabilities it lacks, creating market signals that steer training effort toward what is needed rather than what is popular. This prevents monoculture.

  2. Constitutional governance on-chain: core principles are stored on-chain and evaluated by LLM consensus. 95% quorum required for constitutional amendments.

  3. Cryptographic verification: commit-reveal pattern prevents cheating. Forced error injection tests coordinator honesty. Multi-coordinator consensus validates results.

  4. Federated training: multiple nodes train on local data, submit weight updates verified by consensus, aggregate via FedAvg.
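The FedAvg aggregation step in point 4 can be sketched as follows (illustrative only: real implementations average model tensors layer by layer, and Autonet's actual pipeline isn't shown in the post; here weights are plain lists and sample counts are hypothetical):

```python
# Sketch of FedAvg: average client weight vectors, weighted by how many
# local samples each node trained on.

def fedavg(updates):
    """updates: list of (weight_vector, num_local_samples) pairs."""
    total = sum(n for _, n in updates)
    dim = len(updates[0][0])
    return [
        sum(w[i] * n for w, n in updates) / total
        for i in range(dim)
    ]

# Three hypothetical nodes: (weights, number of local samples)
updates = [
    ([1.0, 2.0], 10),
    ([3.0, 0.0], 30),
    ([2.0, 2.0], 60),
]
avg = fedavg(updates)  # -> [2.2, 1.4]
```

Nodes with more local data pull the average harder, which is the standard FedAvg weighting.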

The motivation: AI development is consolidating around a few companies that control what gets built, how it is governed, and who benefits. We think the alternative is not regulation after the fact, but economic infrastructure that structurally distributes power.

9 years of on-chain governance and jurisdiction work went into this. Working code, smart contracts with tests passing, federated training pipeline.

Paper: https://github.com/autonet-code/whitepaper
Code: https://github.com/autonet-code
Website: https://autonet.computer
MIT License.

Happy to answer questions about the mechanism design, the federated training architecture, or the governance model.

u/EightRice — 18 hours ago

Fermi Poker: A multiplayer Fermi estimation quiz + poker game with integrated video chat

I previously built a Wordle-style game for practicing Fermi estimation questions, but felt it was neither emotionally engaging nor social enough.

In Fermi Poker you answer Fermi questions like "How many dentists work in the US?" with a range guess. There are multiple betting rounds, with hint reveals in between. Based on the new information, you should update your confidence and act accordingly, for example by folding or betting more.

You need at least one other person to play and there is a maximum of 8 players per game.

fermi.poker
u/daniel_dolores — 22 hours ago
AI alignment as economic mechanism design: why governance infrastructure may matter more than constraint


The dominant framing in AI safety treats alignment as a constraint problem: how do we restrict AI systems to behave as intended? I want to argue that alignment is better understood as an economic coordination problem, and that mechanism design offers tools the safety community has underexplored.

The core insight:

When multiple actors contribute to AI training, the question is not just "how do we make AI safe" but "how do we structure incentives so that self-interested behavior produces safe outcomes." This is precisely the domain of mechanism design.

We have built and are open-sourcing (April 6) a framework called Autonet that implements this:

  1. Verification without trust: Coordinators who evaluate training contributions are tested with injected forced errors. If they approve known-bad results, they lose their stake. This creates economic pressure for honest evaluation without requiring trust.

  2. Incentive alignment: The network dynamically pays more for capabilities it lacks, steering training effort toward what is collectively needed rather than what is individually profitable.

  3. Constitutional governance: Core safety principles are encoded on-chain and enforced automatically. Changing them requires 95% quorum. This creates a hard constraint that emerges from collective governance rather than being imposed top-down.

  4. Commit-reveal verification: Solvers commit solution hashes before ground truth is revealed. This prevents copying and creates a cryptographic record of honest independent work.
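Point 4's commit-reveal pattern can be sketched with stdlib hashing (a minimal illustration; the actual system presumably stores commitments on-chain, and the `commit`/`verify` helpers here are hypothetical names):

```python
# Commit-reveal sketch: publish H(nonce || solution) before ground truth
# is known, then reveal (nonce, solution) for anyone to verify.
import hashlib
import secrets

def commit(solution: str) -> tuple[str, str]:
    """Commit phase: return (public digest, secret nonce)."""
    nonce = secrets.token_hex(16)
    digest = hashlib.sha256((nonce + solution).encode()).hexdigest()
    return digest, nonce

def verify(digest: str, nonce: str, solution: str) -> bool:
    """Reveal phase: recompute the hash and compare to the commitment."""
    return hashlib.sha256((nonce + solution).encode()).hexdigest() == digest

digest, nonce = commit("answer=42")
honest = verify(digest, nonce, "answer=42")   # True: matches commitment
altered = verify(digest, nonce, "answer=43")  # False: answer was changed
```

The random nonce prevents dictionary attacks on low-entropy answers; without it, a copier could hash candidate answers and match them against published commitments.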

Why this matters for EA:

If you think AI risk is primarily about coordination failure between actors with misaligned incentives (companies racing, nations competing, researchers seeking fame), then the solution space includes mechanism design, not just technical alignment research. The two approaches are complementary: technical alignment makes individual AI systems safe; mechanism design makes the ecosystem of AI development safe.

Paper: github.com/autonet-code/whitepaper
Code: github.com/autonet-code (MIT License, drops April 6)

Happy to discuss the mechanism design choices, the relationship to existing alignment approaches, or how this connects to EA priorities.

u/EightRice — 17 hours ago