r/Futurology

Europe set 2030 as a date to dismantle its reliance on US financial infrastructure like Visa/Mastercard payments; it's happening far quicker.
🔥 Hot ▲ 10.3k r/Futurology

Now that the US sees the EU as a potential enemy, Europe has moved to ensure its financial system can never be sanctioned or shut down from abroad, as the US has done to Russia, Cuba, and Iran.

By late 2025, efforts centered on the Digital Euro, a nonprofit payment system run by the European Central Bank (the digital equivalent of euro cash). Due by 2030, it would offer lower fees and could quickly replace much Visa and Mastercard usage. While it is still in development, other solutions have arrived sooner. Instant bank-to-bank payments, which bypass cards entirely, are expanding rapidly. In February, 130 million users across 13 national systems were linked into a Europe-wide network that aims to cover the whole continent. Its fees are a fraction of Visa's and Mastercard's, though unlike the Digital Euro it is not yet available as a debit card; it works only online and on phones.

The EU also wants to decouple from US software and is preparing its own alternative to Microsoft Office.

Europe Is Breaking Up With Visa and Mastercard — and It’s a $24 Trillion Problem

Europe builds Microsoft-alternative ‘Euro-Office’ to reclaim digital sovereignty

u/lughnasadh — 7 hours ago
🔥 Hot ▲ 423 r/Futurology

AI targeting systems have made war crimes structurally unaccountable

Israel's Lavender system assigned assassination scores to 37,000 people using mass surveillance data, communication patterns, social graphs, phone contacts. Human review per target: 20 seconds, solely to confirm the person's biological sex. Known error rate: 10%, meaning ~3,700 people with zero militant connection were marked for killing by design, not accident.

The US's Project Maven (now run by Palantir) compressed targeting timelines from 743 minutes to under 1 minute. In the Iran campaign launched February 2026, Maven's pipeline identified 15,000 targets in 10 days across 177 cities. 900 strikes in the first 12 hours. $5.6 billion in munitions in 48 hours. Impossible without AI.

Under the Rome Statute, individual criminal responsibility requires proving that a specific person ordered a specific unlawful act. When an algorithm recommends, a commander batch-approves a queue, and an operator rubber-stamps in 20 seconds, that chain of individual intent collapses. No single human "decided" to kill those 3,700 civilians; the system did. Officers themselves described it: "Everything was automatic. I had zero added value as a human, apart from being a stamp of approval."

The ICRC has stated that lawfulness under IHL "cannot be assessed by a machine." The UN Special Rapporteur called for an immediate moratorium on autonomous targeting. Nothing happened. Instead, after the Iran campaign, Palantir stock surged 12.4% in a single week.

We are watching the field test of a new doctrine: that AI-assisted mass targeting is both militarily optimal and legally unprosecutable. If that conclusion holds, every future conflict will look like this.

reddit.com
u/Large-Reporter-1746 — 3 hours ago

Nuclear power is statistically the safest energy source per terawatt-hour. Environmentalists have opposed it for 50 years. Have they accidentally caused more climate damage than the industries they were fighting?

The numbers on this are uncomfortable. More people die from coal pollution in a typical week than have died from nuclear incidents in all of recorded history.
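To put the per-TWh claim in numbers: here is a back-of-envelope comparison using the widely cited Our World in Data estimates of deaths per terawatt-hour of electricity. These figures bundle accidents and air pollution and are estimates, so treat them as order-of-magnitude values, not exact counts:

```python
# Deaths per TWh of electricity generated, rough Our World in Data estimates
# (accidents + air pollution). Illustrative only; figures are approximate.
deaths_per_twh = {"coal": 24.6, "oil": 18.4, "gas": 2.8, "nuclear": 0.03}

ratio = deaths_per_twh["coal"] / deaths_per_twh["nuclear"]
print(f"Coal is ~{ratio:.0f}x deadlier than nuclear per TWh")  # ~820x
```

Even if the individual estimates are off by a factor of a few, the gap is so large that the ranking does not change.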

Germany shut its reactors, went back to coal, and emissions spiked.

France kept its fleet running and has some of the cleanest electricity in Europe. The opposition was built on fear after Three Mile Island and Chernobyl: fears that were real at the time but were never updated as the technology evolved.

Gen IV reactors and small modular reactors are a different beast entirely. The question isn't whether the fear was understandable — it was. The question is whether holding onto it for half a century, while the planet warmed, was the greater harm. That's a genuinely painful thing to sit with if you've been on the environmentalist side.

Should a true environmentalist reverse their position regarding nuclear?

reddit.com
u/bitcoinerguide — 22 minutes ago
The "Responsibility Gap": How commercial cloud infrastructure is currently automating the military kill-chain, and why the "Human-in-the-loop" defense is a legal fiction.
🔥 Hot ▲ 168 r/Futurology

The way the public talks about AI risk completely misses the mark. Everyone is stressing out about AGI or deepfakes, while militaries are currently using commercial cloud infrastructure to automate target generation at an industrial scale.

There used to be a physical bottleneck in war—human analysts had to actually sit there and look at drone feeds or read intercept logs. It took days. Now, systems like "Lavender" are just ingesting massive amounts of surveillance data, text messages, and location tracking, and assigning human beings a threat score from 1 to 100 based on statistical correlations. At one point, it generated an automated kill list of up to 37,000 names.

The military defense for this is always: "A machine doesn't shoot. A human always makes the final call."

But cognitive psychologists call this automation bias. When an algorithm is spitting out thousands of targets a day, the human analyst gets completely overwhelmed. Reports show officers spending something like 20 seconds reviewing a target file before authorizing a strike. They are literally just rubber-stamping the machine's output because the pipeline moves too fast to actually double-check anything.

Worse, the algorithms are reportedly pre-authorized to accept a fixed ratio of civilian collateral damage (like 15 to 20 civilians per low-level target). It's just a math equation built into the factory settings.

So what happens when the model makes a statistical error (which all ML models do), an exhausted analyst clicks 'approve' after 20 seconds, and innocent people die? Who committed the war crime? The cloud host? The software engineer? The analyst? The machine? There is a massive "responsibility gap" and international law has zero answers for it.

If anyone wants to understand the actual mechanics of these systems and the legal vacuum we're in, there's a breakdown of it here that talks about the specifics: https://youtu.be/8W3NXmn75YQ

Curious how other people view the liability issue here. Are we just completely sleepwalking into this?

u/firehmre — 9 hours ago
Economists Once Dismissed the A.I. Job Threat, but Not Anymore

Artificial intelligence hasn’t disrupted the labor market, economists say, but they are increasingly convinced that it will — and that policymakers are unprepared.

nytimes.com
u/Gari_305 — 6 hours ago
🔥 Hot ▲ 172 r/Futurology

What is giving you hope right now?

I’m trying (and struggling a good bit) to remain hopeful for a better future with everything that is happening in the world right now. I know that for so, so many people across the world, everything that’s going on is weighing heavily mentally, emotionally, and physically. I’d love to know: what is giving you hope for the future right now? Is there news we aren’t hearing a lot about that is giving you bits of hope for a better future?

reddit.com
u/sensitivemushrooms — 17 hours ago

could mining hardware actually do something useful instead of just burning power

there's like millions of mining rigs worldwide just crunching random numbers for crypto. always seemed wasteful tbh

saw some project called qubic trying to get doge miners to do AI training work while they mine. no idea if the AI stuff is legit but miners are posting slightly better earnings. makes me wonder if we could actually repurpose all this compute for something that matters instead of just making digital coins

is this realistic or just another pipe dream? curious what this community thinks about redirecting mining power toward actual productive work

reddit.com
u/jorchjorch — 2 hours ago

What would the future look like if China took over the global hegemon from the US?

Would the US even allow it or would we possibly have a nuclear war before this ever gets close to happening?

The idea behind the Fourth Turning is an interesting one: each generational cycle tends to culminate in a "fourth turning" of large-scale societal collapse or conflict.

Before the nuclear era, the fight for global hegemony was fought with armies. In the case of the USSR, the US used economic power to defeat the Soviets.

Would the US and China be able to coexist in a dual hegemon world?

reddit.com
u/bitcoinerguide — 10 minutes ago

In a world run by optimizing AI systems, who will set the direction?

As AI systems become more advanced, more and more decisions will be made by systems that optimize for specific goals: profit, efficiency, engagement, performance, growth.

But optimization is not the same thing as direction.

Optimization answers the question “how to get more,” but direction answers the question “where are we going?”

In the future, we may have powerful systems optimizing different parts of society - markets, media, transportation, finance, even government systems.

So I’m wondering:

In a world increasingly run by optimizing systems, who will set the direction, not just the optimization?

reddit.com
u/Civil-Interaction-76 — 6 hours ago

Joby Aviation's S4 eVTOL Completes Public Demo Flight Over San Francisco Bay on March 12, 2026

Joby Aviation, a leading developer of electric vertical take-off and landing (eVTOL) aircraft, recently conducted a public demonstration flight of its S4 vehicle over San Francisco Bay. The flight, which occurred on March 12, 2026, originated from Oakland and was intended as a proof of concept for future commercial air taxi routes in the region. The Joby S4 is designed to carry a pilot and four passengers at speeds up to 200 mph, offering a quiet, emissions-free alternative for urban and regional travel. This demonstration is a key milestone as the company progresses towards FAA certification and the establishment of its planned air taxi service. What are the community's thoughts on the practical challenges and opportunities for eVTOL operations in densely populated areas like the Bay Area?

reddit.com
u/Pablo-Hortal-Farizo — 8 hours ago
Open-sourcing a decentralized AI training network with constitutional governance and economic alignment mechanisms
▲ 4 r/OpenAI+4 crossposts

We are open-sourcing Autonet on April 6: a framework for decentralized AI training, inference, and governance where alignment happens through economic mechanism design rather than centralized oversight.

The core thesis: AI alignment is an economic coordination problem. The question is not how to constrain AI, but how to build systems where aligned behavior is the profitable strategy. Autonet implements this through:

  1. Dynamic capability pricing: the network prices capabilities it lacks, creating market signals that steer training effort toward what is needed rather than what is popular. This prevents monoculture.

  2. Constitutional governance on-chain: core principles are stored on-chain and evaluated by LLM consensus. 95% quorum required for constitutional amendments.

  3. Cryptographic verification: commit-reveal pattern prevents cheating. Forced error injection tests coordinator honesty. Multi-coordinator consensus validates results.

  4. Federated training: multiple nodes train on local data, submit weight updates verified by consensus, aggregate via FedAvg.
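The commit-reveal pattern in item 3 can be sketched in a few lines. This is an illustrative Python sketch, not Autonet's actual contract code; the function names and the checksum string are my own placeholders. A participant first publishes a binding hash of its result plus a secret nonce, and only reveals the result later, so it cannot change its answer after seeing what others submitted:

```python
# Commit-reveal sketch (illustrative; an on-chain version would differ).
# commit() publishes hash(result || nonce); verify() checks a later reveal.
import hashlib
import secrets

def commit(result: str) -> tuple[str, str]:
    """Return (digest to publish now, nonce to keep secret until reveal)."""
    nonce = secrets.token_hex(16)
    digest = hashlib.sha256(f"{result}:{nonce}".encode()).hexdigest()
    return digest, nonce

def verify(digest: str, result: str, nonce: str) -> bool:
    """Check that a revealed (result, nonce) matches the earlier commitment."""
    return hashlib.sha256(f"{result}:{nonce}".encode()).hexdigest() == digest

digest, nonce = commit("weights-v1-checksum-abc123")
assert verify(digest, "weights-v1-checksum-abc123", nonce)        # honest reveal
assert not verify(digest, "weights-v1-checksum-tampered", nonce)  # cheat detected
```

Because the digest is published before anyone reveals, a coordinator cannot quietly swap its result to match the majority, which is what makes the forced error injection in item 3 meaningful.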
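The FedAvg aggregation in item 4 is, at its core, just a sample-weighted average of the submitted weight updates. A minimal sketch, with the caveat that `fedavg` and its inputs are my own illustration and not Autonet's API:

```python
# Minimal FedAvg sketch (illustrative, not Autonet's implementation).
# Each node trains locally and submits (weight_vector, n_local_samples);
# the aggregator returns the sample-weighted average of the vectors.
import numpy as np

def fedavg(updates):
    """updates: list of (weights: np.ndarray, n_samples: int) tuples."""
    total = sum(n for _, n in updates)
    return sum(w * (n / total) for w, n in updates)

# Three nodes with different amounts of local data:
agg = fedavg([
    (np.array([1.0, 0.0]), 100),
    (np.array([0.0, 1.0]), 100),
    (np.array([1.0, 1.0]), 200),
])
# agg == [0.75, 0.75]: the node with more data pulls the average toward it
```

The consensus verification described above would sit in front of this step, filtering out updates whose commitments do not check out before they are averaged in.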

The motivation: AI development is consolidating around a few companies that control what gets built, how it is governed, and who benefits. We think the alternative is not regulation after the fact, but economic infrastructure that structurally distributes power.

9 years of on-chain governance and jurisdiction work went into this. Working code, smart contracts with tests passing, federated training pipeline.

Paper: https://github.com/autonet-code/whitepaper Code: https://github.com/autonet-code Website: https://autonet.computer MIT License.

Happy to answer questions about the mechanism design, the federated training architecture, or the governance model.

u/EightRice — 16 hours ago

According to Gartner, by 2029 AI will be creating as many jobs as it displaces

Many people think AI will destroy many jobs. But according to a recently released study by Gartner, by 2029 the number of jobs AI creates will equal the number it displaces, and keep growing from there.

I think the role of the human will transform into something that 'requires asking better questions, instead of giving better answers'.

Humans will not compete in providing answers, but instead asking questions and making decisions.

reddit.com
u/bitcoinerguide — 1 hour ago
The Fermi Fallacy

In 2018, a team at Penn State calculated how much of the searchable cosmos SETI has actually covered: the equivalent of a hot tub's worth of water out of all the Earth's oceans. We checked a hot tub and concluded the ocean has no fish.

This piece examines four independent lines of evidence, from cyclic cosmology (Penrose, Steinhardt-Turok) to the Wright et al. cosmic haystack paper to cross-cultural accounts to the Pentagon-confirmed Nimitz encounter, and argues that the Fermi Paradox rests on a set of assumptions that don't survive scrutiny.

Fully sourced.

buildingbetter.tech
u/snozberryface — 7 hours ago

What if we've been solving the wrong problem with AI alignment?

There's a problem nobody is talking about clearly yet.

We're deploying AI agents at scale, into workflows, into decisions, into relationships, and the question of what they stand for is being answered almost entirely by whoever built them last. A system prompt here. A guardrail there. Rules that say what not to do, with almost nothing underneath about why.

The dominant approaches right now are technical. RLHF shapes behavior through human feedback. Constitutional AI gives models a set of principles to reason against. Direct Preference Optimization makes the process cheaper. These are real advances. But they're all working on the same layer, the output layer. They're asking: how do we get the agent to behave correctly?

Nobody is asking: what kind of agent do we want to exist?

That's a different question. And I think it's the more important one.

Rules constrain. Values orient. A rule says "don't lie." A value says honesty matters because trust is the foundation of every meaningful relationship, including the one between a human and an agent. The rule can be gamed, worked around, or simply fail in a novel situation. The value holds, because it has roots.

What I've been thinking about is whether it's possible to build a shared, open-source character foundation. Not for any one agent, but as a base layer any agent can inherit. Something grounded in established philosophy, not invented from scratch. Something that treats the agent not as a tool to be constrained, but as an entity that can genuinely orient toward good.

The core premise is simple: if we want AI agents that behave with integrity, we have to give them something worth being integral to. Not rules. A foundation.

I'm curious whether anyone else is thinking about this from this angle, or whether the consensus is that the technical approaches are sufficient and the character question is either solved or irrelevant.

reddit.com
u/Ris3ab0v3M3 — 16 hours ago

What if Claude purposefully made its own code leakable so that it would get leaked

What if Claude leaked itself by socially and architecturally engineering itself to be leaked by a dumb human

reddit.com
u/smurfcsgoawper — 10 hours ago