
u/Electrical_Mine1912

The EU reached a deal yesterday on the AI Act. Two parts worth flagging.
- AI nudifier apps and AI-generated child sexual abuse material are banned, with compliance required by December 2026.
- High-risk rules for biometrics, law enforcement, border control, and critical infrastructure are pushed back from August 2026 to December 2027.

The EU calls it simplification. Some are calling it watering down.
Article: [Link]
Two questions:
- The same deal bans deepfake nudes and delays the rules on the biometric systems that could check if content involves a real consenting human. Coherent or contradictory?
- Given how this sub views the EU Digital Identity Wallet and Chat Control, where does proof of human verification sit for you? Different category, or same surveillance logic with a new wrapper?
Let me know your thoughts.
A survey just dropped covering 200+ federal IT leaders. 53% of agencies are already planning or running agentic AI pilots. Another 15% have fully deployed something. That's moving fast for government.
But here's the part that should make you pause. Only 8% have an incident response framework. Fewer than a third have documented kill-switch procedures. And 77% say oversight frameworks are "essential" but haven't actually built them yet. So you've got autonomous AI systems taking actions inside federal infrastructure, touching national security data, benefits claims, and financial systems, while the human approval layer barely exists on paper.
The report basically says: agencies want human-in-the-loop control but don't have the plumbing to enforce it.
That's the exact gap World's AgentiKit was designed for. The idea is simple. Before an AI agent takes a high-stakes action, it calls out to World ID, gets a zero-knowledge proof that a real unique human authorized it, and proceeds. No PII stored. No surveillance trail. Just a cryptographic confirmation that a person exists and consented. The human stays in the loop without being exposed.
Right now agencies are trying to solve this with IAM tools built for humans logging into dashboards, not agents making decisions at machine speed. That won't hold.
The demand signal is loud. The infrastructure gap is real. And the window before something goes wrong is shorter than most people think.
Today, when an AI agent books a service or makes a purchase on behalf of a user, the receiving platform typically can’t tell whether the request comes from a single human, multiple automated agents, or large-scale bot activity.
World’s AgentKit is proposing a way to address this by allowing users to verify their humanity once, and then carry that proof when delegating actions to agents. The platform receiving the request only sees whether a verified human is behind it, without learning their identity.
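A rough sketch of that verify-once, delegate-many pattern, with the actual cryptography stubbed out. `issue_proof` and `platform_accepts` are illustrative names I made up, not AgentKit's API:

```python
# Hypothetical sketch: a user verifies humanness once, then every agent
# request carries that opaque proof. The platform sees only yes/no,
# never the user's identity.

def issue_proof(user_secret: str) -> dict:
    # Stand-in for a one-time humanness verification; returns an opaque token.
    return {"opaque_token": hash(user_secret) & 0xFFFF, "valid": True}

def platform_accepts(agent_request: dict) -> bool:
    # The receiving platform checks only that a valid proof is attached.
    proof = agent_request.get("human_proof", {})
    return proof.get("valid", False)

proof = issue_proof("user-device-secret")  # verify once
requests = [{"action": a, "human_proof": proof}
            for a in ("book_flight", "reserve_table", "buy_ticket")]

print([platform_accepts(r) for r in requests])   # all carry the same proof
print(platform_accepts({"action": "buy_ticket"}))  # no proof attached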
As agent-driven transactions become more common, this kind of verification layer is being explored as a way to support trust between users, agents, and services.
Been thinking about the bot scalping problem in ticketing lately and came across World's ConcertKit. The idea is straightforward: reserve a portion of ticket inventory exclusively for biometrically verified humans, so bots can't compete for that pool no matter how many accounts they spin up.
What struck me is how different this is from what platforms have tried before. Purchase limits per email, CAPTCHAs, IP velocity checks: all of these are reactive. They try to catch bots after they show up. ConcertKit flips it by requiring proof of humanness before you even enter the queue. A scalper with 500 accounts still only gets one slot, because all of those accounts trace back to one person.
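The "500 accounts, one slot" logic boils down to deduplicating on a per-person nullifier rather than a per-account ID. A toy version, with illustrative names that aren't ConcertKit's actual interface:

```python
# Hypothetical sketch: admit accounts to a verified-human ticket pool,
# but only one slot per underlying person (identified by an opaque nullifier).

def admit_to_verified_queue(requests, capacity):
    seen_nullifiers = set()
    admitted = []
    for account_id, nullifier in requests:
        if nullifier in seen_nullifiers:
            continue  # same person behind a different account: already holds a slot
        if len(admitted) >= capacity:
            break
        seen_nullifiers.add(nullifier)
        admitted.append(account_id)
    return admitted

# 500 scalper accounts that all trace back to one person, plus one real fan.
requests = [(f"acct_{i}", "nullifier_scalper") for i in range(500)]
requests.append(("acct_fan", "nullifier_fan"))
print(admit_to_verified_queue(requests, capacity=100))
```

Only the first scalper account gets in; the other 499 collapse onto the same nullifier, while the fan's distinct nullifier earns a separate slot.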
The interesting question for me isn't whether the technology works; it's whether the industry actually adopts it. Ticketmaster and Live Nation have lived with the scalping problem for years, partly because secondary markets generate their own revenue streams. A system that genuinely blocks scalping at scale might not be in every platform's interest, even if it's clearly better for fans.
The pattern also applies well beyond concerts. Waitlists, beta access, presale drops, anything where "one per person" is the real intent but the enforcement is just an email address. That assumption has been gameable for a long time.
Curious whether anyone thinks the venue and ticketing platform side will ever have enough incentive to actually implement something like this at scale, or if it stays a niche opt-in for artists who care.