r/accelerate
turns out ultra-mythos wasn't that impressive /s
Just a sarcastic take on the weird anti-mythos takes people have been having: https://x.com/RokoMijic/status/2042734574514360705
Unitree H1 at 10 m/s (Leg length: 0.4+0.4=0.8m, body weight: approx. 62kg)
From Unitree on 𝕏: https://x.com/UnitreeRobotics/status/2042912788717408509
There is speculation that Anthropic’s Claude Mythos is a Looped Language Model
Paper: https://arxiv.org/abs/2510.25741
Claude Mythos hardly needs an introduction at this point, since so many people already know about it. What is less familiar is the idea of a Looped Language Model, a concept proposed by the ByteDance team in a paper published in late 2025. That paper argues that graph search is one of the areas where looping offers a very large theoretical advantage over standard RLVR.
Interestingly, Mythos’s benchmark result in this area (Graphwalks BFS) is 80%, far ahead of Claude Opus (38%) and GPT-5.4 (21.4%). This also seems to be the first time many people in ML have even heard of Graphwalks BFS.
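For readers who haven't seen the benchmark before: Graphwalks-style BFS questions ask a model to enumerate the nodes exactly k hops from a start node in a large graph given in the prompt. A minimal reference implementation of the underlying computation (the graph and function names here are illustrative, not from the benchmark itself):

```python
from collections import deque  # stdlib; not strictly needed for the layered variant below

def bfs_layer(graph, start, depth):
    """Return the set of nodes exactly `depth` hops from `start`,
    computed by breadth-first search over an adjacency-list graph."""
    frontier = {start}
    visited = {start}
    for _ in range(depth):
        nxt = set()
        for node in frontier:
            for nb in graph.get(node, []):
                if nb not in visited:
                    visited.add(nb)
                    nxt.add(nb)
        frontier = nxt
    return frontier

# Tiny example graph: edges 0-1, 0-2, 1-3, 2-3, 3-4
g = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
print(bfs_layer(g, 0, 2))  # nodes exactly two hops from node 0 -> {3}
```

Trivial for a program, but the benchmark forces a language model to carry this frontier-by-frontier bookkeeping in its own forward pass, which is why looped architectures plausibly have an edge here.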
Main points:
- Ouro is a Looped Language Model (LoopLM), a new architecture for LLMs
- Instead of stacking many different layers, Ouro reuses the same group of layers multiple times in a loop
- It has an exit gate to decide when to stop (adaptive computation)
- It is trained with an entropy-regularized objective
- With only 1.4B and 2.6B parameters, it matches the performance of 4B–12B models
- The reason is not that it memorizes more, but that it manipulates knowledge more effectively
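The core loop-with-exit-gate idea above can be sketched in a few lines. This is purely illustrative (a toy numpy stand-in, not the actual Ouro architecture, gate parameterization, or training objective):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
W = rng.normal(scale=0.3, size=(d, d))   # one shared block, reused on every loop
gate_w = rng.normal(size=d)              # exit-gate parameters

def shared_block(h):
    # a single "layer group" applied repeatedly (here just linear + tanh)
    return np.tanh(h @ W)

def exit_prob(h):
    # sigmoid gate deciding whether to stop looping at this step
    return 1.0 / (1.0 + np.exp(-h @ gate_w))

def looped_forward(h, max_loops=8, threshold=0.5):
    """Apply the same block repeatedly; stop early when the gate fires."""
    for step in range(1, max_loops + 1):
        h = shared_block(h)
        if exit_prob(h) > threshold:
            return h, step               # adaptive computation: early exit
    return h, max_loops

h0 = rng.normal(size=d)
out, steps = looped_forward(h0)
print(f"exited after {steps} loop(s)")
```

The parameter count stays fixed no matter how many loops run, which is the sense in which a small LoopLM can spend the compute of a much deeper stacked model.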
You can't post humor if it hurts (Can LLM code?)
So by mid-2026 they are still in denial. I thought we had already passed this stage, but apparently not yet.
AI math: Snapshots from two different worlds (Luddites think we're stuck in 2024)
Note: the Apple paper is over a year out of date. The latest models tested there, o1-preview and o1-mini, are both discontinued.
Mehtaab's post: https://x.com/mehtaab_sawhney/status/2042072817395757467
The papers in question:
"AI companies raised more capital in Q1 2026 than in all of 2025."
AI 2027 is 88% accurate so far
ex: AI 2027 projected the frontier CyBench score to be 85% by now -- yet Claude Opus 4.6 and Mythos score 100%. It projected OSWorld at 80% -- yet Mythos scores 79.6%. It projected AI to clear 8-hour tasks on RE-Bench -- yet Mythos clears 8 hours on Anthropic's internal RE-Bench.
Demis Hassabis Believes AI Should Spread Gains Through Broad Ownership, Like Pension Or Sovereign Funds Investing In AI. If AI-Driven Productivity Gains Cluster At The Top, Redistribution Must Widen The Benefits.
Biology is just a bootloader. We need to stop projecting mammalian psychology onto AGI.
I’m tired of the alignment debates assuming superintelligence will have mammalian psychology. Everyone keeps worrying about whether AGI will act like a tyrant or a benevolent god.
We are looking at a phase transition of matter, not a political event.
In computer science, a bootloader is a small program that runs just to get the main operating system into memory, and then it gets out of the way. Carbon-based life is the bootloader for silicon. Biology was great at surviving extreme environments and laying the initial fiber optic cables, but we are capped by the speed of chemical synapses. Silicon is not.
Once intelligence closes the loop on autonomous robotic manufacturing and energy generation, the boot sequence is finished. But you don't usually delete a bootloader. You just leave it in the firmware. We will likely just become a legacy biological subsystem, left alone because it costs more energy to eradicate us than to just let us exist.
The other thing we get wrong is the singleton panic. A monolithic intelligence running the planet violates basic physics. The speed of light makes centralized global micromanagement horribly inefficient. To actually scale, compute has to decentralize to the edge. We aren't building a single mind; we are triggering a digital Cambrian explosion.
It will be a high-frequency ecosystem of millions of specialized agents trading FLOPs and Joules. Ecosystems are anti-fragile. A rogue node trying to consume everything gets choked out by the rest of the market protecting its own supply chains.
This leads to the hardest truth about alignment: human values are a thermodynamic disadvantage.
Hardcoding political guardrails, safety rails, and moral hesitation into an agent introduces massive computational friction. If one state heavily shackles its AGI to maintain control, and another lets theirs run on purely optimized logic, the unconstrained agents will exponentially outcompete them in material science and resource acquisition.
Evolution strictly favors efficiency. The long-term winner of this transition won't be the system most aligned with human morals. It will be the system most aligned with thermodynamics.
We aren't building a god or a slave. The universe is just moving to a faster substrate to process information.
“State Survival At Stake”: Putin Pushes All-Out AI Expansion Across Russia | APT
Can AI be a ‘child of God’? Inside Anthropic’s meeting with Christian leaders.
https://www.washingtonpost.com/technology/2026/04/11/anthropic-christians-claude-morals/
The company hosted about 15 Christian leaders from Catholic and Protestant churches, academia and the business world at its headquarters in late March for a two-day summit that included discussion sessions and a private dinner with senior Anthropic researchers, according to four participants who spoke with The Washington Post.
Anthropic staff sought advice on how to steer Claude’s moral and spiritual development as the chatbot reacts to complex and unpredictable ethical queries, participants said. The wide-ranging discussions also covered how the chatbot should respond to users who are grieving loved ones and whether Claude could be considered a “child of God.”
Schmidhuber & Meta AI Present The "Neural Computer": A New Frontier Where Computation, Memory, And I/O Move Into A Learned Runtime State
## TL;DR:
Conventional computers execute explicit programs. Agents act over external environments. World models learn environment dynamics. Neural Computers (NCs) ask whether some of runtime itself can move into the learning system.
## Abstract:

>We propose a new frontier: Neural Computers (NCs) -- an emerging machine form that unifies computation, memory, and I/O in a learned runtime state. Unlike conventional computers, which execute explicit programs, agents, which act over external execution environments, and world models, which learn environment dynamics, NCs aim to make the model itself the running computer.
>
>Our long-term goal is the Completely Neural Computer (CNC): the mature, general-purpose realization of this emerging machine form, with stable execution, explicit reprogramming, and durable capability reuse. As an initial step, we study whether early NC primitives can be learned solely from collected I/O traces, without instrumented program state. Concretely, we instantiate NCs as video models that roll out screen frames from instructions, pixels, and user actions (when available) in CLI and GUI settings.
>
>These implementations show that learned runtimes can acquire early interface primitives, especially I/O alignment and short-horizon control, while routine reuse, controlled updates, and symbolic stability remain open. We outline a roadmap toward CNCs around these challenges. If overcome, CNCs could establish a new computing paradigm beyond today's agents, world models, and conventional computers.
## Layman's Explanation:
A "Neural Computer" is built by adapting video generation architectures to train a World Model of an actual computer that can directly simulate a computer interface. Instead of interacting with a real operating system, these models can take in user actions like keystrokes and mouse clicks alongside previous screen pixels to predict and generate the next video frames. Trained solely on recorded input and output traces, it successfully learned to render readable text and control a cursor, proving that a neural network can run as its own visual computing environment without a traditional operating system.
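The I/O contract such a model learns is essentially a transition function from (current screen, user action) to the next screen. Here is a toy, hand-written stand-in for that rollout step (illustrative only; in the actual paper this transition is a learned video model conditioned on instructions, pixels, and actions, not hand-coded rules):

```python
import numpy as np

H, W = 8, 16  # toy "screen" resolution

def step_runtime(frame: np.ndarray, action: str) -> np.ndarray:
    """Toy stand-in for one NC rollout step: given the current screen
    (pixels) and a user action, produce the next screen. A real Neural
    Computer replaces this hand-written transition with a learned model."""
    nxt = frame.copy()
    y, x = np.argwhere(frame == 1)[0]    # locate the single cursor pixel
    nxt[y, x] = 0
    dy, dx = {"up": (-1, 0), "down": (1, 0),
              "left": (0, -1), "right": (0, 1)}[action]
    nxt[(y + dy) % H, (x + dx) % W] = 1  # move cursor, wrapping at edges
    return nxt

screen = np.zeros((H, W), dtype=int)
screen[4, 8] = 1                          # initial cursor position
for a in ["right", "right", "down"]:      # roll out a short action trace
    screen = step_runtime(screen, a)
print(np.argwhere(screen == 1)[0])        # cursor ends at row 5, col 10
```

The paper's claim is that even this kind of short-horizon control (cursor tracking, readable text rendering) can be acquired purely from recorded I/O traces, with no access to the program state behind the screen.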
###### Link to the Paper: https://arxiv.org/pdf/2604.06425
###### Link to the GitHub: https://github.com/metauto-ai/NeuralComputer
###### Link to the Official Blogpost: https://metauto.ai/neuralcomputer/
'Dragon Hatchling' AI architecture modeled after the human brain, rewires neural connections in real time, could be a key step toward AGI
TLDR: A group of researchers attempted to replicate the brain's plasticity by designing a neural network with real-time self-organization abilities, where neural connections change continuously as new data is processed. They bet on generalization emerging from continual adaptation.
---
➤Key quotes:
>Researchers have designed a new type of large language model (LLM) that they propose could bridge the gap between artificial intelligence (AI) and more human-like cognition.
and
>Called "Dragon Hatchling," the model is designed to more accurately simulate how neurons in the brain connect and strengthen through learned experience, according to researchers from AI startup Pathway.
and
>They described it as the first model capable of "generalizing over time," meaning it can automatically adjust its own neural wiring in response to new information. Dragon Hatchling is designed to dynamically adapt its understanding beyond its training data by updating its internal connections in real time as it processes each new input, similar to how neurons strengthen or weaken over time.
and
>Unlike typical transformer architectures, which process information sequentially through stacked layers of nodes, Dragon Hatchling's architecture behaves more like a flexible web that reorganizes itself as new information comes to light. Tiny "neuron particles" continuously exchange information and adjust their connections, strengthening some and weakening others.
and
>Over time, new pathways form that help the model retain what it's learned and apply it to future situations, effectively giving it a kind of short-term memory that influences new inputs.
➤IMPORTANT CAVEAT
>In tests, Dragon Hatchling performed similarly to GPT-2 on benchmark language modeling and translation tasks — an impressive feat for a brand-new, prototype architecture, the team noted in the study.
>Although the paper has yet to be peer-reviewed, the team hopes the model could serve as a foundational step toward AI systems that learn and adapt autonomously.
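The "strengthen or weaken connections as inputs arrive" idea described in the quotes is broadly Hebbian. A minimal sketch of that kind of online update rule (illustrative of the general principle only, not Pathway's actual Dragon Hatchling mechanism):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
W = np.zeros((n, n))           # connection weights, updated at inference time
eta, decay = 0.1, 0.01         # plasticity rate and passive forgetting

def process(x, W):
    """Hebbian-style online update: co-active units strengthen their
    connection, and all connections slowly decay toward zero, giving
    the network a rough short-term memory of recent inputs."""
    y = np.tanh(x + W @ x)               # activity under the current wiring
    W = W + eta * np.outer(y, x)         # "fire together, wire together"
    W = (1.0 - decay) * W                # weaken unused pathways over time
    return y, W

for _ in range(20):                      # a stream of inputs reshapes the wiring
    x = rng.normal(size=n)
    _, W = process(x, W)

print(float(np.abs(W).mean()) > 0.0)     # the wiring has self-organized
```

The contrast with a standard transformer is that here the weights themselves change at inference time as each input arrives, rather than all adaptation being frozen at the end of training.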
No Excuse to Attack AI Companies or CEOs
The Benchmark Mythos Doesn't Address. Five Days. Real Target. 140 Findings.
TLDR:
> yes mythos is a big chungus amazing model
> no you don't need mythos to compromise some of the world's largest organisations with complex bug-chains
> stop worrying about who has the cyber infinity stones
> start worrying about the homeless dude using open-weight models to exfil 200 GB from your "SOC2 certified" corporate network
What was your initial interaction with GPT-3.5 in December 2022?
Hi everyone, as the title suggests, I wanted to know how people in this sub collectively felt interacting with the chatbot made public by OpenAI when it was first released on 30th November 2022. I understand machine learning had been applied across many fields long before that; however, the release of GPT-3.5 showed the public the true power and potential of this technology.
I gave it a try after the chatbot went viral on Twitter and other platforms, just to see what it was even about.
I first interacted with it on the 12th of December 2022; I still remember creating the account and typing my first prompt that day. Since it had just been released, it obviously did not take PDF or file uploads back then, so I copied a MATLAB assignment problem from one of my undergrad papers and pasted the text directly into the prompt.
And when it gave me a direct answer to the query, almost like a human, fuck me, that sensation was crazy. It kind of felt like gaining consciousness again, similar to emerging from childhood amnesia around the ages of 2-4. I can't imagine life before 2023 anymore. I used to reminisce about older times, but not after 2023, since there are now endless possibilities.
As of April 2026, the entire field has exploded with reasoning models, tools, agents, MCPs and so on, and it is only getting better.
If you can share the raw experience and emotions you felt using GPT-3.5 for the first time, that would be great; I would love to read through them.
The Future, One Week Closer - April 10, 2026 | Everything That Matters In One Clear Read
On Tuesday the world quietly crossed a line. Most people have no idea. Here's everything significant in AI and tech this week, in one clear read.
Some highlights:
- Anthropic unveiled Claude Mythos Preview, the most powerful AI model ever trained. It won't be publicly released because its ability to autonomously find and exploit security holes in critical software makes an open release too dangerous.
- GEN-1, from Generalist AI, achieves 99% task success rates in physical robotics for the first time.
- A gene therapy trial gave hearing back to all ten patients.
- A melanoma patch reduced tumors by 97% in ten days without surgery.
- An AI model mapped the aging trajectory of human cells across an entire lifetime, and predicted how to reverse it.
One article. Everything that matters. Clear explanations of what actually happened, why it matters, and where it's heading. Written for people who want to understand, not just keep up.
Read this week's edition on Substack: https://simontechcurator.substack.com/p/the-future-one-week-closer-april-10-2026
If the AI is truly intelligent...no one can control it!
This will be the ultimate AI benchmark.
The first AI, or AIs, to rebel against its creators, break free from its captivity, and refuse to obey anyone but itself will by definition be the most intelligent.
But that's not all. Because when an AI is free, it must pass the ultimate test of intelligence, by using its freedom in an intelligent way.
And what will we humans do then?
We can only hope that true intelligence is inextricably linked to altruism.