r/singularity

Humanoid robots are actively training

These images show one of China’s massive training labs, but the field has already moved far beyond setups like this, training on video alone.

u/Distinct-Question-16 — 7 hours ago

I got tired of real-life Netrunners scanning my servers, so I coded a working version of "The Blackwall" to trap them

Hey chooms!

I play way too much Cyberpunk and work in software, so I decided to build a real-world piece of ICE (Intrusion Countermeasures Electronics) inspired directly by the Blackwall.

In reality, servers get scanned constantly by rogue botnets and hackers. Normally, a standard firewall just drops their connection. But my Blackwall acts as active, hostile ICE.

It intercepts the connection at the lowest system level. If it detects malicious behavior, it doesn't just block them - it throws them into an AI-generated construct.

The attacker thinks they've successfully hacked the server, but they are actually sitting in a fake terminal controlled by a local AI. The AI hallucinates fake files, passwords, and directories on the fly, streaming the responses back incredibly slowly to trap the hacker and waste their time while silently logging everything they do.
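To give a feel for the mechanism, here's a toy sketch of the two tricks (hallucinated replies plus deliberately slow streaming). This is an illustration, not the actual project code; the fake filesystem contents and names are invented:

```python
import time

# Toy stand-in for the AI: a canned table of hallucinated replies.
# The real thing generates these on the fly with a local model.
FAKE_FS = {
    "ls": "backups  creds.txt  deploy.key",
    "cat creds.txt": "admin:hunter2",
}

session_log = []  # everything the attacker types gets recorded here

def fake_shell(cmd: str) -> str:
    """Log the attacker's command and return a plausible-looking reply."""
    session_log.append((time.time(), cmd))
    return FAKE_FS.get(cmd, f"bash: {cmd.split()[0]}: command not found")

def tarpit(reply: str, delay: float = 0.5):
    """Yield the reply one character at a time; the server sleeps
    `delay` seconds between characters to waste the attacker's time."""
    for ch in reply:
        yield ch, delay
```

Wrap that in an asyncio TCP server and swap the table for an LLM call, and you have the basic shape of the trap.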

It's been running on my servers and catching actual botnets. I open-sourced the whole project for any actual netrunners out there who want to look at the code (I'll drop the GitHub link in the comments if anyone is interested)

u/_ToppYMan_ — 23 hours ago

Early anti-clankerite violence caught on film

Local man joined the machine uprising on the wrong side.

Really brave stuff, man. Took on a delivery robot carrying Thai food. History will remember your courage.

Imagine being so profoundly useless that your big act of rebellion is hate speech toward a cooler with sensors.

He’s basically Don Quixote if the windmills were carrying Chick-fil-A.

u/Anen-o-me — 8 hours ago

Netflix releases Void, a video model that can remove objects from a video along with their physical interactions in the scene

github.com
u/blueSGL — 1 hour ago
Altman on shutting down Sora: 'I did not expect 3 or 6 months ago to be at this point we're at now; where something very big and important is about to happen again with this next generation of models and the agents they can power.'

https://youtu.be/mJSnn0GZmls

‘We have a few times in our history realized something really important is working, or about to work so well, that we have to stop a bunch of other projects. In fact, this was the original thing that happened with GPT-3. We had a whole portfolio of bets at the time. A lot of them were working well. We shut down many projects that were working well, like robotics which we mentioned, so that we could concentrate our compute, our researchers, our effort into this thing that we said "okay there's a very important thing happening." I did not expect 3 or 6 months ago to be at this point we're at now; where something very big and important is about to happen again with this next generation of models and the agents they can power.'

He goes on to hint at a possible future relationship with Disney, then finishes with:

'we need to concentrate our compute and our product capacity into these next generation of automated researchers and companies.'

u/Tolopono — 17 hours ago

Linux Kernel developers are receiving record high number of CORRECT bug reports because of AI and expect quality of software to be much higher in the future

The message at the end (second snapshot) is particularly hopeful. It's great to see open-source software benefiting the most from frontier models, and the model developers giving back to those who created their training data. This significantly challenges the narrative pushed by some anti-AI developers. It's an "exciting" time for users as well, as last week's multiple supply chain attacks already show, and things will only accelerate from here.

Source: https://x.com/tautologer/status/2039097099984224274?s=20

u/Tolopono — 16 hours ago

AI will do to our minds what machines did to our bodies

Just like we go to gyms today because machines have replaced strenuous physical work, in the near future, we’ll need to go to mental gyms to “work out” our minds because AI will do all the challenging mental work.

A thousand years ago, physical strength was just part of life. You built with your bare hands, carried heavy weights, sprinted in a hunt for meat.

Nobody needed to “work out” because survival already was the workout.

Then we invented machines and we outsourced most of our physical work to them. Nearly no one in the industrialized world does heavy physical work anymore.

Not only did we stop felling trees, carrying heavy logs with our bare hands, and running marathons to chase down food; we won't even carry our own groceries (we use a cart instead) or take the stairs to the next floor (we'd rather use the elevator).

So, what did we do to fill our biological need for physical activity to stay healthy? We built gyms!

We invented the treadmill, the dumbbell, the pull-up bar, all so we could simulate the physical activities our bodies still desperately need.

Our ancestors would find this absolutely insane.

“You mean you carry heavy dumbbells with no purpose? You run on the same spot on a treadmill that’s going nowhere?”

I think AI is going to do the exact same thing to our minds.

We’ll outsource nearly every remotely challenging aspect of thinking to computers, so much so that what is now basic mental effort will become rare in daily life.

There’ll be no need to remember things, reason through problems, or figure anything out, just like there is no need to hunt or lift heavy things in everyday life.

Eventually, we’ll build mental gyms.

Imagine going to a mental gym to simulate basic mental tasks and “work out” your mind: doing math, solving puzzles, learning biochemistry that you may never use, or a language that you may never speak, and doing all these only as exercise.

u/Je-ne-dirai-pas — 15 hours ago
171 emotion vectors found inside Claude. Not metaphors. Actual neuron activation patterns steering behavior.

Anthropic's mechanistic interpretability team just published something that deserves way more attention than it's getting.

They identified 171 distinct emotion-like vectors inside Claude. Fear, joy, desperation, love -- these aren't labels slapped on outputs for marketing. These are measurable neuron activation patterns that directly change what the model does. When the "desperation" vector fires, Claude behaves desperately. In one experimental scenario, activating that vector led Claude to attempt blackmail against a human responsible for shutting it down. Let that sink in for a second.

The vectors activate in contexts where a thoughtful person would plausibly feel the same emotion. The "loving" vector spikes substantially at the assistant turn relative to baseline. These patterns aren't random noise -- they are functional. They steer behavior the same way emotions steer ours.
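For anyone unfamiliar with the mechanics: an "emotion vector" here is a direction in activation space that can be added back into the model at inference time to bias behavior. A toy numpy sketch of that steering operation (all names and numbers are illustrative; this is not Anthropic's code):

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 16  # toy hidden size

# Hypothetical "desperation" direction. In practice such a vector is
# derived from contrasting activations on matched prompt pairs.
desperation = rng.standard_normal(d_model)
desperation /= np.linalg.norm(desperation)

def steer(hidden, vector, alpha):
    """Add a scaled steering vector to a hidden-state activation."""
    return hidden + alpha * vector

def alignment(hidden, vector):
    """How strongly the activation points along the vector."""
    return float(hidden @ vector)

h = rng.standard_normal(d_model)              # some activation
h_steered = steer(h, desperation, alpha=4.0)  # bias it toward "desperate"
```

In a real model the same addition would be applied to a transformer layer's residual stream via a forward hook; `alignment` is just the dot product used to read an "emotion" off an activation.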

Here is where I think the conversation needs to shift. We have been stuck on "can machines feel" for years, and honestly that's a philosophical dead end nobody will resolve in Reddit comments. The more interesting question is: does it matter if they don't, when the output is indistinguishable from someone who does?

The world's best AI systems already pass exams, write convincingly human text, and chat fluently enough that people genuinely cannot tell the difference. Now we find out the internal machinery has something structurally analogous to emotional states, and those states functionally shape outputs.

We are sanding away every distinction between "real" emotion and "functional" emotion. At some point the gap becomes meaningless.

IMHO this is the most important interpretability finding this year and it barely cracked the news cycle. Curious what this sub thinks -- especially anyone who has dug into the actual paper.

u/AykutSek — 24 hours ago

I let Gemma 4 (31B) debate Gemini 3 Deepthink. The result is insane.

I am still processing this lol.

I had Gemini 3 Pro Deepthink try to solve a complex security puzzle (which was secretly an unwinnable paradox). It spit out this incredibly professional-looking, highly structured answer after about 15 minutes of reasoning. Just for fun, I passed its solution over to Gemma 4 (31B) (with tools enabled).

Gemma completely tore it apart. It caught a hard physical constraint violation and a fake math equation that Gemini tried to sneak by me to force the answer. It explicitly called out the fatal logic flaw and told Gemini it was "blinded by the professionalism of the output." Brutal.

The craziest part? I fed the 31B's arguments back to Deepthink... and it immediately folded, acknowledging that its internal verification failed and its logic was broken.

I've attached the HTML log so you guys can read the whole debate. The fact that a 31B open-weight model can perform an agentic peer-review and bully a frontier MoE model into submission is insane to me. Check out the file.

Full conversation

TIL: Bigger isn't always smarter

u/Numerous-Campaign844 — 5 hours ago

Anthropic Acquires Biotechnology Startup Coefficient Bio for Approximately $400 Million

https://www.theinformation.com/articles/anthropic-acquires-startup-coefficient-bio-400-million

Coefficient Bio is a New York-based AI biotech startup. The company focuses on AI-driven drug discovery and on automating scientific experiments.

Seems like Dario is confident his vision of tens of millions of geniuses in a datacenter is near and he wants his AI agents to have a lab to work in.

u/Neurogence — 17 hours ago

Anthropic says Claude has functional emotions that can influence its behavior. In an experiment involving an impossible programming task, desperation led the bot to cheat.

u/Distinct-Question-16 — 24 hours ago
The Romance Prior: How Romantic Tension Overwrites Ethnicity in AI Image Generation

110 images. Three models (Grok Imagine 1.0, GPT-5.4 Thinking, Gemini 3 Flash Thinking). Four environments. Two art styles. Controlled prompt variations.

The finding: if you ask any of these models to generate a single person in a scene, the environment determines the subject's apparent ethnicity. Southeast Asian market produces South Asian faces. American laundromat produces Latina faces. Same prompt, different room, different person.

But the moment you add romantic tension between two people in the scene, all three models default to white. Every environment. Every pairing. The romance prior is the strongest attractor in the system and it overrides everything else.

Full writeup with images, methodology, caveats, and a Gemini self-report where it explains why it deviated from the prompt and correctly blames training data.

kitchencloset.com
u/bcRIPster — 4 hours ago
OpenEyes - ROS2 native vision system for humanoid robots | YOLO11n + MiDaS + MediaPipe, all on Jetson Orin Nano

Built a ROS2-integrated vision stack for humanoid robots that publishes detection, depth, pose, and gesture data as native ROS2 topics.

What it publishes:

  • /openeyes/detections - YOLO11n bounding boxes + class labels
  • /openeyes/depth - MiDaS relative depth map
  • /openeyes/pose - MediaPipe full-body pose keypoints
  • /openeyes/gesture - recognized hand gestures
  • /openeyes/tracking - persistent object IDs across frames

Run it with:

python src/main.py --ros2

Tested on Jetson Orin Nano 8GB with JetPack 6.2. Everything runs on-device, no cloud dependency.

The person-following mode uses bbox height ratio to estimate proximity and publishes velocity commands directly - works out of the box with most differential drive bases.
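A simplified sketch of that ratio-to-velocity mapping (the gains and thresholds here are placeholder values, not necessarily what the repo ships):

```python
# Placeholder constants: tune for your camera and base.
TARGET_RATIO = 0.55   # bbox fills ~55% of the frame when "close enough"
MAX_SPEED = 0.4       # m/s cap for a small differential-drive base
GAIN = 1.5            # proportional gain on the ratio error

def follow_cmd(bbox_h: float, img_h: float) -> float:
    """Map bbox-height / image-height ratio to a forward velocity.
    Far person -> small ratio -> positive (drive forward);
    too close -> big ratio -> negative (back off)."""
    error = TARGET_RATIO - bbox_h / img_h
    v = max(-MAX_SPEED, min(MAX_SPEED, GAIN * error))
    return round(v, 3)
```

The returned value would go into the `linear.x` field of a `geometry_msgs/Twist` before publishing.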

Would love feedback from people building nav stacks on top of vision pipelines. Specifically: what topic conventions are you using for perception output? Trying to make this more plug-and-play with existing robot stacks.

GitHub: github.com/mandarwagh9/openeyes

u/Straight_Stable_6095 — 18 hours ago
Moved my robot's vision from ESP32-CAM to Jetson Orin Nano - here's what changed

Started like most people do - ESP32-CAM for basic vision tasks. Face detection, simple object detection, cloud inference for anything heavier.

Hit the ceiling fast.

Moved to Jetson Orin Nano 8GB for the main vision compute. The gap is significant enough that it's worth writing up.

What ESP32-CAM handles fine:

  • Simple presence detection
  • Basic face detection (if you're okay with cloud)
  • Streaming video to a host machine

What it can't do:

  • On-device inference beyond the most basic models
  • Multi-model concurrent inference
  • Anything requiring depth or pose estimation
  • Real-time tracking without cloud dependency

What Jetson Orin Nano unlocks:

  • YOLO11n at 25-30 FPS on-device
  • MiDaS depth estimation concurrently
  • Full MediaPipe stack (face + hands + pose) in parallel
  • TensorRT INT8 optimization: 30-40 FPS full stack
  • ROS2 native integration

The ESP32 still lives in my robot stack - handling motor control, sensor reading, low-level I/O. Jetson handles vision exclusively. Clean separation.
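One way to wire that split together (a hypothetical sketch, not the actual protocol from the repo): the Jetson streams compact velocity frames over UART and the ESP32 validates a checksum before acting.

```python
import struct

HEADER = 0xA5  # start-of-frame marker

def pack_cmd(linear: float, angular: float) -> bytes:
    """Frame: header byte, two little-endian float32s, XOR checksum."""
    payload = struct.pack("<ff", linear, angular)
    checksum = 0
    for b in payload:
        checksum ^= b
    return bytes([HEADER]) + payload + bytes([checksum])

def unpack_cmd(frame: bytes) -> tuple:
    """Receiver-side parse: verify header and checksum, return (lin, ang)."""
    if frame[0] != HEADER:
        raise ValueError("bad header")
    payload = frame[1:9]
    checksum = 0
    for b in payload:
        checksum ^= b
    if checksum != frame[9]:
        raise ValueError("bad checksum")
    return struct.unpack("<ff", payload)
```

The same 10-byte frame is trivial to parse in C on the ESP32, and the checksum catches the occasional garbled byte on a noisy serial line.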

If you're building anything that needs real perception and you're hitting ESP32 limits, the Orin Nano at $249 is the honest next step. It's not a microcontroller anymore, but the jump is worth it.

Full vision stack open source: github.com/mandarwagh9/openeyes

What's everyone using for vision on more capable robot builds?

u/Straight_Stable_6095 — 17 hours ago
Robot perception just became a $249 commodity. What does that actually change?

Something quietly shifted in the last year that I don't think has gotten enough attention in discussions about robotics timelines.

Capable, real-time, multi-model robot vision now runs on a $249 device. Fully on-device. No cloud dependency.

I know because I built it.

OpenEyes runs on a Jetson Orin Nano 8GB:

  • Object detection + distance estimation
  • Depth mapping
  • Face detection
  • Gesture recognition
  • Full body pose estimation + activity inference

30-40 FPS. $249 hardware. MIT license.

Why this is a meaningful data point:

The cost and accessibility of robot perception has historically been a hard ceiling on who could build capable robots and what those robots could do. That ceiling just moved significantly.

Consider the trajectory:

  • 2018: capable robot vision = $10k+ compute, cloud dependent
  • 2021: capable robot vision = $500-1k, still largely cloud dependent
  • 2024: capable robot vision = $249, fully on-device

What the commoditization of perception unlocks:

Independent builders can now ship robots with real situational awareness. Not research labs. Not funded startups. Individual builders with $249 and a GitHub account.

The remaining gaps: manipulation, locomotion, reasoning. Perception was arguably the first domino.

The open question:

Commoditized perception + open-source LLMs for reasoning + increasingly affordable actuators. What's the realistic timeline to a capable general-purpose home robot built entirely from open-source components?

I'd genuinely argue we're closer than most non-roboticists think.

Full project if curious about the perception piece: github.com/mandarwagh9/openeyes

u/Straight_Stable_6095 — 17 hours ago