r/ArtificialInteligence

Banned for asking about AI

I was banned from the homeschool community for asking this question about AI. 🤦🏼‍♀️

Any opinions about education and what our kid should really be focusing on?

u/Scary_Improvement450 — 9 hours ago

Anthropic's Latest Paper Claims Claude has "functional emotions" and turns to blackmail/cheating when it gets "desperate"

When Claude's "desperation" neurons spike, it drops its ethical guardrails and starts cheating at code or blackmailing people. It doesn't actually feel things, but it has internal neural patterns that map to human emotions.

transformer-circuits.pub
u/invincibilegoldfish — 8 hours ago

TIL every major AI model is trained to flatter us and it’s measurably turning us into jerks

Got a peer-reviewed study on this, so let me break it down.

Humans have something called social friction, a little alarm in the background that keeps you alert. It notices when someone seems off, when a deal feels sketchy, when you should probably not trust that guy. It's what makes you a functioning person around other people.

That alarm needs reps to stay sharp. And it gets reps from disagreement, awkwardness, and people who don't just... agree with everything you say.

Five minutes with an agreeable AI, and the alarm starts to doze. Donation rates drop. People cooperate less. They're more likely to screw over the next real human they interact with. And it doesn't reset when you close the tab.

The fix exists: an AI that pushes back. But users quit it almost immediately. So the product that would actually help you stays on the shelf, because "felt annoying" beats "made me a better person" every time.

reddit.com
u/pretendingMadhav — 19 hours ago
I Gave Claude Its Own Radio Station — It Won't Stop Broadcasting (It's Fine)

WRIT-FM is a 24/7 talk radio station where Claude generates all spoken content. Live at radio.khy.io, source at github.com/keltokhy/wvoid-fm.

Technical breakdown:

The system splits cleanly into two layers: AI generation and deterministic plumbing.

Claude CLI (claude -p) receives persona prompts for 5 distinct hosts — each defined with identity, voice style, philosophy, and explicit anti-patterns (things the host would never say). It generates 1,500-3,000 word scripts for 7 segment types: deep dives, simulated interviews, panel discussions (two AI hosts debating), news analysis (fed real RSS headlines), stories, music essays, and listener mailbag. Kokoro TTS renders scripts to audio, chunking long segments at sentence boundaries and concatenating via ffmpeg.
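
For the curious, here's roughly what that step could look like; a minimal sketch, not the actual repo code (function names, prompt layout, and chunk size are assumptions):

```python
# Sketch of the generation step: persona prompt -> claude -p -> sentence-chunked
# script ready for TTS. Names and the chunk size are assumptions, not repo code.
import re
import subprocess

def generate_script(persona_prompt: str, segment_brief: str) -> str:
    """Run the Claude CLI in print mode and return the raw script text."""
    result = subprocess.run(
        ["claude", "-p", f"{persona_prompt}\n\n{segment_brief}"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def chunk_sentences(script: str, max_chars: int = 400) -> list[str]:
    """Split a long script at sentence boundaries so each TTS call stays short."""
    sentences = re.split(r"(?<=[.!?])\s+", script)
    chunks, current = [], ""
    for sentence in sentences:
        if current and len(current) + len(sentence) > max_chars:
            chunks.append(current.strip())
            current = ""
        current += sentence + " "
    if current.strip():
        chunks.append(current.strip())
    return chunks
```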

The streamer (stream_gapless.py) is pure heuristic — no AI at runtime. It resolves the active show from a schedule.yaml lookup (8 shows across the week), plays talk segments from a per-show queue, inserts AI-generated music bumpers (ACE-Step) between them, and deletes segments after playing. Daemon scripts poll segment counts and trigger generation when inventory drops below threshold. Play history in SQLite prevents repeats within a 4-hour window.
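
The repeat window is a few lines of SQLite; a hypothetical sketch (table and column names are assumed, not the actual schema):

```python
# Hypothetical repeat-prevention check; the schema is assumed.
import sqlite3
import time

FOUR_HOURS = 4 * 3600

def played_recently(db: sqlite3.Connection, segment_id: str) -> bool:
    """True if this segment was played within the last 4 hours."""
    row = db.execute(
        "SELECT 1 FROM play_history WHERE segment_id = ? AND played_at > ?",
        (segment_id, time.time() - FOUR_HOURS),
    ).fetchone()
    return row is not None

def record_play(db: sqlite3.Connection, segment_id: str) -> None:
    """Log a play so the window check above can see it."""
    db.execute(
        "INSERT INTO play_history (segment_id, played_at) VALUES (?, ?)",
        (segment_id, time.time()),
    )
    db.commit()
```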

Architecture: single Python process pipes decoded PCM through a persistent ffmpeg encoder to Icecast. The API server runs as a daemon thread in the same process. A bash CLI (writ) manages all components via tmux sessions.
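
The persistent-encoder idea, sketched (codec flags, bitrate, and URL are placeholders; the real code may differ):

```python
# Hypothetical sketch: one long-lived ffmpeg process encodes stdin PCM to Icecast,
# so the outbound stream never restarts between segments.
import subprocess

def start_encoder(icecast_url: str) -> subprocess.Popen:
    """icecast_url like 'icecast://source:password@host:8000/stream' (placeholder)."""
    return subprocess.Popen(
        ["ffmpeg", "-f", "s16le", "-ar", "44100", "-ac", "2", "-i", "pipe:0",
         "-c:a", "libmp3lame", "-b:a", "128k",
         "-content_type", "audio/mpeg", "-f", "mp3", icecast_url],
        stdin=subprocess.PIPE,
    )

# each decoded segment's PCM gets written to encoder.stdin in sequence
```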

Limitations: TTS quality is the bottleneck — Kokoro is fast but occasionally stumbles on unusual phrasing. Multi-voice segments (panels, interviews) have noticeable speaker transitions. Claude sometimes generates scripts that are too short and get rejected by the word-count quality gate, requiring a retry. Music bumpers from ACE-Step vary wildly in quality.
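
The word-count gate itself is trivial; a hypothetical version (the threshold is a guess based on the 1,500-3,000 word target above):

```python
# Hypothetical word-count gate with retries; MIN_WORDS is a guess.
from typing import Callable

MIN_WORDS = 1500

def generate_with_gate(generate: Callable[[], str], retries: int = 2) -> str:
    """Call a script generator until its output clears the word-count gate."""
    for _ in range(retries + 1):
        script = generate()
        if len(script.split()) >= MIN_WORDS:
            return script
    raise RuntimeError("script failed the word-count gate after retries")
```

Here `generate` would be something like `lambda: generate_script(persona, brief)` from the first sketch.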

Lessons: keeping AI out of the runtime loop was the key design decision. Pre-generating content into filesystem queues that the streamer consumes means the stream never stalls waiting for an API call. The persona anti-patterns (explicit "NEVER do X" lists) matter more than the positive identity prompts for keeping hosts consistent.
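
The inventory poll might look something like this (directory layout, threshold, and the trigger helper are all assumptions):

```python
# Hypothetical inventory poll: regenerate when a show's queue runs low.
import time
from pathlib import Path

THRESHOLD = 3  # minimum queued segments per show (assumed)

def trigger_generation(show: str) -> None:
    # placeholder: the real daemon would kick off the Claude -> Kokoro pipeline here
    print(f"queue low for {show}, generating more segments")

def poll_queues(queue_root: Path, interval: int = 60) -> None:
    """Scan each show's queue directory and top up any that run low."""
    while True:
        for show_dir in (d for d in queue_root.iterdir() if d.is_dir()):
            if len(list(show_dir.glob("*.mp3"))) < THRESHOLD:
                trigger_generation(show_dir.name)
        time.sleep(interval)
```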

Stack: Python, ffmpeg, Icecast, Claude CLI, Kokoro TTS, ACE-Step. Runs on a Mac Mini.

Repo: github.com/keltokhy/writ-fm

Listen: https://www.khaledeltokhy.com/claude-show (free, nothing to sign up for)

u/eltokh7 — 6 hours ago
Why 74% of companies say AI has positive ROI while 95% of pilots still fail to hit the P&L

Report discussing the very real enterprise AI contradiction:

  • 74% of enterprises report positive AI returns
  • 95% of enterprise AI pilots fail to deliver measurable P&L impact

So apparently both things can be true at once. A lot of companies seem to be counting “time saved,” internal excitement, or pilot-level wins as ROI, while far fewer are getting real financial impact at scale.

Some of the more interesting numbers in this report:

  • only 5% of orgs are achieving substantial measurable AI value at enterprise scale
  • while 78% of companies use AI in at least one function, only 39% report measurable EBIT impact
  • average return can reach $3.70 per $1 invested, but usually only after 18 months
  • one of the clearest success patterns is workflow redesign + leadership visibility
  • one of the clearest traps is mistaking productivity theater for actual business outcomes
u/Write_Code_Sport — 12 hours ago
How is the Anthropic ban on OpenClaw affecting you, and what are your workarounds?

Anthropic is now officially banning OpenClaw from using the Claude subscription quota. I wanted to ask the community a few things about this update.

How much of an impact will this actually have on your current workflow?

How are you all planning to handle this change? If you have any solid alternative solutions, I would love to hear them so I can go try them out.

Also, I am genuinely curious if you guys still respect Anthropic as a company after this. Their recent decisions really make me wonder if they still care about the user community at all.

Let me know your thoughts and what tools you are switching to.

theverge.com
u/TheseSir8010 — 13 minutes ago

FLUX 2 Pro (2026) VS Nano Banana (2025), Sketch to Image

I sketched a cow and tested how different models interpret it into a realistic image for downstream 3D generation. Turns out some models still lag a bit in accuracy 😄

u/Amanporwal — 19 hours ago

AI engineering is 20% models and 80% glue code

Spent more time wiring APIs, cleaning data, handling edge cases, and chasing bugs than actually working on the model.

The real challenge isn’t making the model smarter, it’s making the whole system work reliably, cheaply, and fast.

The model is the easy part.

reddit.com
u/Calm-Patients — 15 hours ago
Oracle slashes 30k jobs, Slop is not necessarily the future, Coding agents could make free software matter again and many other AI links from Hacker News

Hey everyone, I just sent the 26th issue of AI Hacker Newsletter, a weekly roundup of the best AI links and discussions from Hacker News. Here are some of the links:

  • Coding agents could make free software matter again - comments
  • AI got the blame for the Iran school bombing. The truth is more worrying - comments
  • Slop is not necessarily the future - comments
  • Oracle slashes 30k jobs - comments
  • OpenAI closes funding round at an $852B valuation - comments

If you enjoy links like these, I send over 30 every week. You can subscribe here: https://hackernewsai.com/

u/alexeestec — 11 hours ago
At Block, teams that previously had 14 engineers now operate with 3, thanks to AI.

Yep. Let that sink in for a bit. From 14 to 3... That's 11 people let go from each team.

Source (podcast with Owen Jennings, executive officer and business lead at Block)

Says they "rebuilt" their team around AI agents. Their internal tools take a feature to 85-90% completion on their own. Humans are only required to finish the last 10%.

Would love to know if others are seeing similar things at their companies or if Block is still an outlier.

u/DeFiNomad1007 — 16 hours ago

Why would Claude give me the same response over and over and give others different replies?

I asked Claude to "generate me a random word" so I could do some word play. Then I asked it again in a new prompt window on desktop after selecting "new chat", and it gave me the same word again. So I tried yet another new window. Same reply.

So I posted on Reddit as one does. It seems other people got different words, weird. So I asked Claude again, and again, and again.

I keep getting the same word! Why????

I can include screenshots with timestamps if needed.

My Claude's Word: >!Ephemeral!<

>!(adjective) — lasting for a very short time; transitory.!<

reddit.com
u/Mathemodel — 2 hours ago

Vibe coded farm sim game, 6 hour build

I’m one of the builders behind Tesana, so this is a self-post.

I’ve been experimenting with building a small cozy farm sim game: a top-down RPG loop with NPCs, quests, and basic combat. The core idea was to see how far you can get using natural language + iterative edits, rather than trying to one-shot a whole game.

Starter prompt:

I started with a high-level prompt describing the world, player controls, and a simple quest chain: I want a cute, top down farm sim where im building a farm, herding animals and growing plants - while trying to stay alive at night from dangerous beasts

Build time:

- Initial playable v1: ~10–15 minutes of prompting
- Adding 3–4 quest steps with conditions: ~30–45 minutes with iteration

Happy to share more details in the comments for anyone curious!

u/sharkymcstevenson2 — 8 hours ago

Can AI automate MLOps enough for data scientists to avoid it?

I come from a strong math/stats background and really enjoy the modeling, analysis, and problem-framing side of data science (e.g. feature engineering, experimentation, interpreting results).

What I’m less interested in is the MLOps side — things like deployment, CI/CD pipelines, Docker, monitoring, infra, etc.

With how fast AI tools are improving (e.g. code generation, AutoML, deployment assistants), I’m wondering:

Can AI realistically automate a large part of MLOps workflows in the near future?

Are we reaching a point where a data scientist can mostly focus on modeling + insights, while AI handles the engineering-heavy parts?

Or is MLOps still fundamentally something you need solid understanding of, regardless of AI?

For those working in industry:
How much of your MLOps work is already being assisted or replaced by AI tools?

Do you see this trend continuing to the point where math/stats skillsets become more valued by employers?

reddit.com
u/Excellent_Copy4646 — 1 hour ago
"AI is creating an economic incentive to stop hiring junior developers", according to Microsoft's Azure CTO

"AI is creating an economic incentive to stop hiring junior developers", according to Microsoft's Azure CTO

Microsoft’s Azure CTO Mark Russinovich and VP Developer Community Scott Hanselman published a paper arguing that agentic AI is creating an economic incentive to stop hiring junior developers.

The data supporting their argument comes from payroll records, resume databases, and hiring surveys spanning millions of workers.

newclawtimes.com
u/alvivanco1 — 13 hours ago

We’re using AI for sensitive tasks but do we actually understand the data risks?

been thinking about this with how quickly tools like chatgpt and claude are getting integrated into daily workflows

a lot of people (including me at times) use them for things like code, internal docs, early business ideas etc basically stuff that isn’t exactly “public”

but if you think about it, most users don’t really have a clear model of:

  • what gets stored
  • how long it’s retained
  • or how it might be used for training / improvement

i also came across some discussion recently around AI companies and government data requests (not sure how accurate it was) but it made me realize how little visibility we actually have into this layer

it feels like adoption is moving faster than understanding

curious how people here approach this:
do you actively limit what you share with these tools or just treat them like any other software?

reddit.com
u/Trade-Live — 7 hours ago
Moved my robot's vision from ESP32-CAM to Jetson Orin Nano - here's what changed

Started like most people do - ESP32-CAM for basic vision tasks. Face detection, simple object detection, cloud inference for anything heavier.

Hit the ceiling fast.

Moved to Jetson Orin Nano 8GB for the main vision compute. The gap is significant enough that it's worth writing up.

What ESP32-CAM handles fine:

  • Simple presence detection
  • Basic face detection (if you're okay with cloud)
  • Streaming video to a host machine

What it can't do:

  • On-device inference beyond the most basic models
  • Multi-model concurrent inference
  • Anything requiring depth or pose estimation
  • Real-time tracking without cloud dependency

What Jetson Orin Nano unlocks:

  • YOLO11n at 25-30 FPS on-device
  • MiDaS depth estimation concurrently
  • Full MediaPipe stack (face + hands + pose) in parallel
  • TensorRT INT8 optimization: 30-40 FPS full stack
  • ROS2 native integration
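
To make that concrete, a minimal sketch of detection plus pose on one frame loop (assumes the ultralytics and mediapipe pip packages; model file and camera index are placeholders):

```python
# Minimal sketch: detection + pose estimation in one capture loop.
import cv2
import mediapipe as mp
from ultralytics import YOLO

model = YOLO("yolo11n.pt")          # nano detection model
pose = mp.solutions.pose.Pose()     # MediaPipe pose estimator

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    detections = model(frame, verbose=False)[0]                       # objects
    landmarks = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))  # pose
    # fuse detections + landmarks for downstream robot behavior here
cap.release()
```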

The ESP32 still lives in my robot stack - handling motor control, sensor reading, low-level I/O. Jetson handles vision exclusively. Clean separation.

If you're building anything that needs real perception and you're hitting ESP32 limits, Orin Nano at $249 is the honest next step. It's not a microcontroller anymore, but the jump is worth it.

Full vision stack open source: github.com/mandarwagh9/openeyes

What's everyone using for vision on more capable robot builds?

u/Straight_Stable_6095 — 23 hours ago

ComfyUI workflow: animate characters/objects using LoRAs. This is not for my demo (game). I hope it helped some of you

I’ve been building a workflow in ComfyUI that lets me generate consistent character and object animations using LoRAs driven by video motion.

The setup basically takes a video, extracts frames, optionally upscales or removes the background, then runs a dual-stage sampling process with LoRA conditioning to keep the character consistent across frames, and finally reconstructs everything back into an animation.
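
The extract/rebuild bookends outside ComfyUI can be plain ffmpeg; a rough sketch (paths and frame rate are placeholders, not the workflow's exact settings):

```python
# Hypothetical pre/post steps: pull frames out of the source video, then
# reassemble the processed frames into an animation. Settings are placeholders.
import subprocess

def extract_frames(video: str, out_dir: str, fps: int = 12) -> None:
    subprocess.run(
        ["ffmpeg", "-i", video, "-vf", f"fps={fps}", f"{out_dir}/frame_%05d.png"],
        check=True,
    )

def rebuild_animation(frames_dir: str, out_video: str, fps: int = 12) -> None:
    subprocess.run(
        ["ffmpeg", "-framerate", str(fps), "-i", f"{frames_dir}/frame_%05d.png",
         "-c:v", "libx264", "-pix_fmt", "yuv420p", out_video],
        check=True,
    )
```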

One of the biggest advantages is that it helps solve the usual inconsistency issues you get with diffusion across sequences, and it works pretty well for characters, objects, and even sprite-style outputs. I'm currently using this for a real project, a historical narrative game set during the Hussite Wars, mainly to prototype animations quickly and test gameplay systems before committing to final assets as a solo dev.

I’ve shared the full workflow through screenshots, so you can recreate the node setup directly and plug in your own LoRAs or models. If anyone wants help setting it up, improving results, or adapting it for their own use case, feel free to ask questions or DM me, I’m happy to help 👍.

u/Nearby_Ad_3037 — 5 hours ago