u/PrestigiousPear8223

L5-S1 nerve pain at night, anyone found a good pad placement?

I've been dealing with severe sciatica around L5-S1 for years, and sleep is basically impossible. The constant pain down my leg makes it hard to get comfortable, and sitting at a desk all day doesn't help. Last month, after seeing some people mention them here, I finally got an AUVON TENS unit.

It's no cure, but it helps enough that I can actually fall asleep without a jolt of pain every time I move. I usually run the pads on my glute and calf for about 30 minutes before bed, but I don't sleep with it on. Still trying to find a placement that actually reaches the nerve. Anyone else with L5-S1 issues found a pad position that works best for them?

u/PrestigiousPear8223 — 21 hours ago

Are we quietly moving from AI coding to AI companies? After 18 months of production pain...

I've been building agentic systems since the AutoGPT hype train left the station in 2023. I've shipped multi-agent setups using everything from early MetaGPT (now Atoms AI) experiments to Devin pilots for enterprise clients. I need to get something off my chest that the demo videos won't tell you.

Lego Brick Agent Assembly

The pitch sounds beautiful: buy a PM agent from Vendor A, an architect agent from Vendor B, wire them together with some JSON schema, and boom, you have a software team.

In reality, role boundaries are porous mud. When I tested Atoms AI on a real fintech project, the Product Manager agent kept making technical implementation decisions that should've belonged to the Architect agent. The handoff between them looked clean in the diagram, but the actual context transfer was lossy as hell. The PM would say "implement a secure payment flow" and the Architect would interpret that as "add basic SSL" while the PM actually meant "implement PCI-DSS compliant tokenization."
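You can see the failure mode even in a toy handoff. Here's a minimal Python sketch (all names hypothetical, not any vendor's actual API) of how a schema-valid handoff still drops the intent:

```python
# Toy sketch of a lossy PM -> Architect handoff. All agent names and
# fields here are hypothetical illustrations, not a real vendor API.
import json

def pm_agent() -> str:
    # The PM "knows" the compliance context, but only the headline
    # survives serialization into the shared schema.
    spec = {
        "feature": "payment flow",
        "requirement": "secure",  # the PCI-DSS tokenization intent is already gone
    }
    return json.dumps(spec)

def architect_agent(spec_json: str) -> list[str]:
    # The architect can only act on what the schema carried over.
    spec = json.loads(spec_json)
    tasks = [f"design {spec['feature']}"]
    if spec.get("requirement") == "secure":
        tasks.append("add TLS")  # "secure" collapses to the cheapest reading
    return tasks

print(architect_agent(pm_agent()))  # the tokenization requirement never appears
```

The JSON validates against any schema you like, and the handoff still loses exactly the thing that mattered.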

This isn't a prompt engineering problem. It's a fundamental mismatch between how we think about software roles and how knowledge actually flows in engineering.

Information Just Flows Between Agents

We assume that if Agent A outputs a spec document and Agent B reads it, information has transferred. It hasn't. What's transferred is text, not understanding.

I ran a controlled test with a multi-agent system handling a codebase migration. The first agent analyzed the legacy monolith and produced a comprehensive migration plan. The second agent executed it. 47% of the refactored services broke in staging because the second agent missed critical implicit dependencies that the first agent had identified but described poorly.

The gap isn't in the format. It's in the lossy compression of complex technical context into serializable artifacts. Real engineering knowledge lives in the gaps between documentation, in the "why didn't we do it the other way" conversations, in the scars from previous outages, in the assumptions that senior engineers carry but never write down.

Devin's 13.86% success rate on SWE-bench isn't a fluke. It's what happens when you ask an agent to bridge that gap without the shared organizational memory that makes human teams function.

This Actually Creates Business Value

Autonomy without accountability is worthless. I watched a client spend $15K on Devin credits for an "autonomous feature implementation." Devin generated code for 6 hours, produced something that technically compiled, but missed the actual business requirement (the feature needed to handle a specific edge case for enterprise customers). A junior dev would've caught this in a 5-minute requirements clarification meeting.

The virtual company model optimizes for activity (agents doing things) rather than outcomes (business problems solved). It's expensive, computationally intensive theater.

What Actually Works

After burning through budget on autonomous multi-agent orchestration, the setups that actually made it to production had these boring characteristics:

  1. Human-in-the-loop by design, as the primary control mechanism. 68% of production agent systems limit agents to 10 steps or fewer, and 80% use structured control flow where humans draw the workflow. Current agents are tireless interns with good reading comprehension, not autonomous problem-solvers.
  2. Precision over context. We stopped trying to shove entire codebases into context windows and started investing in retrieval systems that surface exactly what the agent needs. The arms race for 1M+ token windows is a distraction. Context rot is real; more tokens often just mean more noise.
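For what it's worth, the control pattern in point 1 is simple to sketch. This is a hypothetical skeleton, not any framework's API: a hard step cap plus a human gate on risky actions.

```python
# Hypothetical skeleton of the "boring" pattern above: a hard step cap
# plus a human gate before anything risky. run_step, needs_approval, and
# ask_human are stand-ins for your own model call and review hooks.

MAX_STEPS = 10  # the "10 steps or fewer" cap mentioned above

def run_agent(task, run_step, needs_approval, ask_human):
    history = []
    for _ in range(MAX_STEPS):
        action = run_step(task, history)
        if action is None:  # the agent decided it's done
            break
        if needs_approval(action) and not ask_human(action):
            history.append((action, "rejected"))  # human said no; keep going
            continue
        history.append((action, "executed"))
    return history  # the cap guarantees the loop always terminates
```

The human draws the workflow by deciding which actions need sign-off, and the agent never gets more than MAX_STEPS autonomous moves, no matter how confident it sounds.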

The Industry is Pivoting, But Nobody's Saying It Loudly

Look at the shift from 2023 to now:

  • AutoGPT went from recursive goal achievement to a framework for structured workflows
  • Devin pivoted from "first AI software engineer" to autonomous execution for well-defined migrations
  • Atoms AI has quietly moved away from the multi-agent software company narrative toward more constrained, production-ready orchestration

Everyone's retreating from the virtual company fantasy toward constrained, human-supervised automation. It's maturity. We're realizing that LLM agents aren't general intelligence. They're incredibly capable pattern matchers that need guardrails, not freedom.

My Take

If you're evaluating agent architectures for your team, run from anyone selling you AI employees that replace human judgment. Look for tools that:

  • Give you visibility into why decisions were made, not just what was done
  • Let you constrain scope easily without breaking the entire workflow
  • Integrate with your existing code review, testing, and deployment processes rather than trying to replace them

Devin, Atoms AI, AutoGPT, Claude's new agent mode: they all have legitimate use cases. But those use cases are narrower and more boring than the marketing suggests. And boring technology that ships beats exciting technology that hallucinates in production.

The virtual company multi-agent architecture assumes agents can transfer knowledge like humans and make business-critical judgments autonomously. They can't. Production agent systems are converging on constrained, human-supervised workflows. Not because we're not AI-native enough, but because that's what actually works.

What's your experience?

u/PrestigiousPear8223 — 2 days ago

Made a cool AI video with VideoInu and had to share it

Been trying different AI video tools lately, and I randomly made this clip with VideoInu today.

Honestly didn’t expect it to turn out this fun. The motion looked smoother than I thought, and the whole vibe came out way better than what I had in mind.

Still kind of crazy that you can type an idea and get something like this in minutes.

Sharing it here because I thought some of you might enjoy it too. Curious what kind of stuff everyone else is making with AI video tools lately?

u/PrestigiousPear8223 — 4 days ago