u/Zestyclose_Bell7668

Even Blender Guru is using AI for 3D assets now. After 5 years in the industry, I think we're past the "gimmick" phase.

https://x.com/andrewpprice/status/2045494026342682767

I was scrolling Twitter and saw Andrew Price (Blender Guru) posting about using an AI tool (I think it was Tripo) to generate a base 3D model instead of modeling it from scratch.

For those who don't know, he's the guy who taught half the internet how to make a 3D donut in Blender. To see someone with his level of manual modeling skill casually dropping AI into his workflow on Twitter is wild.

He basically dropped an image in, got a textured mesh out in seconds, and then just cleaned it up.

Is the 3D generation stack finally production-ready, or is this just for quick concepts?

reddit.com
u/Zestyclose_Bell7668 — 1 day ago
▲ 0 r/webdev

From AI coding to AI companies? After 18 months of production pain...

I've been building agentic systems since the AutoGPT hype train left the station in 2023. I've shipped multi-agent setups using everything from early MetaGPT (now Atoms AI) experiments to Devin pilots for enterprise clients. I need to get something off my chest that the demo videos won't tell you.

Lego Brick Agent Assembly

The pitch sounds beautiful: buy a PM agent from Vendor A, an architect agent from Vendor B, wire them together with some JSON schema, and boom, you have a software team.

In reality, role boundaries are porous. When I tested Atoms AI on a real fintech project, the Product Manager agent kept making technical implementation decisions that belonged to the Architect agent. The handoff between them looked clean in the diagram, but the actual context transfer was lossy as hell. The PM would say "implement a secure payment flow," the Architect would interpret that as "add basic SSL," and what the PM actually meant was "implement PCI-DSS compliant tokenization."
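A minimal sketch of why that handoff is lossy (hypothetical agent stubs, not any vendor's actual API): the only thing that crosses the boundary between the two agents is the serialized spec, so any intent the PM agent doesn't write down is simply gone by the time the Architect reads it.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical stand-in for the spec schema both vendors agree on.
@dataclass
class Spec:
    requirement: str  # what the PM agent wrote down
    # Note: there is no field for *intent*. "Secure payment flow" could
    # mean PCI-DSS tokenization or just TLS; the schema can't express which.

def pm_agent() -> str:
    """PM agent emits a spec as a serializable artifact."""
    spec = Spec(requirement="implement a secure payment flow")
    return json.dumps(asdict(spec))  # this string is ALL the architect gets

def architect_agent(wire_payload: str) -> str:
    """Architect agent sees only the text, not the PM's context."""
    spec = Spec(**json.loads(wire_payload))
    # It fills the gap with the cheapest plausible interpretation:
    if "secure" in spec.requirement:
        return "add basic SSL"  # while the PM meant PCI-DSS tokenization
    return "no security work"

print(architect_agent(pm_agent()))
```

The JSON round-trip succeeds perfectly, which is the point: the contract validates while the meaning is lost.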

This isn't a prompt engineering problem. It's a fundamental mismatch between how we think about software roles and how knowledge actually flows in engineering.

"Information Just Flows Between Agents"

We assume that if Agent A outputs a spec document and Agent B reads it, information has transferred. It hasn't. What's transferred is text, not understanding.

I ran a controlled test with a multi-agent system handling a codebase migration. The first agent analyzed the legacy monolith and produced a comprehensive migration plan. The second agent executed it. 47% of the refactored services broke in staging because the second agent missed critical implicit dependencies that the first agent had identified but described poorly.

The gap isn't in the format. It's in the lossy compression of complex technical context into serializable artifacts. Real engineering knowledge lives in the gaps between documentation, in the "why didn't we do it the other way" conversations, in the scars from previous outages, in the assumptions that senior engineers carry but never write down.

Devin's 13.86% success rate on SWE-bench isn't a fluke. It's what happens when you ask an agent to bridge that gap without the shared organizational memory that makes human teams function.

"This Actually Creates Business Value"

Autonomy without accountability is worthless. I watched a client spend $15K on Devin credits for an "autonomous feature implementation." Devin generated code for 6 hours and produced something that technically compiled, but missed the actual business requirement (the feature needed to handle a specific edge case for enterprise customers). A junior dev would've caught this in a 5-minute requirements clarification meeting.

The virtual company model optimizes for activity (agents doing things) rather than outcomes (business problems solved). It's expensive, computationally intensive theater.

What Actually Works

After burning through budget on autonomous multi-agent orchestration, the setups that actually made it to production had these boring characteristics:

  1. Human-in-the-loop by design as the primary control mechanism. 68% of production agent systems limit agents to 10 steps or fewer, and 80% use structured control flow where humans draw the workflow. Current agents are tireless interns with good reading comprehension, not autonomous problem-solvers.
  2. Precision over context. We stopped trying to shove entire codebases into context windows and started investing in retrieval systems that surface exactly what the agent needs. The arms race for 1M+ token windows is a distraction. Context rot is real; more tokens often just mean more noise.

The Industry is Pivoting, But Nobody's Saying It Loudly

Look at the shift from 2023 to now:

  • AutoGPT went from recursive goal achievement to a framework for structured workflows
  • Devin pivoted from "the first AI software engineer" to autonomous execution for well-defined migrations
  • Atoms AI has quietly moved away from the multi-agent "software company" narrative toward more constrained, production-ready orchestration

Everyone's retreating from the virtual company fantasy toward constrained, human-supervised automation. It's maturity. We're realizing that LLM agents aren't general intelligence. They're incredibly capable pattern matchers that need guardrails, not freedom.

My Take

If you're evaluating agent architectures for your team, run from anyone selling you AI employees that replace human judgment. Look for tools that:

  • Give you visibility into why decisions were made, not just what was done
  • Let you constrain scope easily without breaking the entire workflow
  • Integrate with your existing code review, testing, and deployment processes rather than trying to replace them

Devin, Atoms AI, AutoGPT, Claude's new agent mode: they all have legitimate use cases. But those use cases are narrower and more boring than the marketing suggests. And boring technology that ships beats exciting technology that hallucinates in production.

The virtual company multi-agent architecture assumes agents can transfer knowledge like humans and make business-critical judgments autonomously. They can't. Production agent systems are converging on constrained, human-supervised workflows. Not because we're not AI-native enough, but because that's what actually works.

What's your experience?

u/Zestyclose_Bell7668 — 2 days ago

Using AI to find and vet suppliers, is it actually reliable?

I run a small business and I've been doing everything myself: picking products, finding suppliers, and answering emails. It's a massive time sink. I decided to test some AI tools to see if they can actually handle the workload.

Right now, I use Claude to brainstorm and organize my thoughts. Then I use Accio Work to find suppliers and draft the outreach emails. Honestly, it’s been working well so far and saving me hours of manual searching.

But I’m still a bit cautious. Can I really trust these tools to handle the whole process? Or am I missing some key tips to make this more reliable?

I’d love to hear how you guys are using AI for sourcing. Any specific tricks to avoid mistakes when the agent is talking to vendors?

u/Zestyclose_Bell7668 — 3 days ago
▲ 3 r/ipad

ESR, zugu, or spigen? Any suggestions are more than welcome!

I ordered a new iPad and I'm still waiting for it to arrive. (Yeah, my very first iPad!)

So I want to get a really protective case because I’m not exactly careful with my devices... I've tried but I can't.

I searched here and on Amazon, and these three brands seem to have the best protection. Has anyone here used them? How's your experience, and which one is actually the best? Any advice or experiences are really welcome!

u/Zestyclose_Bell7668 — 4 days ago

PSA: if you need AI illustrations with actual readable words.

Played around with Figurelabs yesterday for some illustration. Unlike other tools that spit out JPEGs with alien text, this one actually outputs editable text and vector shapes, so you can manually fix the typos after it generates. Saves a lot of headache when you need a chart with actual readable labels. Good enough for quick mockups imo.

u/Zestyclose_Bell7668 — 6 days ago