u/OReilly_Learning

O'Reilly Radar - Burnout and Cognitive Debt

Cognitive debt and burnout aren’t new, alas. With or without AI, we’ve all stayed up until 4 a.m. working on a bug that won’t go away or pursuing an interesting idea to its end. Sometimes that’s heroic, but AI threatens to turn it into a lifestyle. AI fatigue is real, as Siddhant Khare writes, and it’s something we need to talk about. When fatigued, it’s tempting to say “this works, it looks good, and it passes our tests” without considering how the code fits into the overall plan. With 10x code generation, you also get 10x the debt load, and that’s being optimistic. When the debt curve goes exponential, strategies for managing that debt are stressed past the breaking point.

u/OReilly_Learning — 1 day ago
▲ 254 r/OReilly_Learning+1 crossposts

What’s Your Most Controversial IT Opinion?

Fellow sysadmins, what’s your biggest unpopular IT opinion? Not the usual “users should reboot first” stuff, but the things you’ve learned after a few years in the trenches that you probably wouldn’t say too loudly in a meeting.

reddit.com
u/OReilly_Learning — 1 day ago

Neal Ford and Sam Newman Discuss Agentic AI

Speaking at our recent “Software Architecture Superstream,” Sam Newman and Neal Ford make the case that modular programming principles from the 1960s are exactly the right framework for bounding AI agents and verifying their output.

youtube.com
u/OReilly_Learning — 7 days ago
▲ 27 r/webdevelopment+1 crossposts

A customer asked why ChatGPT was saying our product doesn’t support subscriptions. We’ve had subscriptions live for over a year, so that didn’t make any sense.

I tried it myself and got the same answer.

So I dug a bit deeper and hit our pricing page using GPTBot as the user agent. The response looked… fine at first glance. Layout, nav, footer, all there. But the actual content was basically empty divs where React would normally hydrate.

So yeah, the bots weren’t seeing our product. They were seeing a skeleton of it.

Checked a few others too: Perplexity was messing up our pricing, Claude was missing entire parts of the product. Every AI had a slightly different wrong version.

We ended up doing something pretty simple in hindsight.

Instead of trying to make bots understand our HTML, we just gave them a format they’re better at reading.

Now every page has a markdown version alongside it. Same content, just clean, structured, no JS needed. At build time we generate both /page and /page.md.

Then at the edge, we check the user agent. If it’s one of the known AI crawlers (GPTBot, ClaudeBot, etc.), we serve the markdown version. Otherwise it just goes to the normal site. It’s literally just a string match, so there’s basically no overhead.
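The string match described above is simple enough to sketch in a few lines. This is a minimal illustration, not the poster's actual edge code; the bot names and the `.md` naming convention are assumptions based on the post:

```python
# Known AI crawler user-agent substrings (illustrative, not exhaustive;
# the real list should track what actually shows up in your logs).
AI_CRAWLERS = ("GPTBot", "ClaudeBot", "PerplexityBot", "CCBot")

def resolve_path(path: str, user_agent: str) -> str:
    """Return the markdown twin for known AI crawlers, the normal page otherwise."""
    if any(bot in user_agent for bot in AI_CRAWLERS):
        # /pricing -> /pricing.md; the twin was generated at build time.
        return path + ".md"
    return path
```

Because it is a plain substring check against a short tuple, the per-request cost is negligible, which matches the "basically no overhead" claim.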

One small thing made a surprisingly big difference: we normalized a lot of text to plain ASCII. Stuff like ₹ symbols, fancy quotes, em dashes. Models were weirdly inconsistent with those, but something like “INR 15000” gets reproduced correctly every time.
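A minimal sketch of that normalization step, assuming a hand-built replacement table (the specific mappings here are illustrative; extend the table with whatever characters your models trip on):

```python
# Map troublesome non-ASCII characters to plain-ASCII equivalents.
REPLACEMENTS = {
    "\u20b9": "INR ",                # rupee sign
    "\u2018": "'", "\u2019": "'",    # curly single quotes
    "\u201c": '"', "\u201d": '"',    # curly double quotes
    "\u2013": "-", "\u2014": " - ",  # en and em dashes
    "\u00a0": " ",                   # non-breaking space
}

def to_plain_ascii(text: str) -> str:
    for src, dst in REPLACEMENTS.items():
        text = text.replace(src, dst)
    # Drop anything still outside ASCII rather than let it garble the output.
    return text.encode("ascii", "ignore").decode("ascii")
```

For example, `to_plain_ascii("\u20b915000")` yields `"INR 15000"`, the form the post says models reproduce reliably.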

We’re also logging all bot requests now, mainly to see where markdown coverage is missing. That ended up being the most useful signal.
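Assuming the logs capture the path and user agent of each bot request, finding the coverage gaps reduces to a set difference. The function and field names below are hypothetical, not from the post:

```python
def missing_markdown(bot_log, pages_with_md):
    """Paths that AI crawlers requested but that have no markdown twin yet.

    bot_log: iterable of (path, user_agent) tuples from the edge logs.
    pages_with_md: set of paths for which a .md twin was generated at build time.
    """
    requested = {path for path, _ua in bot_log}
    return sorted(requested - set(pages_with_md))
```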

We did try going the SSR route first, thinking “just render everything for bots.” It technically worked, but added latency and still sent a lot of noisy HTML. Felt like we were maintaining two systems for no real gain, so we scrapped it pretty quickly.

Right now things are a lot more stable, but one thing we’re still figuring out is redirects. Bots cache pretty aggressively, and if they hit an old URL and get a 404, that seems to stick around longer than you’d expect.

Curious if anyone else has dealt with that part, how are you handling old URLs and keeping AI crawlers in sync?

reddit.com
u/OReilly_Learning — 13 days ago

Velocity is table stakes. Code is a commodity. Understanding is the edge.

“I was talking to a senior engineer at a well-funded company not long ago. I asked him to walk me through a critical algorithm at the heart of their product, something that ran hundreds of times a second and directly affected customer outcomes. He paused and said, ‘Honestly, I’m not totally sure how it works. AI wrote it.’

A few weeks later, a different engineer at another company was paged about a system outage. He pulled up the failing service and realized he had no idea it was connected to a database. A colleague had accepted an AI-generated PR three months earlier that added that dependency. The tests passed. The change was never written down. The original engineer had moved on, and the knowledge was lost.”

u/OReilly_Learning — 14 days ago
▲ 210 r/OReilly_Learning+2 crossposts

Google released two early-release chapters from the SRE Book 2nd Edition this week.

One is the new "AI for SRE" chapter. It's an O'Reilly publication behind a paywall, but a free trial works. I read it last night; sharing the takeaways for anyone who doesn't want to read the full thing.

The condensed version:

  1. AI is not a human replacement. The book is firm on this. We still need humans for the high-stakes calls and to maintain the AI itself.
  2. Don't give AI full access on day one. Build trust the way you would with a junior engineer. Let it suggest fixes first, fix small issues next, only then expand its scope.
  3. If the agent can take an action, it must have a rollback. If there is no undo path, the access should not be granted. This is the line I think most teams shipping agents are skipping right now.
  4. When the agent fails or gives a bad suggestion, flag it. The chapter leans on the same principle as good postmortem culture: more feedback and more context mean better future execution.
  5. During incidents, the time-saver is not the fix, it is the searching. The chapter frames the agent as the thing that finds the right answer fast across tabs, runbooks, and prior incidents, instead of the thing that pushes the fix.
  6. Dashboards tell you something is broken. AI is positioned as the layer that tells you why, by reading the tickets and the user feedback that the dashboards do not capture.
  7. The framing that stuck with me most: AI does not reduce SRE workload, it raises the reliability ceiling. Cheaper reliability does not mean less work, it means higher reliability demanded across more services. Jevons paradox applied to ops.
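Point 3 above can be enforced structurally rather than by convention. A hedged sketch of the idea (the class and names are my own, not from the chapter): refuse to register any agent action that does not ship with an undo path.

```python
class ActionRegistry:
    """Agent actions are only granted if they come with a rollback."""

    def __init__(self):
        self._actions = {}

    def register(self, name, apply_fn, rollback_fn):
        # No undo path means no access, per the chapter's rule.
        if rollback_fn is None:
            raise ValueError(f"refusing to register {name!r}: no rollback path")
        self._actions[name] = (apply_fn, rollback_fn)

    def run(self, name, *args):
        apply_fn, rollback_fn = self._actions[name]
        try:
            return apply_fn(*args)
        except Exception:
            rollback_fn(*args)  # undo on failure, then surface the error
            raise
```

The point is that the gate lives at registration time: an agent capability without a rollback never becomes invocable in the first place.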

What I would add as a practitioner: the 5-level maturity model they propose is useful, but the gating criteria between levels are where the real engineering lives. "Agent suggested 50 fixes, 47 were good" sounds great until you ask which 3 were wrong and what they would have broken. Most teams I see skipping straight to autonomous remediation are not doing that work.

Worth a read if you are scoping AI in operations in the next year.

(Disclosure: I run Sherlocks, which builds in this space. This is not a pitch for it.)

reddit.com
u/OReilly_Learning — 3 days ago

In this episode

Martin Kleppmann is a researcher and the author of O’Reilly’s “Designing Data-Intensive Applications,” one of the most influential books on modern distributed systems. As of this month, the second, heavily updated edition of the book is out.

In this episode of Pragmatic Engineer, we discuss Martin’s career in tech building startups, how he ended up writing this iconic book, and what he’s focused on these days after moving from industry into academia.

We talk about the tradeoffs behind modern infrastructure, how the cloud has changed what it means to scale, and the thinking behind Designing Data-Intensive Applications, including what’s changing in the second edition.

u/OReilly_Learning — 16 days ago

Hey everyone! I'm u/marsee, a founding moderator of r/OReilly_Learning. You'll also see me posting as u/OReilly_Learning — that's our main account, and I'll be running it most of the time, though occasionally we'll hand it over to guest moderators and special visitors. This is our new home for all things related to tech learning, professional development, and the ever-evolving world of software, AI, data, and beyond. We're excited to have you join us!

What to Post

Post anything that you think the community would find interesting, helpful, or inspiring. Feel free to share your thoughts, questions, wins, and struggles around topics like programming, career growth, emerging tech, certifications, learning resources, and industry trends. Not sure if something fits? Post it anyway — curiosity is always welcome here.

Community Vibe

We're all about being friendly, constructive, and inclusive. This is a space for learners at every stage — from first-timers writing Hello World to seasoned engineers navigating what's next. Let's build a community where everyone feels comfortable sharing, asking, and connecting. That goes for us too — we're here as fellow learners and tech enthusiasts, not just as moderators. Say hi, push back, ask us anything.

A Little About Us

O'Reilly has been teaching people tech for over 40 years — through books (with those 🐪🦒🦏🐍s on the covers), videos, live events, hands-on labs, and online learning. We've grown and changed a lot over the decades, but the mission has always been the same: spread the knowledge of innovators. This subreddit is the latest chapter in that story, and honestly, we're just as excited to see where it goes as you are.

Community Partners

We also have a community partner program for organizations, meet-up groups, Discord servers, and other communities that share our passion for tech education. If that sounds like something you or your organization might be interested in, you can learn more and apply at https://www.oreilly.com/partner/signup.csp.

How to Get Started

  1. Introduce yourself in the comments below — tell us what you're learning or what brought you here.
  2. Posting a simple question can spark a great conversation.
  3. If you know someone who would love this community, invite them to join.

Thanks for being part of the very first wave. We're in this together — let's make r/OReilly_Learning something really special.

u/OReilly_Learning — 19 days ago

...(G)enerative AI is woven into the tools students use every day: web search, word processors, code editors. You might assume that by now, most programming instructors have figured out how to handle it. But when my collaborators and I went looking for computing instructors who had made meaningful changes to their course materials in response to GenAI, we were surprised by how few we found. Many instructors had updated their course policies, but far fewer had actually redesigned assignments, assessments, or how they teach.

reddit.com
u/OReilly_Learning — 19 days ago
▲ 97 r/OReilly_Learning+1 crossposts

Hi Reddit! I'm Louise Macfadyen — most recently a product designer at Microsoft, before that Google. My book Designing AI Interfaces (O'Reilly) comes out tomorrow, and I wanted to do this AMA partly because design can feel like a locked door if you're vibe coding without a design background, and I don't think it should. I'm self-taught as both a designer and developer, so I know the feeling of working in design as a non-expert, and I've come to believe good design is more of a method than a talent.

Design is proving a major friction point for vibe-coded tools in general. As software gets cheap to make, looking credible becomes even more important - and more expensive. Additionally, traditional product development used to force design decisions on you whether you thought of them as design or not. For example, funding pitches forced clarity on user and brand, and hiring forced prioritization and a primary path. Vibe coding goes idea → prompt → build → ship, so neither of those forcing functions fires, and you end up with products that have no clear user, no obvious primary path, and no particular reason to look like anything.

You can see this in three recurring symptoms: designing for everyone (which is really designing for no one), the printer problem (every function surfaced with equal weight, no sequence designed through them), and visual sameness: the white-background, Inter, card-layout, blue-button look that increasingly reads as "this was generated, probably won't be here next month."

The solution around this is multifaceted, but worth learning to ask yourself the questions the old process used to ask. Here's the staged version I've been using:

Stage 1 — Plot yourself on two axes. Personal ↔ shared, disposable ↔ maintained. That lands you in one of three situations: a jig (you're the user), an internal tool (colleagues), or a consumer product (strangers). A lot of frustration comes from treating a product as though it's in a category it isn't.

Stage 2 — Focus on what the category actually needs. For a jig: zero visual investment, one path, let it be disposable. Polishing a jig steals time from the problem it exists for. For an internal tool: the primary action should be unmistakable on the first screen, label functions by what they do for the user ("refund this order," not "trigger refund endpoint"), hierarchy before polish. For a consumer product: move off the vibe-coded default before you start prompting, design the first minute deliberately, and don't skip the cliff — error states, onboarding, data policies, account recovery. That's what separates a product from a prototype.

Stage 3 — Audit your users and yourself honestly. Who exactly is this for, what's the one thing they must be able to do, and what breaks their trust when things go wrong? Then on yourself: how much taste are you bringing, and how much sustained care are you willing to invest? If the answer is "not much," stay in jig territory. That's a valid choice.

Stage 4 — Move away from the middle. The reason so many vibe-coded products look the same is Tailwind. Models have absorbed an enormous amount of Tailwind and its component libraries, so that's what they produce by default. Instead of asking "how do I improve this design," imagine your product pulled in four opposing directions: more refined (Stripe, Linear, Notion), more raw (Hacker News, early Are.na, brutalist web), more personal (Glossier, Poolside, indie zines), more specialized (Bloomberg terminal, Figma panels, DAWs). Then prompt your tool to render the same screen in a specific direction and compare side by side — that's where you actually see what's drifting. dat.GUI is worth knowing about; it lets you toggle fonts, palettes, and spacing on a live project without rebuilding.

Stage 5 — Build a reference library. A few I keep coming back to:

  • Mobbin — the industry standard for mobile UI screenshots, searchable by flow and by pattern.
  • Before.click — mobile app case studies, easy to lift specific patterns from and iterate on.
  • Pageflows.com — full UX flows, especially strong for onboarding and interaction patterns.
  • Cosmos.so — something like Pinterest for designers, with the useful detail that you can filter by color.
  • AIverse.design — AI-specific interaction patterns collected in one place.
  • Godly.website — curated web design that sits well outside the SaaS default.
  • Land-book — another curated directory, heavier on brand-driven sites.
  • Typewolf — if you're making any typographic decisions at all, this is the place to start.
  • Refactoring UI — practical visual improvements, written squarely for people without design training.

And a few Claude skills / agent tools for design-adjacent work:

  • Impeccable — a comprehensive package of design fluency for AI harnesses, aimed at polish, motion, and delight rather than just accessibility.
  • Theme Factory — an Anthropic Claude skill that generates cohesive color systems and styling foundations from a prompt.
  • Frontend Slides — a Claude skill for presentation-style UI with layout and hierarchy built in.
  • Vercel Agent Skills — their web-design-guidelines skill is useful for reviewing UI, UX, and accessibility against established best practices.

These aren't replacements for judgment — they handle the things people rush or skip (consistency, hierarchy, accessibility). Chain a few and you've got a lightweight design pipeline.

Ask me anything!

reddit.com
u/OReilly_Learning — 17 days ago
▲ 111 r/OReilly_Learning+1 crossposts

How is everyone's work going these days? Are you still writing code the old-fashioned way every day? Or have you all started letting AI do the work?

reddit.com
u/Focus-Novel — 23 days ago