When a client wants to deploy an LLM internally but their data governance is a mess, do you take the engagement and fix the data first, or walk away?

Looking for some honest takes from people who've been in this position, because I keep seeing the same pattern and I'm not sure my firm is handling it well.

Client comes to us, usually mid-market or larger, and says some version of: "We want to deploy an internal LLM. Our competitors are doing it. The board is asking. Can you help us build a chatbot over our internal knowledge base / a copilot for our analysts / an AI assistant for our support team?"

Sounds great on paper. Then you start the discovery and find out:

  • Their "knowledge base" is 14 SharePoint sites, 3 Confluence instances from acquisitions, a shared drive nobody has cleaned since 2017, and a guy named Dave who knows everything but is retiring in 8 months.
  • Sensitive customer data is sitting in spreadsheets that anyone with a corporate login can read.
  • They have no data classification policy, or they have one on paper that everyone ignores.
  • Half their "documents" are screenshots of emails saved as PDFs.
  • Access controls are basically vibes.

So now you're standing at a fork. You can:

A) Take the engagement and quietly fix the data layer first. Bill it as "AI readiness" or "knowledge foundation work." Spend 6-9 months doing the unglamorous data hygiene, governance, and access control work nobody wants to pay for (there's a sketch of what that looks like further down). Then deploy the LLM on top of a clean foundation. The client gets a real outcome, but they're impatient and the CFO is asking why nothing has shipped yet.

B) Build the LLM anyway on top of the mess. Slap some RAG on it, ship something demo-able in 8 weeks, collect the fees. Watch it hallucinate, leak data it shouldn't have access to (sketch of that failure mode right after these options), or surface that one HR doc with everyone's salaries. Hope you're out the door before the lawsuit.

C) Walk away. Tell them they're not ready, recommend a smaller-scoped engagement, and lose the deal to the consultancy down the street that will happily do option B.
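
The "leak data it shouldn't have access to" failure in option B is almost always the retrieval layer ignoring the source system's ACLs. Here's a minimal sketch of the guardrail the option-A access-control work buys you. Everything in it is hypothetical: the keyword scorer stands in for a real vector store, and allowed_groups stands in for whatever ACLs you'd sync from the source system.

```python
# Minimal sketch of permission-aware retrieval for an internal RAG bot.
# Hypothetical names throughout; the keyword scorer is a stand-in for a
# real vector index, and allowed_groups for ACLs synced from the source.

from dataclasses import dataclass, field

@dataclass
class Doc:
    doc_id: str
    text: str
    allowed_groups: set[str] = field(default_factory=set)

CORPUS = [
    Doc("kb-001", "How to reset your VPN token", {"all-staff"}),
    Doc("hr-042", "2024 salary bands by level", {"hr", "exec"}),
]

def score(query: str, doc: Doc) -> int:
    # Stand-in relevance score: count shared words. A real system would
    # use embeddings and a vector index here.
    return len(set(query.lower().split()) & set(doc.text.lower().split()))

def retrieve(query: str, user_groups: set[str], k: int = 3) -> list[Doc]:
    # Filter on ACLs first, then rank. Ranking first and filtering later
    # also works, but filtering first means restricted text never even
    # reaches the ranking layer, let alone the model's context window.
    visible = [d for d in CORPUS if d.allowed_groups & user_groups]
    return sorted(visible, key=lambda d: score(query, d), reverse=True)[:k]

# An analyst in "all-staff" asking about salaries gets nothing from hr-042,
# no matter how relevant the ranker thinks it is.
print([d.doc_id for d in retrieve("salary bands", {"all-staff"})])  # ['kb-001']
```

The stack doesn't matter; the order of operations does. The ACL check belongs in the retrieval path itself, not in a system prompt politely asking the model to keep salaries to itself.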

In practice my firm does some flavor of A, but the commercial pressure to start showing "AI value" within the first quarter is brutal. The clients hear "data governance work" and their eyes glaze over. They hear "we'll have a chatbot in 6 weeks" and they sign the SOW.
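
For anyone wondering what the "AI readiness" work concretely is, a lot of it is passes like the one below: walk a share, flag files that look like they hold PII but carry no classification marker. The path, the tag convention, and the regexes are all hypothetical stand-ins for whatever the client actually uses.

```python
# Minimal sketch of one "AI readiness" audit pass: find files that appear
# to contain PII but have no classification tag. Path, tag convention,
# and patterns are hypothetical placeholders.

import re
from pathlib import Path

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}
# Assumed convention: classified files carry a marker like "[INTERNAL]"
# in the filename. Most clients have no convention at all, which becomes
# the first finding of the audit.
CLASSIFICATION_TAG = re.compile(r"\[(PUBLIC|INTERNAL|CONFIDENTIAL)\]")

def audit(root: str) -> list[tuple[str, list[str]]]:
    root_path = Path(root)
    if not root_path.is_dir():
        return []
    findings = []
    for path in root_path.rglob("*.csv"):
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; worth its own line in a real report
        hits = [name for name, pat in PII_PATTERNS.items() if pat.search(text)]
        if hits and not CLASSIFICATION_TAG.search(path.name):
            findings.append((str(path), hits))
    return findings

for path, hits in audit("/mnt/shared-drive"):
    print(f"UNCLASSIFIED PII ({', '.join(hits)}): {path}")
```

None of it is clever, which is sort of the point. The 6-9 months in option A is mostly loops like this, plus the meetings where everyone argues about what the tags should mean.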

A few things I'd love to hear from this sub:

  • How are you scoping these engagements at signing time so the data foundation work is non-negotiable, not an upsell?
  • For folks at the bigger firms, are you walking away from deals where the client isn't ready, or are you taking the work and managing the risk?
  • Has anyone actually had success doing option B and not getting burned, or is that survivorship bias talking?
  • How are you handling the partner/principal pressure to "just ship something" when you know the foundation isn't there?

I genuinely think a lot of the "80% of AI projects fail" headlines trace back to this exact decision point, and we're collectively not being honest about it with clients.

u/Academic-Star-6900 — 1 day ago

As AI starts writing code, testing systems, and monitoring infrastructure, what skills will define a high-value IT professional?

AI is no longer limited to simple automation. It’s already writing code, generating test cases, monitoring infrastructure, detecting anomalies, optimizing workflows, and even assisting with architectural decisions. A lot of repetitive technical work that once required large teams is gradually becoming AI-assisted or fully automated.

That raises an interesting question about the future of IT careers.

If AI continues handling more operational and development tasks, what will actually separate a high-value IT professional from everyone else?

Will raw coding ability still matter the most, or will skills like system design, AI governance, security, critical thinking, business understanding, and decision-making become more important? Maybe the real value will shift toward people who can manage AI systems effectively rather than compete with them directly.

At the same time, companies still need humans for accountability, creativity, complex problem-solving, and understanding real business context — things AI still struggles with in unpredictable environments.

So how do you see the industry evolving over the next 5–10 years?

What skills do you think will remain truly valuable as AI becomes deeply integrated into software development and IT operations?

u/Academic-Star-6900 — 3 days ago

AI is already changing how fast products can be built.

Features that once took weeks or months can now be prototyped in days using AI-assisted coding, design generation, testing, automation, and product research tools.

Because of that, I’ve been wondering whether user expectations are about to change completely.

If companies can develop and ship faster with AI, will users start expecting:

  • constant feature releases,
  • instant bug fixes,
  • faster UI improvements,
  • and near-continuous innovation?

We already see people getting frustrated when apps feel “outdated” after just a few months.

At the same time, faster development doesn’t always mean better products.

  • Quality still matters.
  • Scaling still matters.
  • Security, testing, and user experience still take real thought.
  • And shipping too quickly can sometimes create bloated or unstable products.

So now I’m curious:

As AI drastically reduces development time, do you think users will become less patient with slow-moving products and companies?

Will speed become the new standard in tech?
Or will thoughtful execution still matter more than rapid iteration?

Interested to hear perspectives from developers, product managers, startup founders, designers, and users themselves.

u/Academic-Star-6900 — 7 days ago

With how fast AI tools are evolving, it feels like building apps is becoming less of a technical bottleneck and more of a “who can execute fastest” game. Tools like GitHub Copilot and ChatGPT are making it easier than ever to go from idea → working product without needing deep expertise in every layer of the stack.

So I keep wondering — if everyone has access to the same level of building power, what actually becomes the differentiator?

It used to be:

  • Strong engineering teams
  • Better architecture
  • Ability to ship faster than competitors

Now it feels like those advantages are shrinking.

Does differentiation shift more towards:

  • Product thinking and understanding user problems?
  • UX and design quality?
  • Distribution, branding, and marketing?
  • Or just who can iterate and adapt faster using AI itself?

Also curious about long-term defensibility. If an app can be replicated quickly with AI, does that make most products easier to copy and harder to sustain?

Would love to hear how people in startups or product teams are thinking about this. What still gives a product a real edge in an AI-first world?

u/Academic-Star-6900 — 10 days ago

I’ve been noticing how often I turn to AI for quick answers or even decisions I’d normally think through myself.

It’s efficient and convenient—but it also makes me wonder if I’m relying on it a bit too much.

If AI starts handling more of our thinking, learning, and problem-solving, how does that change the way we use our own brains? Do we become better at navigating information—or worse at independent thinking?

Curious how others see this. Where do you think the balance should be?

u/Academic-Star-6900 — 15 days ago