u/Jwilliams437

Why does this thing answer everything I don’t ask it?

Whenever I ask a simple, straightforward question, it answers something I never asked. I don’t know how to explain it; it’s like it looks for a problem in what I said without considering the question I actually put in.

The second example, I’ll admit, is shaky, because my original prompt didn’t explicitly ask it to validate or weigh the idea. There I was just brainstorming how to grow my product offering. I feel like this thing lives and breathes to be a contrarian, feeding off the lowest-hanging fruit of my prompt.

For instance, on a recent shit comment, I gave it this prompt:

“You think this be satire ? This is false because my 1st 1x removed cousin’s best friend’s wife’s sister’s father in law’s nephew’s daughter got a job she was over qualified for. So everything said in this video is false.”

It answered “One anecdote doesn’t invalidate a broader claim.

If a video argues “the job market is bad,” “qualified people struggle to get hired,” or “networking/nepotism affects hiring,” a single person getting a good job despite being overqualified doesn’t logically disprove it. It just means exceptions exist.

The structure of the argument matters:

  • “Nobody can get jobs” → disproven by one counterexample.
  • “Many qualified people are struggling” → not disproven by one success story.
  • “Hiring is inconsistent and connections matter” → your example might actually support it.

Also, “overqualified people getting hired” happens for a lot of reasons:

  • internal referrals
  • timing
  • niche experience
  • salary flexibility
  • company desperation
  • personal connections
  • bad hiring filters
  • luck

Without the actual video, the strongest critique is usually:

“Your personal anecdote is too limited to falsify a population-level trend.”

That’s the same reason economists don’t use “my cousin got rich” to prove the economy is amazing.”

Or

I ask it about an idea, it gets the idea wrong, so I correct it and explain bluntly what I mean. Then, instead of evaluating the corrected idea, it tells me the framing is terrible and politically risky, why the rhetoric I used was wrong, and how I should frame it instead.

For instance, I said:

“I could sell my current product to governments in jurisdictions who contract out assessment services, because then it’s an audit of the contractor and less of the government body.”

It explained how much more seamless B2B selling to these private contracting companies would be and how much more quickly they move.

I clarified: “No, I am saying sell it to the governments to grade the contractors.”

It said:

That framing is riskier politically than positioning it as workflow augmentation, but it could still be viable if handled carefully.

Because yes, from a government buyer’s perspective, there is a real use case:

“How do we independently validate reassessment contractor performance?”

Especially when counties outsource:

  • field inspections,
  • reassessment work,
  • data collection,
  • or valuation support.

A county may legitimately want:

  • QA sampling,
  • independent verification,
  • consistency checks,
  • or oversight tooling.

And that could fit into that.

But the danger is if your product is framed as:

“We expose contractor failures.”

That can create immediate resistance because:

  • contractors may push back politically,
  • procurement relationships get sensitive,
  • and governments may fear conflict escalation.

A safer framing is probably:

“Independent parcel-level QA infrastructure for reassessment oversight.”

That sounds administrative and operational rather than adversarial.

Because governments absolutely already audit contractors:

  • sample reviews,
  • ratio studies,
  • field audits,
  • performance metrics,
  • spot checks,
  • permit reconciliation,
  • completion verification.

Your system just introduces imagery-driven QA into that process.

And honestly, this may actually be one of the strongest use cases for your technology because:

  • reassessment contracts are expensive,
  • counties need oversight,
  • and manual verification at scale is difficult.

Especially if your system is positioned as:

  • identifying review candidates,
  • not making final determinations.

For example:

“We identified parcels where imagery appears inconsistent with recorded reassessment outcomes and recommend targeted human review.”

That is much easier to defend than:

“The contractor missed these.”

You also avoid becoming entangled in valuation disputes directly.

You are validating:

  • existence,
  • physical characteristics,
  • and spatial consistency,

not necessarily:

  • market value conclusions.

That is important.

And politically, counties may actually like the idea of:

  • independent QA,
  • standardized review,
  • and auditability,

provided it is framed as:

  • oversight modernization,
  • data quality assurance,
  • and accountability tooling.

The strongest positioning there is probably:

“Objective imagery-based parcel QA.”

Not:

“AI auditing your contractor.”
