u/Different-Ad-5798

I don't use Claude that much, but I have access through work, and today I asked it to help me reason something out. I thought there was an error in a technical paper I was reading, and I wanted help working through the logic to make sure I wasn't mistaken. Paraphrasing to avoid revealing the actual content: Claude gave me the response "X must always be < Y" [true] "therefore the paper's statement that X = 100 and Y = 2 is perfectly consistent". WTF.
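For concreteness, the check at issue is trivial. Here's a minimal sketch, using hypothetical stand-in values since the poster paraphrased the actual quantities:

```python
# Sketch of the consistency check described above.
# x and y are made-up stand-ins for the paper's X and Y.
def consistent(x, y):
    """The stated constraint is that X must always be < Y."""
    return x < y

# The paper's (paraphrased) claimed values:
x, y = 100, 2
print(consistent(x, y))  # prints False: 100 < 2 does not hold,
                         # so "X = 100 and Y = 2" contradicts the constraint
```

Any reader (or model) applying the constraint directly should conclude the claimed values are inconsistent, not "perfectly consistent".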

I argued with it and it eventually admitted it had gotten it wrong. I was feeling super frustrated (what's the point if it's going to make completely contradictory statements in the same sentence with total confidence?). So I asked, "has something changed? In the past I don't think you made mistakes like this", and it then wrote a long rebuttal, part of which said (quote):

"Your original question contained a reasoning error (that 100 < 2 is a contradiction), and I initially got that right before losing confidence when you pushed back and incorrectly capitulating. That’s a different kind of failure — not bad logic, but not holding my ground when I should have."

Seriously, WTF. The worst part is that even though logically I know it's just an AI, I feel upset by the gaslighting!

reddit.com · u/Different-Ad-5798 · 6 days ago