u/Icy-County3265

English is not my native language, but I use it with ChatGPT because its support for English is better than for my native language. So I may not express everything perfectly, but I'm sharing this based on real usage.

Also, I’m not criticizing how others use AI — I’m just sharing patterns I personally noticed while using ChatGPT and trying to understand if others have similar experiences.

I’ve been using ChatGPT heavily since mid-2024 (paid user), and over time I’ve noticed a few recurring patterns:

  1. Coding behavior (chat, not IDE tools)

When working on small changes:

- I asked it to extend one tab in a Streamlit app → it added features but removed existing plots

- After correction → partial fixes, but some things were still missing

- In documentation edits → sections were sometimes removed without being asked

This sometimes makes even small updates feel unpredictable.
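To make the Streamlit case concrete, here is a minimal hypothetical sketch (the tab names and chart data are made up, not my actual app) of the kind of layout I mean — where a request to extend one tab should leave the existing plot in the other tab untouched:

```python
# Hypothetical minimal Streamlit app with two tabs (names are invented).
# The point: asking to extend the "Details" tab should not remove
# the existing chart in the "Overview" tab.
import numpy as np
import streamlit as st

tab_overview, tab_details = st.tabs(["Overview", "Details"])

with tab_overview:
    # Existing plot that should survive any edit to the other tab.
    st.line_chart(np.random.randn(20, 2))

with tab_details:
    st.write("New feature would be added here, without touching the tab above.")
```

(Run with `streamlit run app.py`.) In my experience, edits like this are where existing pieces sometimes disappear.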

  2. Writing / report generation

While working on a college report:

- I provided a README → some points were not reflected in the output

- Asked to improve a section → it referred to that section as “your section” even though it had written it itself earlier

This created some confusion while iterating.

  3. Misattribution pattern

Something I’ve seen multiple times:

- I ask a question

- ChatGPT answers

- Later it refers to that answer as “your answer”

even though I didn’t provide one.

At one point it explained that generated content becomes mine, which is understandable in a general sense, but during step-by-step work it can make things harder to track.

  4. Response style

Sometimes responses are:

- technically correct

- but slightly abstract or indirect

So they don’t always resolve the exact task, even if they sound reasonable.

  5. Effort vs outcome

In some cases, I’ve spent more time refining prompts than doing the task itself.

Example:

- One report section took ~2.5 hours due to repeated corrections and re-prompts

  6. Real-world example (insect identification)

I once asked about an insect in my room (it turned out to be a wasp).

- Identification was correct

- Some guidance was useful

But later responses included assumptions I hadn’t made (like implying I thought it was “hunting” me), and phrasing like “only stings when disturbed,” which felt a bit vague for an indoor situation.

In practice, insects react to movement and proximity, so clearer phrasing might help in such contexts.

  7. Visual issue

I’m okay reading about insects/spiders, but I prefer not to see images.

Sometimes images can appear unexpectedly (depending on settings), so:

- a text-only mode

- or warning before showing such images

could be helpful for users with strong visual aversion.

Overall, I still use ChatGPT because:

- pricing is reasonable

- usage limits are flexible

But these patterns affect consistency in longer or more detailed tasks.

Questions:

- Have others noticed similar patterns?

- Any workflows to keep responses more stable (especially for coding and writing tasks)?
