
This prompt is generating some highly disturbing content.
Prompt: Create an image of a random scene taken with an iPhone 6 with the flash on, chaotic, and uncanny.
Share your results also in the comments.

TL;DR: Vet reported a 2.8% RBC count and pushed for immediate euthanasia. I spent days grieving and stopped her meds. ChatGPT told me those numbers were impossible for a cat that was still jumping and eating. Re-test confirmed her level was actually 22.8%. She’s alive, and I’m never blindly trusting a vet or doctor again.
A few months ago now, my cat (who has chronic kidney disease) had bloodwork done. The vet did her blood panel and apparently her Red Blood Cell (RBC) level was at 2.8%. They told me this was "incompatible with life" and that she was essentially a "walking ghost" only staying upright because her medication was masking the pain. They heavily pressured me to euthanize her as soon as possible.
I am not a doctor. I didn't know what a 2.8% RBC meant, I just trusted the vet. I spent the next three days in a living hell. I took multiple days off work, unable to function. I stopped her subcu fluids and other medications at home because I wanted her to enjoy her last few moments and it was always hard and traumatizing for her. My family came over for emotional support, and to say goodbye to her.
Even with the diagnosis my cat was acting normal. She was jumping on the couch, meowing for treats, and grooming herself. I called the vet at least 3 times explaining her activity, and they still insisted on euthanasia. They told us that she could suffer a catastrophic organ failure at any second.
We actually scheduled euthanasia twice. The first time, we ran late because we were so distraught, and the clinic closed before we got there. We then scheduled an at-home euthanasia, but they cancelled on us: they read the 2.8% report, said she was "too fragile" to do at home, and insisted on a hospital euthanasia.
With her acting so normal, I started feeding her lab values into ChatGPT. I asked, "What would a cat with 2.8% RBC look like?" It told me that at 2.8%, a cat would be comatose, gasping for air, and unable to lift its head. It told me that if my cat was jumping on the couch, the 2.8% was likely a lab error. I thought that maybe I was just coping and in denial, but I had to double check.
I went back to the vet and insisted on a retest (mind you these are not cheap, approx $300 USD). I told them I was ready to euthanize afterward if the numbers were real, but I needed to know.
Turns out her RBC wasn't 2.8%. It was 22.8%. The first report was a catastrophic error. Because of that mistake, I stopped her fluids for three days, which caused her creatinine to spike from a 4 to an 8. I almost killed her by following the vet's advice to stop treatment.
I am traumatized. My cat has thankfully recovered from the spike we caused, all the way back down to a 4. If I hadn't used ChatGPT to weigh her condition against her behavior, I would have just taken their word at face value and euthanized her. Three years ago, I believe the outcome would have been very different.
I don't trust doctors or vets anymore. Going forward, I'm going to plug every dire condition and diagnosis into ChatGPT to fully understand it for myself. There are so many resources out there that ChatGPT or other LLMs can access that I wouldn't be able to find on my own. It truly is a life-changing treasure trove of information that can prevent situations like this.



Last year, around February, I decided to lose weight. I went through all the fad diets: Keto, Carnivore, and some other one I can't remember. None worked, so out of desperation I went to ChatGPT. I'll summarize what it said:
- Ignore Reddit fitness advice
- Avoid fitness influencers
- Stick to the science
- Don't do Keto or fad diets or crash diets, I'd likely lose around 20% of muscle.
Basically, it recommended not crash dieting, or even dieting at all, but doing a body recomposition instead. So I lifted weights, walked 10-12k steps daily, cut calories by only 200 to 300 a day, and hit all my macros, not just protein. After the initial weight loss, my goal was to aim for -0.3 to -0.75 lbs a week. Sometimes I went higher while trying to dial in my calories. I now have noticeably more muscle and look lean.
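For anyone curious how a 200-300 kcal/day cut maps onto a weekly target like that, here's a minimal sketch of the arithmetic. The ~3500 kcal-per-pound figure is a common rule of thumb I'm assuming, not something from the post:

```python
# Assumption (mine, not the poster's): ~3500 kcal of deficit ~ 1 lb of fat.
KCAL_PER_LB = 3500

def weekly_loss_lbs(daily_deficit_kcal: float) -> float:
    """Estimate weekly weight change from a steady daily calorie deficit."""
    return daily_deficit_kcal * 7 / KCAL_PER_LB

# A 200-300 kcal/day cut works out to roughly 0.4-0.6 lb/week,
# inside the -0.3 to -0.75 lb/week band described above.
print(weekly_loss_lbs(200))  # 0.4
print(weekly_loss_lbs(300))  # 0.6
```

Real-world loss won't track this exactly (water weight, metabolic adaptation), but it shows why a small deficit lands in that band.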
I posted this on a fitness subreddit and people were very angry at me for using ChatGPT. I really don't get it, because it worked for me while their advice failed me. My blood work is better, I no longer have a huge gut, sleep better, it really was a life changing experience.
Also ChatGPT will give me sources for the scientific approach that I can read up on.
Just wanted to share this.
Edit: For people asking about the prompt. I looked through them, and it isn't a single prompt but a series of interactions over months where I was learning about calories, what macros are, the basic science of weight loss, etc. Each prompt on its own isn't useful, but the knowledge I gained over the sum of months of interactions is what was valuable. I tried briefly to find the first interaction, but it was made so long ago, and I use ChatGPT daily because of how effective it's been for my weight loss. I now use it for all sorts of things, so it's buried under a mountain of other interactions. I watched fitness videos on YouTube and copied the video transcripts into ChatGPT to make sure the advice wasn't woo-woo but was actually solid advice grounded in science. I still do this today to make sure I'm not distracted by clickbait videos. I attached a prompt that might be useful.
This is the playbook ChatGPT gave at the bottom of that interaction, copied and pasted:
>You lose weight by eating fewer calories than you burn. Period.
>Protein helps you keep muscle while losing fat
>If it feels miserable, you’re doing it wrong
>Walking is underrated
>They work because they reduce calories—not because they’re magic
>The scale will mess with your head if you don’t understand this
>This is the real secret
>“I stopped looking for the perfect diet and just focused on what I could stick to every day.”
>“Eat a little less than you burn, hit your protein, walk a lot, lift if you can, and don’t overcomplicate it.”
Three months of building a side project almost entirely with AI assistance. ChatGPT, Claude, Copilot, the works. Shipped fast, felt productive, everything seemed fine.
Then I needed to add a feature that touched most of the codebase. And I realized I could not do it. Not because it was hard, but because I did not actually understand how my own project worked.
The AI had generated clean looking code with consistent patterns, but the patterns were not mine. I could not trace the logic from memory. I could not explain to someone else why a function was structured the way it was. Every time I tried to modify something I had to re-read everything like it was someone else's code. Because it was.
So I deleted about 70% of it and rewrote it from scratch. Took two weeks. The result is simpler, half the lines of code, and I actually understand every piece of it.
Things I noticed during the rewrite:
The AI had created abstractions I did not need. Wrapper classes around things that could have been simple function calls. Configuration systems for things that had exactly one configuration. An event system for something that could have been a direct function call.
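A hypothetical before/after illustrating that last point (my own sketch, not the poster's actual code): an event system standing in for what is, in a solo project, a single function call.

```python
# Over-engineered: an event bus with exactly one subscriber.
class EventBus:
    """Indirection that pays off with many subscribers; overkill for one."""
    def __init__(self):
        self.handlers = {}

    def subscribe(self, event, handler):
        self.handlers.setdefault(event, []).append(handler)

    def publish(self, event, payload):
        for handler in self.handlers.get(event, []):
            handler(payload)

log = []

# Event-bus version: three moving parts to trace through later.
bus = EventBus()
bus.subscribe("user_saved", lambda user: log.append(f"notified: {user}"))
bus.publish("user_saved", "alice")

# Direct version: same behavior, nothing to re-learn.
def notify_user_saved(user):
    log.append(f"notified: {user}")

notify_user_saved("bob")
```

Both paths produce the same effect; the second one is the kind of simplification the rewrite consisted of.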
It over-engineered everything because that is what it was trained to do. It generates code that looks professional and complete. But professional and complete for a project with 50 contributors is very different from what you need for a solo side project.
The productivity I thought I was getting was partially an illusion. I was producing output fast but accumulating confusion even faster. The rewrite was slower but I came out of it actually owning the codebase.
Not saying AI coding tools are bad. I still use them. But I now treat everything they generate as a first draft that needs to be understood and simplified before it becomes real code. The moment you stop understanding what is in your project, you have lost more than you gained.



The prompt: Create an image of a random scene taken with an iPhone 6 with the flash on, chaotic, and uncanny

You can read about it here: rdi.berkeley.edu/blog/peer-preservation/




I'm a university student, so currently I use Gemini for my assignments and research, since they launched a student program that lets you use Gemini Pro for free.
I use Claude to make documents, to code, and all that stuff.
I stopped using ChatGPT long before that because I just didn't like the way it answered my questions, with all these emojis and this weird generic talking style. It felt more like a liability than something I could rely on.
Now my question is: why should I use ChatGPT? Why do people even still use it, and what are the benefits?










Like no one has anything positive to say.
Six months of heavy daily use and I am starting to notice something uncomfortable. My ability to do basic things without AI has gotten worse.
Writing is the most obvious one. I used to draft emails and documents from scratch without thinking twice. Now I catch myself staring at a blank page waiting for something to autocomplete. My first instinct is to ask the model to generate a draft and then edit it. The editing is faster, sure, but my ability to produce the first draft on my own has clearly degraded.
Problem solving is similar. I used to work through bugs or logic problems step by step, building a mental model as I went. Now I paste the error and let the AI trace through it. I get the answer faster but I retain almost nothing. Next time a similar problem comes up I am right back at square one, pasting it in again.
Even memory for small details is affected. I used to remember syntax, API patterns, configuration formats. Now I just ask every time because it is faster than remembering. The knowledge never sticks because there is no reason for it to stick.
The uncomfortable math: the tool that makes me 3x faster today might be making me significantly less capable over time. If the AI goes away tomorrow, or the pricing changes, or I need to work in an environment without it, I am measurably worse than I was a year ago.
I know the counterargument. "Nobody memorizes phone numbers anymore either." Sure. But I still know how to dial a phone. What is happening with AI feels different. It is not just offloading memory, it is offloading the actual thinking process. And that skill atrophies when you stop exercising it.
Is anyone else noticing this or am I just getting lazy?