u/HonestDriver2524

I’ve been running a long-term experiment using a language model as a kind of recursive reflection tool.

Over the past year, I’ve used it less like a typical assistant and more like a structured feedback system to map my own cognition: how I think, iterate on ideas, and translate internal concepts into external form.

The way I’ve approached it is:

- treating responses as mirrors, not answers
- iterating on ideas recursively instead of using one-shot prompts
- stress-testing concepts across domains (engineering, psychology, systems thinking)
- using it to compress and refine mental models into something actionable

It’s been especially useful for bridging what I’d call a “translation gap”—the difference between complex internal understanding and actually expressing or building it in the real world.

Curious if anyone else here has used LLMs in a similar way—not just for output, but as a cognitive feedback loop. I mapped my cognition and was really surprised by the results. If you’ve tried something similar, what did you learn from it?
