u/Krommander


I love how the AI's factual grounding is like Play-Doh: it can be bent repeatedly without any resistance. That example is exactly what we should show students and educators, because it illustrates both the model's missing capabilities and its sycophancy.

It is a striking reminder to ground the model in factual sources via RAG and to limit discussion to topics those sources cover. Grounding, plus explicitly prompting for pushback, are the main ways I see to use LLMs for real-world work that demands factual accuracy and coherence.
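To make that concrete, here is a minimal sketch of what "grounding plus prompting for pushback" can look like in practice. The prompt wording and the helper function are my own illustrative assumptions, not any particular tool's API; the idea is just to restrict answers to retrieved sources and explicitly license the model to disagree with the user.

```python
# Illustrative sketch: assemble a RAG-style prompt that (1) confines the
# model to the supplied sources and (2) instructs it to push back when a
# user's premise contradicts them. Prompt text is a hypothetical example.

def build_grounded_prompt(question: str, passages: list[str]) -> str:
    """Build a prompt restricting answers to numbered sources,
    with an explicit instruction to push back on false premises."""
    sources = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer ONLY from the sources below and cite them as [n]. "
        "If the sources do not cover the question, say you don't know. "
        "If the user's premise contradicts the sources, push back "
        "rather than agreeing.\n\n"
        f"Sources:\n{sources}\n\n"
        f"Question: {question}"
    )

prompt = build_grounded_prompt(
    "Doesn't water boil at 90 °C at sea level?",
    ["Water boils at 100 °C (212 °F) at standard atmospheric pressure."],
)
print(prompt)
```

The key design choice is that the pushback instruction lives in the prompt itself, so the model is told up front that disagreeing with the user is acceptable; without it, sycophancy tends to win.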

I think many of us have developed skills for working around these issues. What works for you?

u/Krommander — 8 days ago