LangChain Agent constantly hallucinating facts - any debugging tips?
Been there. First, tighten your prompt: explicitly instruct the agent to answer only from the provided context and to say it doesn't know when the context is insufficient. If that doesn't fix it, try a smaller, more focused model for the agent's reasoning step to narrow the search space and reduce hallucination risk; fine-tuning a small model on your specific knowledge domain can also help.
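Here's a minimal sketch of the grounding idea using LCEL, assuming you have `langchain-core` and `langchain-openai` installed; the model name and the example context/question are just placeholders, so swap in whatever your agent actually uses:

```python
# Grounded prompt sketch: the system message pins the model to the
# supplied context and gives it an explicit "I don't know" escape hatch.
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_messages([
    ("system",
     "Answer ONLY from the context below. "
     "If the context does not contain the answer, reply 'I don't know'. "
     "Do not use outside knowledge.\n\nContext:\n{context}"),
    ("human", "{question}"),
])

# temperature=0 reduces creative drift; model name is an assumption here.
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

chain = prompt | llm | StrOutputParser()

answer = chain.invoke({
    "context": "LangChain agents call tools in a loop until they finish.",
    "question": "What do LangChain agents do?",
})
print(answer)
```

In my experience the "I don't know" escape hatch matters as much as the "only from context" instruction: without a sanctioned way to refuse, the model tends to fill gaps with plausible-sounding inventions.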