Most AI platforms today store user conversations and can technically access them … whether for logging, debugging, or model improvements.
But as AI gets used for:
- business workflows
- client data
- internal tools
- personal and private use
…does that become a real issue?
What would you expect instead?
We’ve been exploring a different approach:
- Encrypting message content at rest
- Designing systems where even platform admins can’t read user data
- Still keeping the app usable (search, sessions, memory, etc.)
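To make the "admin-blind" idea concrete, here's a minimal toy sketch of the storage model: the decryption key is derived from a secret only the user holds, so the record the server keeps contains no key and no plaintext. The function names are ours for illustration, and the XOR keystream is NOT real cryptography — a production system would use an authenticated cipher like AES-GCM; only the shape of what the server stores is the point.

```python
# Toy sketch of "admin-blind" storage: the server keeps only a salt,
# a nonce, and ciphertext. The key is derived from a user-held
# passphrase, so platform admins cannot decrypt what they store.
# Illustrative only -- use AES-GCM (e.g. the `cryptography` package)
# in production; the SHA-256 XOR keystream here is not secure.
import hashlib
import os

def derive_key(passphrase: str, salt: bytes) -> bytes:
    # Slow KDF so stored data can't be brute-forced from the salt alone.
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # SHA-256 in counter mode -- purely for illustration.
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(passphrase: str, plaintext: bytes) -> dict:
    salt, nonce = os.urandom(16), os.urandom(16)
    key = derive_key(passphrase, salt)
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, nonce, len(plaintext))))
    # This dict is everything the server ever stores: no key, no plaintext.
    return {"salt": salt, "nonce": nonce, "ciphertext": ct}

def decrypt(passphrase: str, record: dict) -> bytes:
    key = derive_key(passphrase, record["salt"])
    ks = keystream(key, record["nonce"], len(record["ciphertext"]))
    return bytes(a ^ b for a, b in zip(record["ciphertext"], ks))
```

The design choice this illustrates: debugging and support get harder precisely because the stored record is opaque to everyone but the user.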
But it raises tradeoffs.
The real questions
- Would you trust an AI platform more if it couldn’t read your data at all?
- Or do you prefer platforms that can access data for:
  - debugging
  - better responses
  - support
- How important is “admin-blind” AI to you, realistically?
Tradeoffs we’re seeing
- More privacy = more complexity
- Harder to debug issues
- Limits certain “smart” features unless designed carefully
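One example of "designed carefully": exact-match search can survive encryption at rest via a blind index — alongside each ciphertext, the client stores an HMAC of each token keyed with a user-held secret, so the server can match queries without ever seeing the words. A minimal sketch (function names are ours; real deployments also need leakage analysis, padding, and per-field keys):

```python
# Blind-index sketch: the server stores only HMAC tokens, never words.
# The index key lives client-side, like the encryption key above.
import hashlib
import hmac

def blind_token(index_key: bytes, word: str) -> str:
    # Deterministic keyed hash of a token; unreadable without index_key.
    return hmac.new(index_key, word.lower().encode(), hashlib.sha256).hexdigest()

def index_message(index_key: bytes, message_id: str, text: str, index: dict) -> None:
    # Client computes tokens locally and uploads only the blind index.
    for word in set(text.lower().split()):
        index.setdefault(blind_token(index_key, word), set()).add(message_id)

def search(index_key: bytes, query: str, index: dict) -> set:
    # Server-side lookup matches HMACs, not plaintext words.
    return index.get(blind_token(index_key, query.lower()), set())
```

This recovers exact-match search but not fuzzy or semantic search — which is exactly the kind of "smart" feature the tradeoff list is about.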
Curious what you think
Is this:
- A must-have for the future of AI
- Nice-to-have but overkill
- Not important compared to usability
We’ve been building toward this direction and it’s sparked a lot of internal debate.
Would love to hear how others are thinking about it.