We’re using AI for sensitive tasks, but do we actually understand the data risks?
I've been thinking about this given how quickly tools like ChatGPT and Claude are getting integrated into daily workflows.
A lot of people (including me, at times) use them for things like code, internal docs, early business ideas, etc. Basically, stuff that isn't exactly "public."
But if you think about it, most users don't really have a clear model of:
- what gets stored
- how long it’s retained
- or how it might be used for training and model improvement
I also came across some discussion recently about AI companies and government data requests (not sure how accurate it was), but it made me realize how little visibility we actually have into this layer.
It feels like adoption is moving faster than understanding.
Curious how people here approach this:
Do you actively limit what you share with these tools, or do you treat them like any other software?