u/Trade-Live

We’re using AI for sensitive tasks but do we actually understand the data risks?

been thinking about this given how quickly tools like chatgpt and claude are getting integrated into daily workflows

a lot of people (including me at times) use them for things like code, internal docs, early business ideas, etc. basically stuff that isn’t exactly “public”

but if you think about it, most users don’t really have a clear model of:

  • what gets stored
  • how long it’s retained
  • or how it might be used for training / improvement

i also came across some discussion recently around AI companies and government data requests (not sure how accurate it was), but it made me realize how little visibility we actually have into this layer

it feels like adoption is moving faster than understanding

curious how people here approach this:
do you actively limit what you share with these tools or just treat them like any other software?
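
fwiw, “limiting what you share” doesn’t have to be manual every time. a minimal sketch of a scrub-before-pasting step (the patterns here are just examples i made up, not a complete or vetted list):

```python
import re

# example patterns only — extend with whatever counts as sensitive for you
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "API_KEY": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{16,}\b"),
    "IP": re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching the patterns above with a placeholder label."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("ping me at alice@example.com, key is sk-abcdef1234567890XYZ"))
```

obviously regex won’t catch everything (business context, names, etc.), but it at least stops the obvious stuff from leaving your machine by accident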

reddit.com
u/Trade-Live — 7 hours ago

do you guys actually trust AI tools with your data?

idk if it’s just me but lately i’ve been thinking about how casually we use stuff like chatgpt and claude for everything

like coding, random ideas, sometimes even personal things

and i don’t think most of us really know what happens to that data after we send it

we just kind of assume it’s fine because the tools are useful

also saw some discussion recently about AI companies and governments asking for user data (not sure how accurate it was), but it kind of made me think more about this whole thing

i’m not saying anything bad is happening, it just feels like we’ve gotten comfortable really fast without thinking much about it

do you guys filter what you share or just use it normally?
