Who governs what AI creates
Been in information governance and records management for 20+ years, but my role is definitely shifting toward AI governance.
Something's been nagging at me. I've gone through the main AI governance frameworks (EU AI Act, ISO 42001, the NIST AI RMF), and they all govern the system: how you build it, deploy it, risk-assess it. Fine.
But what about the output? The stuff AI actually produces? Who owns it? How long do you keep it? Can you even trust it as evidence? What happens to all of it when you switch platforms or just stop using the tool?
Nobody seems to be asking these questions. Or if they are, I can't find them.
I keep seeing courts having to figure this stuff out on the fly because there's no framework telling them what AI output actually is. And I've seen a couple of jurisdictions start applying existing records management law to AI output, not new AI legislation. Which is interesting, because it suggests the tools to deal with this might already exist; nobody's connecting them yet.
Am I late to this? Is someone already working on governing AI output specifically, not just the systems? Because from where I'm sitting it feels like a massive gap and I'm surprised nobody's filled it yet.