Not from a policy standpoint, but operationally.
In most orgs I’m seeing, AI adoption isn’t the issue. It’s that usage is spreading across teams, tools, and vendors faster than anyone can track it. Some of it is sanctioned, some of it isn’t, and once it’s in production it’s hard to answer basic questions with confidence:
What’s actually running?
Who has access to which models?
What controls are being enforced at runtime?
What changes have been made over time?
A lot of companies still try to handle this through policies or approval processes, but those don’t seem to hold up once systems are live and distributed.
Feels like we’re missing an operational layer here. Something closer to how we think about network control or identity, but applied to AI systems.
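To make "operational layer" concrete, here's a rough sketch of what I mean. It puts every model call behind a single gateway that checks a policy and writes an audit log, so the questions above have one answer instead of one per team. All names here are hypothetical, just to illustrate the shape of the idea, not any particular product or API:

```python
from dataclasses import dataclass

# Hypothetical policy record: which teams may call which models,
# and what runtime limits apply. Names are illustrative only.
@dataclass
class ModelPolicy:
    model: str
    allowed_teams: set
    max_tokens: int

class ModelGateway:
    """A single choke point for model access: every call is checked
    against policy and recorded, so "what's running and who can reach
    it" is answerable from one place."""

    def __init__(self):
        self.policies = {}
        self.audit_log = []  # append-only record of every decision

    def register(self, policy: ModelPolicy):
        self.policies[policy.model] = policy

    def call(self, team: str, model: str, max_tokens: int) -> bool:
        policy = self.policies.get(model)
        allowed = (
            policy is not None
            and team in policy.allowed_teams
            and max_tokens <= policy.max_tokens
        )
        self.audit_log.append((team, model, max_tokens, allowed))
        if not allowed:
            return False
        # ...forward the request to the actual model endpoint here...
        return True

gw = ModelGateway()
gw.register(ModelPolicy("internal-llm", {"search", "support"}, 4096))

gw.call("support", "internal-llm", 1024)   # allowed
gw.call("growth", "internal-llm", 1024)    # denied: team not on the policy
gw.call("support", "unapproved-model", 64) # denied: model never registered
```

The point isn't this specific code, it's the pattern: access decisions and the audit trail live in the infrastructure, not in a policy doc, which is the same move identity and network control made years ago.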
For those of you further along, how are you handling this in practice? Are you centralizing model access, enforcing controls at runtime, or leaving it to individual teams?
Just trying to understand what’s actually working.