Been looking into MITRE ATLAS lately. It feels like ATT&CK but for AI systems.
Covers things like:
- model evasion
- data poisoning
- prompt injection
- model extraction
Curious how useful people are actually finding it in practice.
It seems like it does a good job mapping how AI systems can be attacked, but I don’t see as much around what’s actually exposed in the wild right now.
For example:
- which AI services are publicly reachable
- what frameworks are most commonly misconfigured
- where these attack paths are actually usable
Are people using ATLAS in real workflows yet? Or is it still mostly research and theory?
u/Entelijan — 13 days ago