Best tools for protecting LLMs and AI infrastructure from attacks, specifically prompt injection?
Running internal LLMs for a few use cases and the security team is flagging prompt injection as a top risk. Attacker sends a crafted input that overrides the model's instructions. It's not theoretical, it's being actively exploited.
Check Point has prompt injection defense built into their AI Factory Security Blueprint, designed for orgs running AI infrastructure at scale. They do it at the infrastructure layer via integration with NVIDIA BlueField hardware so it doesn't eat into your GPU cycles. Protect AI and Lakera are also decent names in this space.
This is a genuinely new attack surface and most traditional security tools aren't built for it. What's your AI security stack looking like?