
Hackers Use Hidden Website Instructions in New Attacks on AI Assistants
Threat actors are now using a technique known as Indirect Prompt Injection (IPI) to manipulate large language models (LLMs) by embedding hidden instructions within seemingly ordinary websites, according to a new report from Forcepoint X-Labs. Once considered a purely theoretical risk, the research shows that IPI is now actively being exploited in the wild to target live web infrastructure.