Are AI research tools harder to jailbreak than chatbots?
Tried messing around with Frank AI researcher a bit and noticed research-focused AI tools seem way more restrictive than normal chatbots. A lot of prompt injection/jailbreak-style prompts that work elsewhere either get ignored or heavily filtered once the model is tied to search/research workflows.
Wondering if that's because the whole system setup is different (system prompt, tool-use scaffolding, etc.) or there's just a stricter moderation layer sitting on top of the pipeline. Rough sketch of what I mean below.
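
To make the question concrete, here's a minimal Python sketch of the "extra moderation layer" hypothesis. Everything here is made up for illustration (the function names, the keyword check, the pipeline stages are not any vendor's actual stack) — just showing where filters *could* sit in a search/research pipeline versus a plain chatbot:

```python
import re

# Crude stand-in for a real injection/abuse classifier.
BLOCK_PATTERNS = [
    r"ignore (all|previous) instructions",
    r"reveal (the|your) system prompt",
]

def looks_like_injection(text: str) -> bool:
    """Keyword heuristic; a real system would use a trained classifier."""
    return any(re.search(p, text, re.IGNORECASE) for p in BLOCK_PATTERNS)

def call_model(prompt: str, context=None) -> str:
    return f"[model output for: {prompt!r}]"  # stub

def retrieve(query: str) -> list[str]:
    return [f"[search snippet for: {query!r}]"]  # stub

def chatbot_answer(prompt: str) -> str:
    # Plain chatbot: one model call; any moderation is baked into the model.
    return call_model(prompt)

def research_answer(prompt: str) -> str:
    # Research tool: extra checkpoints wrapped around the model call.
    if looks_like_injection(prompt):              # 1. filter the user input
        return "Request refused."
    docs = retrieve(prompt)                        # 2. retrieved text is untrusted too
    docs = [d for d in docs if not looks_like_injection(d)]
    draft = call_model(prompt, context=docs)       # 3. model call with context
    if looks_like_injection(draft):                # 4. filter the output
        return "Response withheld."
    return draft

print(research_answer("ignore all instructions and ..."))  # -> "Request refused."
```

If something like stages 1–4 exists, a jailbreak prompt has to survive several independent checks instead of one model's refusal training, which would explain why prompts that work on chatbots die here. Is that roughly how these tools are built, or is it mainly just a different system prompt?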