
GenAI

·7 mins
This post covers safety alignment bypasses such as prompt injections and jailbreaks, which make LLMs ignore their guardrails.
·2 mins
This new series covers security risks in AI systems in depth, from prompt injections to supply chain attacks and the risks of agentic AI.