GenAI
AI Security - Safety Alignment Bypasses
3 August 2025 · 7 mins
Cyber Security
AI
GenAI
This post is about safety alignment bypasses such as prompt injections and jailbreaks, which make LLMs ignore their guardrails.
AI Security - Introduction to a New Series on This Blog
27 June 2025 · 2 mins
Cyber Security
AI
GenAI
This new series covers security risks in AI systems in depth, from prompt injections to supply chain attacks and the risks of agentic AI.