Exploring Refusals, Jailbreaks, and Prompt Injections in LLMs!
Apr 21, 2025 · 4 min read

Introduction

Another weekend, another mind-blowing deep dive into the world of Large Language Models (LLMs)! This time, I tackled Lesson 4 of the "Quality and Safety for LLM Applications" course.