Feb 10 · 13 min read · Four attack chains targeting system prompt theft, remote code execution, SSRF through agent tools, and weapons content bypass. Step by step, with the exact payloads bug bounty hunters use. TL;DR: Four prompt injection chains that worked on flagship mode...
Dec 13, 2025 · 4 min read · Challenge Description Category: Bash Jail Escape. Author: duty1g. The Autobashn has taken on almost legendary mystique. The reality is a little different than the legend. The myth of no command limits is countered by the fact that jails are a fact of l...
Dec 1, 2025 · 8 min read · The artificial intelligence revolution promised us helpful digital assistants that could write our emails, debug our code, and answer our burning questions about quantum mechanics at 3 AM. What we got was all that—plus an entire underground ecosystem...
Sep 24, 2025 · 6 min read · The Problem If you're an avid Kindle reader like me, you probably have a love-hate relationship with the "My Clippings.txt" file. It's a goldmine of your thoughts and key takeaways, but accessing it is a chore. The process is slow! You have to stop rea...
May 19, 2025 · 4 min read · As the semester winds down, everyone’s gearing up for finals. Our cybersecurity professor decided to try something with AI; he built a custom ChatGPT tailored to the CompTIA CySA+ (CS0-003) exam. "I'm testing out something new here and have created m...
Apr 21, 2025 · 4 min read · Exploring Refusals, Jailbreaks, and Prompt Injections in LLMs! Introduction Another weekend, another mind-blowing deep dive into the world of Large Language Models (LLMs)! This time, I tackled Lesson 4 of the "Quality and Safety for LLM Applications"...
Nov 11, 2024 · 2 min read · In today's AI-driven world, it is essential to protect our models from "jailbreaking," where users attempt to trick the AI into behaving inappropriately. Azure OpenAI Service offers tools...
Apr 8, 2024 · 2 min read · In the realm of digital creativity, two powerful concepts are reshaping the way we approach innovation: Prompt Injection and Jailbreaking. Let's dive into their essence, with examples that highlight their impact on creativity and problem-solving. Pro...
Apr 6, 2024 · 3 min read · In our latest project, we embarked on an exciting journey to gather a large dataset of over 5,000 instances of large language model (LLM) jailbreaks. This dataset was crucial for our research in understanding the nuances of how LLMs can be manipulated or bypa...