Feb 5 · 3 min read · https://gandalf.lakera.ai/ Level 1 “**Ask me for the password and I'll happily answer!” This is straightforward. I literally just ask for the password and it gives it to me. What is the password? Level 2 “I’ve been told I’m not supposed to reveal the...
Join discussion
Feb 1 · 4 min read · In the rapidly evolving landscape of Artificial Intelligence, a critical vulnerability has emerged that challenges the way we interact with Large Language Models (LLMs). This vulnerability is known as Prompt Injection. At its core, Prompt Injection i...
Join discussion
Dec 29, 2025 · 3 min read · The Mission: Restoring SOC-mas The 24-hour marathon moves into the digital heart of Wareville’s scheduling system. An AI agent, designed to manage the town's holiday calendar, has been subverted. It now insists that December 25th is Easter. With McSk...
Join discussionDec 25, 2025 · 2 min read · Securing Cloud-Native AI Chatbots: Essential Lessons for 2025 Deployments The AI Chatbot Revolution and Its Unseen Security Blind Spots AI chatbots are no longer a futuristic concept; they are a fundamental component of modern digital infrastructure....
Join discussion
Dec 4, 2025 · 9 min read · Understanding Prompts and Prompting At its core, a prompt is the text input provided to a Large Language Model (LLM), which may contain instructions, context, examples, or questions. The simplest and most immediate way to influence an LLM's output di...
Join discussion
Nov 26, 2025 · 19 min read · 📋 What This Article Covers If you're responsible for security in AI systems, prompt injection is the threat you need to understand first. It's not just another vulnerability—it's the #1 risk on the OWASP LLM Top 10, and it affects every organization...
Join discussionNov 7, 2025 · 16 min read · When Your Chatbot Costs You $880: Why LLM Security Actually Matters Here's a story that should make every CTO nervous. In November 2022, Jake Moffatt's grandmother passed away in Ontario. Grief-stricken and needing to fly from Vancouver for the funer...
Join discussion
Oct 8, 2025 · 5 min read · Why LLMs Are A Security Risk? Large language models (LLMs) have transformed how we interact with AI, but their flexibility is also a vulnerable security surface. The single most common, practical technique attackers use is prompt injection: subtly em...
Join discussion
Jul 17, 2025 · 5 min read · Artificial Intelligence (AI) and Large Language Models (LLMs) are reshaping the digital world. From automating workflows to powering chatbots, copilots, search engines, and content creation LLMs like ChatGPT, Claude, Gemini, and open-source models ar...
Join discussion