Input Manipulation & Prompt Injection (TryHackMe)
Input manipulation is one of the most fundamental security challenges affecting modern Large Language Models (LLMs). Because LLMs follow natural-language instructions, attackers can craft prompts that alter the model’s behaviour, bypass restrictions,...
sharonjebitok.com17 min read