Rodrigo Juarez · rodrigojuarez.hashnode.dev · Nov 11, 2024
Improving AI Security: Using Azure OpenAI Content Filters to Prevent Jailbreaking
In today's AI-driven world, it is essential to protect our models from "jailbreaking," where users attempt to trick the AI into behaving inappropriately. Azure OpenAI Service offers tools...
Tags: AI
Jalel TOUNSI · secondbrain.hashnode.dev · Apr 8, 2024
Understanding AI Dynamics: the Difference Between Prompt Injection and Jailbreaking
In the realm of digital creativity, two powerful concepts are reshaping the way we approach innovation: Prompt Injection and Jailbreaking. Let's dive into their essence, with examples that highlight their impact on creativity and problem-solving. Pro...
Alexander Mia · blog.tangln.com · Apr 6, 2024
We Used This Script to Collect a Dataset of 5k+ LLM Jailbreaks
In our latest project, we embarked on an exciting journey to gather a large dataset of over 5,000 instances of language model (LLM) jailbreaks. This dataset was crucial for our research into understanding the nuances of how LLMs can be manipulated or bypa...
Tags: llm
Patrick Peng · 0reg.dev · Feb 21, 2024
Injecting the customgpt.ai Demo: How to Jailbreak a Strictly Prompt-Engineered GPT-4 in the Wild
Starting point: Recently, a really cool LLM application caught my eye, called https://customgpt.ai/. CustomGPT seemed like a commercial GPT-4 chatbot that allowed user interaction with custom services, which struck me as a really innovative application. I...
Tags: GPT-4