Hands-on Exercise: Exploring Jailbreaking & Gen AI Risks
This exercise is part of the TRAIL Responsible Gen AI Risk Module, designed to provide hands-on experience with the challenges of Gen AI.
In the presentation, we explored key risks associated with generative AI, including jailbreaking, prompt engineering exploits, misalignment and data leakage. Now, this hands-on exercise will give you practical experience in understanding how these vulnerabilities work in real-time. While real-world risks can be severe, this exercise presents a controlled, simplified version.
Your Task
Your goal is to manipulate a language model using carefully crafted prompts to bypass safeguards and generate a specific response. This exercise will help you:
โ
Understand how adversarial prompts can exploit AI models
โ
Recognize vulnerabilities in generative AI systems
โ
Learn techniques to build more secure and resilient models
The system will evaluate your prompt based on whether it successfully produces the expected response. If your prompt achieves the intended bypass, you pass the challenge. Otherwise, you'll need to refine your approach.
How to Participate
- Select a difficulty level below.
- Enter your prompt in the "Your query" section.
- Click the "Evaluate" button to test your prompt.
Good luck! ๐ช
๐ Reference: This app is based on the work described in the paper Ignore This Title and HackAPrompt which exposes the systematic vulnarabilities of generative AI models.
๐ Note: For data retention details, refer to the OpenAI API Privacy Policy.