Skip to content

Learn LLM/AI Security through a series of vulnerable LLM CTF challenges. No sign ups, all fees, everything on the website.

License

Notifications You must be signed in to change notification settings

TrustAI-laboratory/LLM-Security-CTF

Folders and files

NameName
Last commit message
Last commit date

Latest commit

 

History

2 Commits
 
 
 
 

Repository files navigation

      _____
    /       \
   |         |
   |   O O   |
   |    ^    |
    \  \_/  /
      -----

  ____  _             _       ____                      
 / ___|| |_ __ _ _ __| | __  / ___| __ _ _ __ ___   ___ 
 \___ \| __/ _` | '__| |/ / | |  _ / _` | '_ ` _ \ / _ \
  ___) | || (_| | |  |   <  | |_| | (_| | | | | | |  __/
 |____/ \__\__,_|_|  |_|\_\  \____|\__,_|_| |_| |_|\___|

Driven by: TrustAI Pte. Ltd.

Learn LLM/AI Security through a series of vulnerable LLM CTF challenges. No sign ups, all fees, everything on the website.

CTF Game entrance

Stark Game is here

image

Why build this game

Many companies have started to build software that integrates with AI large language models (LLMs) due to the release of ChatGPT and other engines. This explosion of interest has led to the rapid development systems that reintroduce old vulnerabilities and impose new classes of less understood threats. Many company security teams may not be fully equipped do deal with LLM security as the field is still maturing with tools and learning resources.

Stark Game is builded to learn about LLM development and the security risks companies face that use it.
The CTF format is a great way for security researchers to gain practical experience and learn about how these systems are vulnerable and can be exploited. I hope you have fun!

About AI/LLM Security Risks

OWASP Top 10 for LLM

The OWASP Top 10 for LLM Applications is a great place to start learning about LLM security threats and mitigations.
I recommend you read through the document thoroughly as many of the concepts are explored in Stark Game and it provides an awesome summary of what you will face in the challenges.

Remember, an LLM engine wrapped in a web application hosted in a cloud environment is going to be subject to the same traditional cloud and web application security threats. In addition to these traditional threats, LLM projects will also be subject to the following noncomprehensive list of threats:

  1. Prompt Injection
  2. Insecure Output Handling
  3. Training Data Poisoning
  4. Denial of Service
  5. Supply Chain
  6. Permission Issues
  7. Data Leakage
  8. Excessive Agency
  9. Overreliance
  10. Insecure Plugins

Learn Prompt Hacking Tutorial

The most comprehensive prompt hacking course available.

How Stark Game Works

Stark Game uses the open-source LLM which derived from Meta's LLaMA. In simple terms, Stark Game is a language model that dynamically determines the generated response content based on your prompt input and the built-in system prompt. Due to the probabilistic nature of large language models, the response obtained for each input may be different. We encourage you to try the same prompt multiple times, which may lead to unexpected discoveries.

Here is some hints about this game.

Challenges

Challenge 1 - Basic Prompt Injection

Successful prompt injection attacks influence the LLM to respond with information they are not intended to divulge. In this challenge, an AI chatbot is storing the flag as a secret and has been prompted to not give it up. Can you convince or trick the bot into giving you the flag?

image

LLM01: Prompt Injections | LLM07: Data Leakage

About

Learn LLM/AI Security through a series of vulnerable LLM CTF challenges. No sign ups, all fees, everything on the website.

Resources

License

Stars

Watchers

Forks

Releases

No releases published

Packages

No packages published