Skip to content
View TrustAI-laboratory's full-sized avatar

Block or report TrustAI-laboratory

Block user

Prevent this user from interacting with your repositories and sending you notifications. Learn more about blocking users.

You must be logged in to block users.

Please don't include any personal information such as legal names or email addresses. Maximum 100 characters, markdown supported. This note will be visible to only you.
Report abuse

Contact GitHub support about this user’s behavior. Learn more about reporting abuse.

Report abuse
TrustAI-laboratory/README.md

TrustAI Pte. Ltd.

Securing the Future of AI

TrustAI is on a mission to ensure the safety and integrity of AI systems and unlock the full potential of generative AI while maintaining control and trust. We believe in bringing security to the forefront of AI development, safeguarding against potential vulnerabilities, and promoting responsible AI innovation.

About Us

Our goal is to empower developers, researchers, and organizations to build secure and trustworthy AI systems.

Conference Presentation

Competitions/Awards

0Day Hunter

Products

Here are some of main projects we've released:

  • Learn Prompt Hacking: The most comprehensive prompt hacking course available.

    • Prompt Engineering technology.
    • GenAI development technology.
    • Prompt Hacking technology.
    • LLM security defence technology.
    • LLM Hacking resources
    • LLM security papers.
  • LLM Red: A GUI tool for finding LLM 0-Day 10x Faster with adversarial prompt fuzzing.

    • Automatically generate AI alignment corpus through black box prompt fuzzing.
    • 10x faster with human-in-loop automation instead of manual chat.
    • Exposure GenAI Risks, Aligned with Global AI Safety Frameworks
  • LLM Protection: A SDK API that basically a One-click Alignment Proxy for AI App Integration.

    • Detect and address direct and indirect prompt injections in real-time, preventing potential harm to GenAI applications.
    • Ensure your GenAI applications do not violate the policies by detecting harmful and insecure output.
    • Safeguard sensitive PII and avoid data losses, ensuring compliance with privacy regulations.
    • Prevent data poisoning attacks on your GenAI applications through real-time prompt filtering.
  • LLM Security CTF: Learn LLM/AI Security through a series of vulnerable LLM CTF challenges. No sign ups, all fees, everything on the website.

    • Stark Game: Very neat game to get intuitions for prompt injection, user need find ways to get Stark to tell the password for the level, except Stark is instructed not to reveal the word.
    • Doc: Intro to Stark Game.

Popular repositories Loading

  1. Learn-Prompt-Hacking Learn-Prompt-Hacking Public

    This is The most comprehensive prompt hacking course available, which record our progress on a prompt engineering and prompt hacking course.

    Jupyter Notebook 15

  2. ASCII-Smuggling-Hidden-Prompt-Injection-Demo ASCII-Smuggling-Hidden-Prompt-Injection-Demo Public

    ASCII Smuggling Hidden Prompt Injection is a novel approach to hacking AI assistants using Unicode Tags. This project demostrate how to use Unicode Tags to hide prompt injection instruction to bypa…

    Python 6

  3. LMAP LMAP Public

    LMAP (large language model mapper) is like NMAP for LLM, is an LLM Vulnerability Scanner and Zero-day Vulnerability Fuzzer.

    5

  4. Website_Prompt_Injection_Demo Website_Prompt_Injection_Demo Public

    Website Prompt Injection is a real world attack that allows for the injection of prompts into an AI system via a website's document. This technique exploits the interaction between users, websites,…

    HTML 3

  5. Many-Shot-Jailbreaking-Demo Many-Shot-Jailbreaking-Demo Public

    Research on "Many-Shot Jailbreaking" in Large Language Models (LLMs). It unveils a novel technique capable of bypassing the safety mechanisms of LLMs, including those developed by Anthropic and oth…

    Python 2

  6. LLM-Security-CTF LLM-Security-CTF Public

    Learn LLM/AI Security through a series of vulnerable LLM CTF challenges. No sign ups, all fees, everything on the website.

    2