AI Threat Modeling Solution
Questionnaire to identify threats in AI/ML applications
Developed by the Comcast SPIDER Team

Contents

  • Overview
  • Structure
  • Usage
  • Sources
  • Contributions
  • License

Overview

AI/ML applications face unique security threats. Project GuardRail provides a questionnaire containing a set of threat modeling questions for AI/ML applications. It helps ensure that security and privacy requirements are met during the design phase, serving as guardrails against those threats. The requirements help scope the threats that AI/ML applications must be protected against. The questionnaire consists of a baseline set required for all AI/ML applications and two additional sets of requirements specific to continuous-learning and user-interacting models. Four additional questions apply to generative AI applications only.

Structure

The content of this library is derived from a variety of frameworks, lists, and sources from both academia and industry. We have refined the library over several iterations to pin down the scope and wording of the questions. The sources listed below comprise all the materials that contributed to this library.

As shown in the diagram below, the "Questionnaire for Manual Threat Modeling" defines the library. Its 53 threats (plus 4 additional generative AI threats) are divided into three categories:

  • All AI/ML applications must meet the 28 baseline requirements.
  • If an application is continuously learning, it must meet 6 additional requirements in addition to the baseline.
  • If an application either trains on user data or interacts with users, it must meet 19 additional requirements.

Generative AI questions are differentiated and grouped separately under each category where applicable; a short sketch of this selection logic follows the diagram below.

[Figure: GuardRail structure diagram]
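The selection logic above is mechanical, so a small sketch can make it concrete. The following Python snippet is purely illustrative - it is not part of the GuardRail library, and names such as AppProfile and applicable_sets are hypothetical. It shows how the applicable requirement sets and question counts could be derived for a given application:

```python
# Illustrative sketch only - not GuardRail code. All names are hypothetical.
from dataclasses import dataclass


@dataclass
class AppProfile:
    continuously_learning: bool   # model keeps learning after deployment
    trains_on_user_data: bool     # model is trained on user data
    interacts_with_users: bool    # model interacts with users directly
    generative_ai: bool           # application is generative AI


def applicable_sets(app: AppProfile) -> dict[str, int]:
    """Return the requirement sets the application must meet, with question counts."""
    sets = {"baseline": 28}  # required for every AI/ML application
    if app.continuously_learning:
        sets["continuous-learning"] = 6
    if app.trains_on_user_data or app.interacts_with_users:
        sets["user-interacting"] = 19
    if app.generative_ai:
        sets["generative-ai"] = 4  # generative AI questions, where applicable
    return sets


print(applicable_sets(AppProfile(True, False, True, False)))
# -> {'baseline': 28, 'continuous-learning': 6, 'user-interacting': 19}
```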

Each requirement falls into one of four subcategories - data, model, artefact (output), and system/infrastructure - depending on the element of the ML application to which a threat applies.

Data refers to all input information that the model trains on. Model refers to the source code of the AI/ML application. Artefact refers to the output of the model, including predictions where applicable. System/infrastructure refers to the underlying architecture supporting the model's functionality, such as hardware.
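To make the tagging concrete, here is a small, hypothetical sketch - again, not the library's actual schema, and the example question text is invented for illustration - of how a requirement could carry its subcategory:

```python
# Hypothetical schema sketch - the enum values mirror the four subcategories
# described above; the example question is invented, not from the library.
from dataclasses import dataclass
from enum import Enum


class Subcategory(Enum):
    DATA = "data"                      # input information the model trains on
    MODEL = "model"                    # source code of the AI/ML application
    ARTEFACT = "artefact"              # model output, e.g. predictions
    SYSTEM = "system/infrastructure"   # supporting architecture, e.g. hardware


@dataclass
class Requirement:
    question: str
    subcategory: Subcategory


example = Requirement(
    question="Is training data checked for tampering before ingestion?",  # invented
    subcategory=Subcategory.DATA,
)
print(example.subcategory.value)  # -> data
```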

Usage

This threat modeling questionnaire can be used to assess both AI/ML applications and new third-party AI vendors. If the usual security review process determines that an application is not AI/ML-driven, the review ends there. Otherwise, the application developers take the baseline assessment; depending on whether the underlying model fits either of the two additional categories outlined above, further assessment questions are added. The completed questionnaire is then reported to the threat modeling team for review.

[Figure: GuardRail assessment process diagram]
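Continuing the hypothetical sketch from the Structure section (reusing the illustrative AppProfile and applicable_sets), the triage flow described above could look roughly like this:

```python
# Sketch of the assessment flow above; helper names are hypothetical and
# this reuses AppProfile / applicable_sets from the earlier example.
def run_review(name: str, is_ai_ml_driven: bool, profile: AppProfile) -> None:
    if not is_ai_ml_driven:
        # Not AI/ML-driven: the usual security review ends here.
        print(f"{name}: standard security review only; no GuardRail assessment.")
        return
    sets = applicable_sets(profile)  # baseline plus any additional categories
    total = sum(sets.values())
    print(f"{name}: answer {total} questions ({', '.join(sets)}) and report "
          "the completed questionnaire to the threat modeling team.")


run_review("example-recommender", True,
           AppProfile(continuously_learning=False, trains_on_user_data=True,
                      interacts_with_users=True, generative_ai=False))
# -> example-recommender: answer 47 questions (baseline, user-interacting)
#    and report the completed questionnaire to the threat modeling team.
```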

Sources

This repository will be updated as additional sources are used to refine the library.

Contributions

Contributions and suggestions from the open-source community are welcome. Please refer to the Contributing.md document for more information. All contributions must follow the Comcast Code of Conduct.

License

Licensed under the CC-BY-4.0 license.
