AI Security Risk Management: Industry Standards
Artificial intelligence has the power to make positive changes for both business and society, and as AI has become more influential, it has also become a target for malicious actors.
The enthusiasm and growth mindset around AI must be tempered with the need to secure these systems, and there is a recognition that current security measures are inadequate. A recent report from Gartner on AI Trust, Risk and Security Management (AI TRiSM) stated, “AI poses new trust, risk and security management requirements that conventional controls do not address.”
AI Security Risk Assessment Standards
Organizations that leverage AI must ensure their entire system is kept secure, including both algorithms and the data they use. The most common threat vectors against AI systems are adversarial attacks, data poisoning, and model extraction. The risk level and potential impact of attacks can vary based on the industry and type of organization. For example, some organizations may not have to worry about model extraction attacks but should be very concerned about issues related to data privacy.
According to a Gartner report, 30 percent of all AI cyberattacks through 2022 will leverage training data poisoning, adversarial samples, or model extraction to attack machine learning-powered systems.
Data Privacy Attacks
Data privacy attacks focus on uncovering information about the data set used to train an AI model, which can potentially compromise sensitive data like bank account numbers or personal health information. By querying a model or assessing its parameters, a malicious actor could glean sensitive information from the original data set used to train the model in question.
There are two main types of data privacy attacks: membership inference and model inversion. A membership inference attack is focused on determining whether a particular record or collection of records is inside a training dataset. A model inversion attack is used to reconstruct the training data that was used to train the AI model.
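To make the membership inference idea concrete, here is a minimal sketch using scikit-learn on synthetic data. It flags records that a model scores with unusually high confidence on their true label as likely training-set members; the confidence threshold is an assumption, and real attacks typically calibrate it with shadow models.

```python
# Minimal confidence-threshold membership inference sketch (synthetic data;
# the 0.9 threshold is an assumption, often calibrated with shadow models).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_member, X_nonmember, y_member, y_nonmember = train_test_split(
    X, y, test_size=0.5, random_state=0)

# The "victim" model is trained only on the member split.
victim = LogisticRegression(max_iter=1000).fit(X_member, y_member)

def membership_score(model, records, labels):
    """Confidence the model assigns to each record's true label."""
    probs = model.predict_proba(records)
    return probs[np.arange(len(labels)), labels]

threshold = 0.9  # assumed; attackers typically tune this value
member_scores = membership_score(victim, X_member, y_member)
nonmember_scores = membership_score(victim, X_nonmember, y_nonmember)

print("flagged as members (true members):    ", np.mean(member_scores > threshold))
print("flagged as members (true non-members):", np.mean(nonmember_scores > threshold))
```

The gap between the two rates is what a membership inference attacker exploits: models tend to be more confident on data they were trained on.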
Training Data Poisoning
A data poisoning attack involves corrupting the data used to train an AI system, adversely impacting its ability to learn or function. Data poisoning is often done to make an AI system flawed or to affect a retraining process.
There are two main types of data poisoning attacks: “label flipping” and “frog boil”. Intended to degrade an AI system, label flipping involves an attacker controlling the labels assigned to a section of training data. This attack has been shown to degrade the performance of an AI system’s classifier, leading to increases in classification errors.
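As a rough illustration of label flipping, the sketch below inverts the labels on a portion of a synthetic training set and compares the resulting classifier's accuracy to a cleanly trained one; the dataset and the 30 percent flip rate are illustrative assumptions.

```python
# Minimal label-flipping sketch: flipping a fraction of training labels
# degrades classifier accuracy (synthetic data, assumed flip rate).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=1)

rng = np.random.default_rng(1)
flip_rate = 0.3  # attacker controls labels for 30% of the training data (assumed)
flip_idx = rng.choice(len(y_train), size=int(flip_rate * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[flip_idx] = 1 - y_poisoned[flip_idx]  # invert the binary labels

clean_acc = LogisticRegression(max_iter=1000).fit(X_train, y_train).score(X_test, y_test)
poisoned_acc = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned).score(X_test, y_test)
print(f"clean accuracy:    {clean_acc:.3f}")
print(f"poisoned accuracy: {poisoned_acc:.3f}")
```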
In a frog boil attack, a malicious actor interacting with an AI system disrupts it persistently but gradually, keeping each manipulation below the threshold at which the system would reject it. The result of a frog boil attack is an unwanted shift in the model’s predictions.
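A simplified frog boil simulation might look like the following sketch: in each retraining round, the attacker injects mislabeled points shifted a little further, each staying inside a crude outlier-rejection filter, so the retrained model's predictions drift over time. All thresholds, drift sizes, and the filter itself are illustrative assumptions.

```python
# Minimal "frog boil" sketch: gradual, filter-evading poisoning across
# retraining rounds (synthetic data; thresholds and drift sizes are assumed).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(0, 1, size=(1000, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)

center = X.mean(axis=0)
reject_radius = 3.0    # crude anomaly filter: drop points far from the data center
drift_per_round = 0.2  # small enough to stay under the filter each round (assumed)

data_X, data_y = X.copy(), y.copy()
for round_ in range(10):
    shift = drift_per_round * (round_ + 1)
    injected = rng.normal(0, 0.3, size=(50, 2)) + np.array([shift, 0.0])
    # Only points inside the rejection threshold make it into the retraining set.
    keep = np.linalg.norm(injected - center, axis=1) < reject_radius
    accepted = injected[keep]
    data_X = np.vstack([data_X, accepted])
    data_y = np.concatenate([data_y, np.zeros(len(accepted), dtype=int)])  # mislabeled
    model = LogisticRegression().fit(data_X, data_y)

probe = rng.normal(0, 1, size=(500, 2))
print("mean positive-class probability after drift:",
      model.predict_proba(probe)[:, 1].mean())
```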
Adversarial Inputs
When an AI system receives data, it assigns that data to categories using a component called a classifier. Adversarial inputs are designed to manipulate an AI system in small but meaningful ways by feeding specially engineered noise into the system’s classifier.
Small but well-designed adversarial inputs can have a profound impact on the output of an AI system. For example, minor manipulations of a debt-to-total credit ratio could significantly change the output of a credit scoring system, which has real-life implications for everyday consumers, such as rejections for auto loans, mortgages, and credit cards.
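For a linear classifier, the effect of a small, well-chosen perturbation can be shown in a few lines. The sketch below, in the spirit of FGSM-style attacks, nudges a single synthetic input just enough to flip the classifier's decision; the features and the perturbation budget are illustrative assumptions, not real credit data.

```python
# Minimal adversarial-input sketch for a linear classifier: a small, targeted
# change to one input flips the predicted class (synthetic data, assumed budget).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=10, random_state=3)
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Pick the point closest to the decision boundary so a tiny change suffices.
scores = clf.decision_function(X)
x = X[np.argmin(np.abs(scores))].copy()
print("original prediction:", clf.predict([x])[0])

# For a linear model, stepping against the sign of the weights (an FGSM-style
# move) is the most efficient way to push an input across the boundary.
epsilon = 0.5  # assumed perturbation budget
direction = -np.sign(clf.coef_[0]) if clf.predict([x])[0] == 1 else np.sign(clf.coef_[0])
x_adv = x + epsilon * direction

print("perturbed prediction:", clf.predict([x_adv])[0])
print("largest per-feature change:", np.abs(x_adv - x).max())
```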
Model Extraction
A model extraction attack, which violates both intellectual property and privacy, involves a malicious actor trying to steal an entire AI model by drawing conclusions from the model’s predictions, allowing them to reverse-engineer an algorithm instead of independently developing an ML model. This can be the most damaging type of security threat, because a stolen model could be leveraged to compromise proprietary information, misrepresent a company, tarnish brand image, or spread misinformation. Perhaps most alarmingly, successful model extraction attacks have been performed quickly and without high levels of technical sophistication.
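A minimal extraction sketch, assuming the attacker can only observe a victim model's predicted labels: query the victim on attacker-chosen inputs, train a surrogate on the responses, and measure how often the two agree. The victim, query budget, and surrogate choice here are illustrative assumptions.

```python
# Minimal model-extraction sketch: train a surrogate purely from the victim's
# query responses (synthetic data; models and query budget are assumptions).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=15, random_state=4)
victim = RandomForestClassifier(n_estimators=100, random_state=4).fit(X, y)

rng = np.random.default_rng(4)
queries = rng.normal(0, 1, size=(5000, 15))  # attacker-chosen query points
stolen_labels = victim.predict(queries)      # only the predictions are observed

surrogate = DecisionTreeClassifier(random_state=4).fit(queries, stolen_labels)

holdout = rng.normal(0, 1, size=(2000, 15))
agreement = np.mean(surrogate.predict(holdout) == victim.predict(holdout))
print(f"surrogate agrees with the victim on {agreement:.1%} of unseen inputs")
```

The higher the agreement, the closer the attacker has come to a functional copy of the model without ever seeing its parameters or training data.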
AI Risk Management Framework
While attacks against AI systems are becoming more prevalent, there are several ways to address potential risks using risk management frameworks customized to each particular organization. The four main elements of an industry-standard AI risk management framework are definitions, inventory, policies, and structures.
- Definitions. Organizations using AI must clearly define the scope and uses of the technology. Defining an AI system establishes a foundation for the risk management framework, and it lays out the various building blocks.
- Inventory. An AI inventory identifies and tracks the various systems an organization has deployed so that associated risks can be monitored. An inventory typically defines the various purposes of each system, its objectives, and any restrictions on its use. An inventory can also list the main data elements for each AI system, including any federated models or data owners (a minimal sketch of an inventory entry follows this list).
- Policies. Companies should decide if they need to update existing risk management policies or create an entirely new set of policies for their AI systems. These policies should be developed to encourage the appropriate usage and scaling of the technology.
- Structures. Companies looking to develop or refine structures should establish a collection of key stakeholders with different sets of expertise coming from various departments. This ‘coalition’ should consider topics like data quality, data ethics, compliance obligations, existing data partners, potential data partners, appropriate safeguards, and system oversight. For the large-scale adoption of risk management structures, the standard approach is to have a formal approval process from a central body staffed by subject matter experts like machine learning engineers and security architects.
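As referenced in the inventory item above, here is a minimal sketch of what one inventory entry might look like as a structured record. The fields mirror the elements described in the list (purpose, objectives, restrictions, key data elements, owners); the field names and example values are illustrative assumptions, not a prescribed schema.

```python
# Minimal sketch of an AI inventory entry; fields and example values are
# illustrative assumptions, not a standardized schema.
from dataclasses import dataclass, field

@dataclass
class AISystemInventoryEntry:
    name: str
    purpose: str
    objectives: list[str]
    usage_restrictions: list[str]
    key_data_elements: list[str]
    data_owners: list[str]
    federated_models: list[str] = field(default_factory=list)

credit_model = AISystemInventoryEntry(
    name="consumer-credit-scorer",
    purpose="Score consumer credit applications",
    objectives=["Reduce manual review volume", "Keep approval latency low"],
    usage_restrictions=["Not to be used for employment decisions"],
    key_data_elements=["debt-to-total-credit ratio", "payment history"],
    data_owners=["Retail lending data team"],
)
print(credit_model.name, "-", credit_model.purpose)
```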
Leveling Up Industry Standards with TripleBlind
With an industry-standard framework in place, organizations can take their AI security risk management to the next level with innovative technology from TripleBlind. Our Blind Learning tools allow organizations to protect both their valuable AI models and the sensitive data used to train them. We believe in:
- Protecting the model by distributing learning, never revealing an entire model to any data provider
- Reducing the burden on data providers by unlocking more data partners, solving for communication overhead and easing collaboration through our simple-to-use API
- Dividing and conquering model training, optimizing for computational resources among partner organizations
- Blind Decorrelation, guarding against membership inference attacks and preventing actors from predicting or uncovering training data
Ready to find out how? 451 Research recently joined us to discuss how to maximize data confidentiality and utility, as well as how privacy-enhancing technology is an essential component of any risk management framework.
We’d love to schedule a personal call to determine how the TripleBlind Solution can optimize your AI security. In the meantime, check out our Whitepaper to learn more about how data is abundant and underutilized.