
AI — Trust, Risk & Security

We trust artificial intelligence with both personal tasks and business functions, but how far does that trust go? When it comes to making billion-dollar business decisions or life-and-death medical care choices, is it okay to completely trust a computer? Concerns around trusting AI are many, including bias, inaccuracy, design flaws, lack of transparency, and security.

Many organizations are currently wrestling with these AI trust issues, and the conversation was recently crystallized in a report from Gartner® that addressed AI trust, risk, and security management (AI TRiSM). The report highlighted that AI TRiSM typically requires organizations to implement a best-of-breed tool portfolio approach, as most AI platforms will not provide all required functionality.


What Is an AI TRiSM Strategy?

Addressing AI TRiSM requires a multi-pronged strategy capable of managing risks and threats while promoting trust in the technology. Best practices for handling AI TRiSM establish the following core capabilities:

  • Explainability. An AI TRiSM strategy must include information that explains the purpose and workings of the AI technology. The model should be described in terms of its purpose, strengths, weaknesses, likely behavior, and potential biases. This aspect of an AI TRiSM strategy should clarify how a specific AI model provides accuracy, accountability, fairness, stability, and transparency in its decision-making.
  • ModelOps. Model operationalization (ModelOps) of an AI TRiSM strategy covers the lifecycle management and overall governance of all AI models, including both analytical models and models based on machine learning.
  • Data Anomaly Detection. Data monitoring tools are used to analyze weighted data drift or degradation of key features to prevent attacks, bias, and process mistakes. This aspect of AI TRiSM is meant to surface data issues and anomalies before decisions are made based on information provided by a model. Data monitoring tools are also helpful when it comes to optimizing model performance.
  • Adversarial Attack Resistance. Used to create different types of organizational loss and harm, adversarial attacks alter the results of machine learning algorithms to gain an advantage. This is done by providing adversarial inputs or malicious data to an AI model after it has been implemented. Adversarial attack resistance methods prevent models from accepting adversarial inputs throughout their entire life cycle: from development, through testing, and into implementation. For example, an attack resistance technique might be designed to help models tolerate a certain level of noise, as that noise may carry adversarial inputs (a minimal sketch of such a noise-tolerance check follows this list).
  • Data Protection. AI technology requires massive amounts of data, and protecting that data is a main concern when it comes to implementation. As a component of AI TRiSM, data protection is particularly critical in highly regulated industries such as healthcare and finance. Regulations like the Health Insurance Portability and Accountability Act (HIPAA) in the US and the General Data Protection Regulation (GDPR) in the EU must be followed, or organizations risk being found non-compliant. Additionally, AI-specific regulations are currently a major focus for regulators, particularly when it comes to protecting privacy.
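To make the noise-tolerance idea above concrete, here is a minimal sketch of one such robustness check in Python: perturb a model's inputs with small random noise and measure how often its predictions change. The scikit-learn model, synthetic data, noise level, and stability threshold are all illustrative assumptions, not prescriptions from the Gartner report.

```python
# Minimal robustness smoke test: verify that a trained model's predictions
# stay stable when small Gaussian perturbations are added to its inputs.
# Model, data, noise scale, and threshold are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

def prediction_stability(model, X, noise_scale=0.05, n_trials=20, seed=0):
    """Fraction of predictions unchanged under repeated input perturbation."""
    rng = np.random.default_rng(seed)
    baseline = model.predict(X)
    agreement = [
        np.mean(model.predict(X + rng.normal(0.0, noise_scale, X.shape)) == baseline)
        for _ in range(n_trials)
    ]
    return float(np.mean(agreement))

stability = prediction_stability(model, X)
print(f"prediction stability under noise: {stability:.3f}")
if stability < 0.90:  # tolerance agreed with risk owners, not a fixed rule
    print("model is unusually sensitive to small perturbations; review it")
```

A check like this will not stop a determined attacker on its own, but a model whose predictions flip under tiny perturbations is a natural candidate for hardening before implementation.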


The State of AI TRiSM

Regulators and government agencies around the world have been issuing guidance and announcing potential new laws designed to enforce the fair and transparent use of AI.

The European Union has taken a multifaceted approach to this issue. In one respect, the EU has looked to shape the use of AI technology through investments in research and public-private partnerships. The European Commission has also formed a group of multidisciplinary experts, the High-Level Expert Group on Artificial Intelligence (AI HLEG), to develop ethical guidelines around bias, transparency, shared values, and explainability. The group will also recommend policies on AI-related infrastructure and funding. The EU has also proposed hefty fines for companies that do not comply with established AI guidelines.

In the United States, regulation of AI appears to be in a fairly early stage. Some government officials have called for rules to limit the use and design of AI, but the US government has not developed a comprehensive approach to the issue. In May 2021, the National Institute of Standards and Technology (NIST) released a draft publication designed to trigger discussion on trust in AI systems.

Companies operating in regulated industries need transparency regarding the AI systems they use. These companies must be able to monitor the precision, performance, and potential bias of their technology, and they must record this information in a level of detail suitable for accountability, compliance, and possibly customer service.

Therefore, regulated companies should have staff and structures in place to address the many variables related to risk and compliance — including handling the massive amounts of necessary documentation.


AI Security Risk Management

There is no one-size-fits-all platform when it comes to addressing every aspect of AI TRiSM. Companies must cobble together various internal and external policies and tools to cover the various distinct risk and threat categories, including bots, system faults, query attacks, and malicious inputs. Theft of property, asset damage, asset manipulation, model theft, and data corruption are all potential types of damage that can be caused by these threats.

Some of these threats are pre-existing, while others target the AI system after implementation. Therefore, organizations must have mitigation measures in place that cover both types of threats. Enterprise security and trustworthiness measures can be used to address pre-existing threats, while ModelOps measures can be used to mitigate post-implementation threats. Measures that address financial, brand-related, political, or other macro risks fall outside the scope of AI TRiSM. Furthermore, AI TRiSM measures do not specifically address data breaches, fraud, or theft. These measures are specifically focused on securing and protecting AI models.


Trust in AI

While there is some conceptual overlap between AI security and a more general approach to cybersecurity, use of artificial intelligence requires not only trust in a system’s security features but also trust in an AI’s output, results, and potential implications. Organizations that invest heavily in keeping their technology secure should also invest in a comprehensive approach to sustaining trust in that technology. The goal is AI systems that are explainable, have minimal bias, and remain dependable.

An essential first step to sustaining trust in an organization’s AI technology is establishing a high degree of explainability. Unfortunately, this is not an easy goal. AI systems on the market, such as those used for medical benefits or online advertising, are often black-box solutions that rarely offer visibility into underlying data, logic, and processes. Furthermore, effective deep learning and other machine learning approaches are not transparent to end users and do not provide much insight into their inner workings. While some efforts are underway to increase transparency, companies can support explainability by establishing strong sets of goals, best practices, and organizational transparency around their AI systems.
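Where full transparency is not possible, model-agnostic techniques can still produce useful explanations. The sketch below uses permutation importance, one common approach that treats the model as a black box and measures how much its score drops when each feature is shuffled. The dataset and model are stand-ins for any estimator; this is a minimal illustration, not a complete explainability program.

```python
# Model-agnostic explainability sketch: permutation importance measures how
# much a model's score degrades when each feature is shuffled. The dataset
# and model below are placeholders for any black-box estimator.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
# Report the features the model leans on most, as a first step toward
# documenting its likely behavior for stakeholders.
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[i]}: "
          f"{result.importances_mean[i]:.3f} +/- {result.importances_std[i]:.3f}")
```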

Another key step toward establishing trust in AI is the detection and mitigation of bias. Once again, this is not as easy as it might seem. Issues around bias, fairness, and inclusivity can be subjective. For business-oriented applications of AI, measures related to eliminating bias should be calibrated in accordance with the context and application objectives.

With proper calibration, bias detection and mitigation algorithms can catch issues and address them before they become deeply embedded in a model. In addition to addressing moral and regulatory concerns, bias checks can simplify risk analysis and add value to system performance reports.
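As a minimal illustration of such a calibrated check, the sketch below computes the demographic parity difference, the gap in positive-prediction rates across groups. The predictions, group labels, and tolerance are hypothetical; as noted above, the right metric and threshold depend on the context and application objectives.

```python
# Simple bias check: demographic parity difference compares positive-outcome
# rates across groups. The data and tolerance here are hypothetical.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Max gap in positive prediction rate across protected groups."""
    rates = [np.mean(y_pred[group == g]) for g in np.unique(group)]
    return max(rates) - min(rates)

# Hypothetical model outputs and group membership for eight applicants.
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

gap = demographic_parity_difference(y_pred, group)
print(f"demographic parity difference: {gap:.2f}")
if gap > 0.2:  # tolerance chosen per application context, not a fixed rule
    print("gap exceeds tolerance; investigate before deployment")
```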

To maintain trust, AI systems must also remain dependable. Model outputs can drift for conceptual or data-related reasons and therefore require constant monitoring to avoid generating adverse results. This facet of AI TRiSM is more straightforward than its more subjective counterparts, and there are tools available that are well suited to monitoring the performance of AI models.
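One widely used drift statistic is the population stability index (PSI), which compares the distribution of a model output (or input feature) between a reference window and current production data. The sketch below is a minimal version; the bin count, sample data, and the rule-of-thumb alert threshold of roughly 0.2 are conventional choices, not fixed rules.

```python
# Drift monitoring sketch: the population stability index (PSI) compares a
# reference distribution against current production data. Larger values
# indicate more drift. Bin count and threshold are conventional choices.
import numpy as np

def population_stability_index(reference, current, bins=10):
    """PSI between two samples of a score or feature."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    cur_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Guard against log(0) in sparsely populated bins.
    ref_pct = np.clip(ref_pct, 1e-6, None)
    cur_pct = np.clip(cur_pct, 1e-6, None)
    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))

rng = np.random.default_rng(0)
reference_scores = rng.normal(0.50, 0.1, 10_000)  # scores at deployment
current_scores = rng.normal(0.56, 0.1, 10_000)    # scores this week

psi = population_stability_index(reference_scores, current_scores)
print(f"PSI: {psi:.3f}")  # a common rule of thumb flags PSI above ~0.2
```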


Market Direction for AI TRiSM

In September 2022, Gartner released a comprehensive report on the current state of the market for AI TRiSM solutions and projections for the near future of the market.

“We expect the market to quickly evolve, driven in part by increasing regulations and by increased capabilities to operationalize AI models,” the report said. “New generations of combined functionality will emerge over time, and we expect [end-to-end TRiSM systems to arrive] by 2026.”

According to the report, the AI TRiSM market will progress in five distinct phases:

  • Phase 1 — Fragmentation: The AI TRiSM market today is highly fragmented. AI vendors do not provide all the requisite functionality to effectively and continuously manage AI trust, risk and security. This leaves users to select best-of-breed products from more than one provider across the primary AI TRiSM categories to fulfill these requirements.
  • Phase 2 — Feature Consolidation: AI TRiSM capabilities will be consolidated into just two buckets, down from the current five. ModelOps and Data Protection will be the primary two vendor categories required to address AI TRiSM issues. At the same time, AI vendors will expand their own packaged TRiSM functionality.
  • Phase 3 — Solution Integration: ModelOps alerts and remediations will be integrated into overarching and existing Enterprise Risk Management and Security Orchestration systems. Third-party models used by the enterprise will be incorporated into ModelOps platform management (beyond first party enterprise developed models). Alerts and remediations for adversarial attacks and malicious transactions will be integrated into existing Security Orchestration or SIEM systems.
  • Phase 4 — Market Consolidation: Most of the model- and platform-neutral ModelOps vendors in the market today will be acquired by broader AI platform vendors or Enterprise Risk Management vendors, leaving very few pureplay ModelOps vendors. These consolidated platforms will coexist with innovative solutions that extend capabilities to composite AI and generative AI. Data Protection for AI model data will continue to evolve from solutions used to protect data outside AI applications.
  • Phase 5 — Augmented-AI Managed TRiSM: New end-to-end, fully managed enterprise AI TRiSM systems that themselves use AI will emerge, so that AI systems can be self-correcting under human oversight.


Until the market reaches this final stage, enterprises must create a tapestry of tools and practices to address AI TRiSM. Fortunately, some platforms offer cross-functionality.

“Some, but not all, of today’s ModelOps platforms include explainability functions and also check for anomalous data patterns in incoming production data. However, they do not generally detect the source of the anomalies,” the report said. “Further, pre-existing data anomaly detection tools may already be used by an enterprise, for example in fraud detection operations, and should optimally be integrated into a comprehensive ModelOps operation.”

When it comes to selecting solutions, the report said, decision-makers should focus on addressing their industry-specific, use case needs: “For example, data protection may be the most important priority in the healthcare sector, while adversarial attack resistance may be top of the list for defense contractors.”

The report’s authors also said it is critical to assess the perspectives of all potential users.

“These stakeholders include data and analytics leaders, C-level executives responsible for AI along with data scientists, machine learning engineers, enterprise AI architects, legal and compliance teams, cloud operations, security, privacy and risk managers,” the report said. “In particular, explainability solutions must be able to satisfy the different requirements of these unique user personas so they can understand model risks particular to their domain.”


Simplifying AI TRiSM with TripleBlind

Organizational leaders looking to address AI TRiSM should consider the data protection afforded by TripleBlind’s innovative privacy-enhancing technology.

The massive amounts of data required to develop AI models are not always available. For systems designed to process personal information, like those in healthcare and financial services, privacy regulations can prevent significant access to siloed data.

Our innovative technology is specifically designed to facilitate AI development. When partnering with TripleBlind, model developers can use Blind Learning to access data from multiple parties in a way that maintains privacy and protects valuable algorithms. This technology combines the efficiency of split learning with the data residency advantages of federated learning, resulting in a single, highly efficient privacy-preserving solution.

Blind Learning allows data to be operationalized without ever revealing any personal or private information, helping data partners remain compliant with regulations like HIPAA and GDPR. It also lets data holders retain their information, addressing issues related to data residency. Furthermore, model owners never have to ship their full model to another organization.
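To illustrate the general split learning pattern that Blind Learning builds on (this is a generic sketch, not TripleBlind's actual implementation), the code below shows a forward pass divided at a "cut layer": the data holder runs the first layer locally and shares only intermediate activations, so raw records never leave its environment and the model owner never ships the full model. All weights are random placeholders standing in for a trained network.

```python
# Generic split-learning illustration (not TripleBlind's implementation):
# the data holder computes the first layer locally and transmits only the
# intermediate activations; raw records and the full model never cross the
# organizational boundary.
import numpy as np

rng = np.random.default_rng(0)

# --- Data holder's side: local "cut layer" ---
W_local = rng.normal(size=(16, 8))  # first-layer weights held by data owner

def local_forward(x_private):
    """Run the first layer locally; only activations cross the boundary."""
    return np.maximum(x_private @ W_local, 0.0)  # ReLU activations

# --- Model owner's side: remaining layers ---
W_remote = rng.normal(size=(8, 1))  # later-layer weights stay with owner

def remote_forward(activations):
    """Finish the forward pass on the model owner's side."""
    return 1.0 / (1.0 + np.exp(-(activations @ W_remote)))  # sigmoid output

x_private = rng.normal(size=(4, 16))  # private records, never shared
activations = local_forward(x_private)  # only this crosses the wire
prediction = remote_forward(activations)
print(prediction.round(3))
```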

Our technology also allows for the protection of all data types. For example, healthcare researchers looking to develop AI models for X-ray imagery can easily share diagnostic imaging without revealing any personal patient information. In this use case, source images are obfuscated and encrypted through privacy-enhancing computation.

With our technology, AI developers and users can tap into new sources of essential data and create valuable new data partnerships. If you would like to learn more about how our technology can be your data protection solution in pursuit of AI TRiSM, contact us today.


Gartner, Market Guide for AI Trust, Risk and Security Management, Avivah Litan, Farhan Choudhary, Jeremy D’Hoinne, 1 September 2022

GARTNER is the registered trademark of Gartner Inc., and/or its affiliates in the U.S. and/or internationally and has been used herein with permission. All rights reserved.