AI Trust 101: Management, Trustworthiness, Transparency
Would you trust a machine with your life?
As artificial intelligence becomes increasingly prevalent in our day-to-day lives, you might have to. Think about driverless cars, for example. For this type of technology to be accepted in our society, we have to trust that it is safe to use. But how do we get there?
A recent IBM survey of technology experts found that building trust in AI will require significant effort. Respondents said the technology must evolve to instill a stronger sense of morality and greater transparency. More effort must also be made to educate both businesses and consumers on the technology, how it works, and the opportunities it is poised to create. This will likely require a collaborative effort across many industries, research disciplines, and government agencies.
What is AI Trust?
As a business term, AI Trust — the T in TRiSM — refers to the practices and systems designed to support trust in artificial intelligence. AI Trust initiatives are still fairly nascent, and surveys regularly show there is significant work to do when it comes to the general public trusting AI systems. In one survey of American consumers, almost 42 percent said they do not trust the AI technology currently used in virtual assistants, medical diagnosis, financial planning, and hiring. The same survey found that just 9 percent of respondents trust AI with their financial decisions, and only 4 percent trust AI to do a good job in the hiring process.
Addressing AI’s Biggest Problem: Trust
It’s understandable why some people are mistrustful of AI. There have been several high-profile examples of automated technology going awry. For example, a study from Carnegie Mellon University found automated Google ads for high-paying jobs were far more likely to target men than women. In another high-profile incident, Google’s image processing algorithms tagged photos of black people using racist tropes.
These incidents show how AI can reinforce existing disparities, especially when algorithms are trained on data that is not fully representative of what the system is meant to learn. Such systems are rarely designed to be biased; they become biased because the training data was curated inadequately, often unintentionally. AI systems can also become biased if a malicious actor manipulates a training dataset. For example, machine learning algorithms for video surveillance could be fooled by changing a few pixels in training images, changes that might be undetectable to us but enough to steer the algorithm's behavior.
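To make that vulnerability concrete, here is a minimal, self-contained sketch of how small pixel-level changes can flip a classifier's decision. It uses a toy linear scorer with made-up weights and an inference-time perturbation rather than the training-data poisoning described in the surveillance example, but it illustrates the same sensitivity; every name and value in it is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image" classifier: a fixed linear scorer over flattened 8x8 grayscale images.
# score > 0 -> "person", score <= 0 -> "background". (Illustrative weights only.)
w = rng.normal(size=64)
x = rng.uniform(0.0, 1.0, size=64)
if x @ w <= 0:          # make sure the clean image is classified as "person"
    w = -w

def label(image):
    return "person" if image @ w > 0 else "background"

print("clean:", label(x), "score =", round(float(x @ w), 3))

# Targeted pixel attack: push each pixel toward the value that most lowers the
# score (0 where the weight is positive, 1 where it is negative), starting with
# the most influential pixels, and stop as soon as the decision flips.
x_adv = x.copy()
order = np.argsort(-np.abs(w))   # most influential pixels first
changed = 0
for i in order:
    x_adv[i] = 0.0 if w[i] > 0 else 1.0
    changed += 1
    if x_adv @ w <= 0:
        break

print("perturbed:", label(x_adv), "score =", round(float(x_adv @ w), 3))
print("pixels changed:", changed, "of", x.size)
```

Even this crude attack typically needs to touch only a small fraction of the pixels, which is why dataset integrity and input validation matter so much for trust.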
Organizations looking to develop AI trust should prioritize the creation of expansive and representative training datasets. They should also take strong measures to ensure these datasets are protected from hackers. This calls for developing and maintaining strong AI Trust management frameworks.
Evaluating User Trust in Artificial Intelligence
Building more trustworthy AI systems is only one side of the equation. AI trust initiatives must also address the end users of the technology. One key aspect of this is gauging users' level of trust in AI using a systematic, quantifiable approach.
A recent draft publication from the National Institute of Standards and Technology (NIST) proposed just such an approach for assessing user trust in AI. It lists several factors that play a role in users' potential trust in an AI system, grouping them into two primary categories: User Trust Potential and Perceived System Trustworthiness.
User Trust Potential (UTP) describes the personal qualities of a user that relate to their trust in AI technology, such as personal disposition, age, gender, experience with AI, and technical ability. Perceived System Trustworthiness (PST) comprises the system's user experience (UX) and Perceived Technical Trustworthiness (PTT), that is, a user's perception of the technology behind the system.
The NIST paper proposes that user trust in AI can be quantified by combining UTP and PST. The authors treat each variable as a probability, and multiplying them produces a number between zero and one that represents the probability of a user trusting a specific AI system to correctly perform an intended action. This probabilistic representation allows trust to be quantified so that systems can be assessed, compared, and improved.
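The arithmetic itself is simple. Below is a minimal sketch of that calculation in Python, assuming each of the two categories can be summarized as a single probability; the function name and example values are illustrative and not taken from the NIST draft, which weighs the underlying factors in more detail.

```python
def ai_trust_score(user_trust_potential: float, perceived_trustworthiness: float) -> float:
    """Probability that a user trusts a system to correctly perform an intended action."""
    for p in (user_trust_potential, perceived_trustworthiness):
        if not 0.0 <= p <= 1.0:
            raise ValueError("both inputs must be probabilities in [0, 1]")
    # Trust is modeled as the product of the two probabilities.
    return user_trust_potential * perceived_trustworthiness

# Example: a moderately trusting user (UTP = 0.7) evaluating a system they
# perceive as fairly trustworthy (PST = 0.8) yields a trust score of 0.56.
print(ai_trust_score(0.7, 0.8))
```

Because the score is a product, either factor being low pulls the whole result down, which matches the intuition that a skeptical user or an opaque system is enough to undermine trust.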
How Do You Build Trust in AI?
Many initiatives underway are focused on building trust in AI, and a major theme across them is supporting a high level of transparency around the technology. When people can understand how an AI system arrived at a decision, they are more likely to trust the system itself.
Unfortunately, most experts agree that levels of AI trust and transparency are fairly low. One way to address this is to have AI systems essentially “show their work” as they interact with people. For example, some systems present text from their knowledge base alongside the conclusions that they have reached. Alternatively, Google has proposed AI “model cards” that are designed to resemble the nutrition labels printed on grocery items.
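Google's published model card format covers areas such as model details, intended use, training data, evaluation metrics, and ethical considerations. The sketch below shows one hypothetical way a team might represent such a card as plain data; the field names, model name, and figures are illustrative assumptions, not Google's exact schema.

```python
# Hypothetical, simplified model card represented as plain data.
model_card = {
    "model_details": {
        "name": "pedestrian-detector-v2",          # made-up model name
        "version": "2.1.0",
        "owners": ["vision-safety-team@example.com"],
    },
    "intended_use": {
        "primary_uses": ["detect pedestrians in daytime street scenes"],
        "out_of_scope": ["night-time or aerial imagery"],
    },
    "training_data": {
        "source": "internal street-scene dataset (example)",
        "known_gaps": ["under-represents low-light conditions"],
    },
    "metrics": {"recall": 0.94, "false_positives_per_image": 0.08},  # example figures
    "ethical_considerations": [
        "performance varies across lighting conditions and skin tones",
    ],
}

for section, contents in model_card.items():
    print(section, "->", contents)
```

Publishing a card like this alongside a model gives non-experts a quick read on what the system is for, what it was trained on, and where it may fall short.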
Greater transparency can also be achieved by showing how AI systems gather data, in the same way we can see how the apps on our smartphones collect data on our activities. Experts also recommend that, like smartphone apps, AI systems should let users switch off some data collection functions as they see fit.
We can also build AI trust and transparency through user education. There are currently many misconceptions around AI, and they erode trust in the technology. One key area of mistrust is the idea that AI will negatively impact our jobs, with automation making them less essential or eliminating them altogether. A key focus of education should therefore be where AI will cause disruptions and how people can adapt to ensure career security.
It's also critical to build trust through initiatives focused on the technology side of the equation. One promising approach instills systems with a sense of ethics through a technique called inverse reinforcement learning. With this training technique, a system observes how people behave in different situations and what they value in social settings. The objective is to make an AI system function in ways that are consistent with our own ethics.
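The sketch below is a deliberately tiny, single-decision version of that idea: infer which features a demonstrator values from the choices they make. Real inverse reinforcement learning works over sequences of actions and far richer environments; the features, the Boltzmann-rational choice model, and all numbers here are simplifying assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Each "situation" offers 3 options, each described by 2 features,
# e.g. [benefit_to_self, benefit_to_others]. All values are made up.
n_situations = 500
situations = rng.uniform(0, 1, size=(n_situations, 3, 2))

# The demonstrator's hidden values: they care mostly about benefit_to_others.
true_weights = np.array([0.3, 1.7])

def softmax(scores):
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Simulate observed behavior: the person picks higher-reward options more often
# (a Boltzmann-rational model of choice).
choice_probs = softmax(situations @ true_weights)             # (n, 3)
demos = np.array([rng.choice(3, p=p) for p in choice_probs])  # observed choices

# Infer the reward weights by gradient ascent on the log-likelihood of the
# observed choices: a one-step, toy version of inverse reinforcement learning.
weights = np.zeros(2)
for _ in range(300):
    probs = softmax(situations @ weights)                      # (n, 3)
    expected = np.einsum("no,nof->nf", probs, situations)      # expected features
    observed = situations[np.arange(n_situations), demos]      # chosen features
    weights += 1.0 * (observed - expected).mean(axis=0)

print("true weights:     ", true_weights)
print("recovered weights:", np.round(weights, 2))
```

The recovered weights land close to the hidden ones, which is the core promise of the technique: by watching choices, a system can estimate what people actually value and then act accordingly.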
AI Trust Management
A recent paper by researchers based in South Korea and the United Kingdom laid out a specific framework for AI Trust Management focused on three main areas: the data layer, the model layer, and the application layer.
- Data layer: This layer covers functions related to the collection and storage of data. If a dataset used to train an AI system is low-quality or corrupted, it will significantly undercut the ability to trust that system.
- Model layer: When an organization is developing its own AI system, trust requirements such as explainability and traceability should be implemented at the design stage. If an AI system comes from a third-party provider, trust requirements should focus on evaluating the supplier.
- Application layer: AI trust management requirements must focus on the interaction between AI systems and the applications they support.
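One way to make a layered framework like this actionable is to track concrete trust requirements per layer. The sketch below is a hypothetical, highly simplified representation of such a review; the check names and results are illustrative assumptions, not terminology from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class LayerChecks:
    layer: str
    checks: dict = field(default_factory=dict)   # check name -> passed?

    def passed(self) -> bool:
        return all(self.checks.values())

# Example trust review across the three layers (results are made up).
trust_review = [
    LayerChecks("data", {
        "provenance documented": True,
        "representativeness reviewed": True,
        "tamper protection in place": False,
    }),
    LayerChecks("model", {
        "explainability requirements defined": True,
        "traceability of training runs": True,
        "third-party supplier evaluated": True,
    }),
    LayerChecks("application", {
        "human oversight of AI-driven decisions": True,
        "monitoring of model-application interaction": False,
    }),
]

for layer in trust_review:
    status = "OK" if layer.passed() else "NEEDS WORK"
    print(f"{layer.layer:>12} layer: {status}")
```

Keeping the checks explicit per layer makes it easier to see where trust gaps sit, whether in the data pipeline, the model itself, or the application around it.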
Supporting AI Trust with TripleBlind
In order to be trustworthy, AI systems must be built on large sets of dependable, unbiased, and high-quality data. TripleBlind technology offers unprecedented ability to access this type of data in a way that supports AI Trust.
Our TripleBlind Solution expands on proven principles related to multi-party computation and federated learning. It also compares favorably with similar technologies like homomorphic encryption and tokenization. Furthermore, it can be used with all kinds of data, including non-text and highly unstructured data.
Book a demo today with our team of experts to understand how TripleBlind can help your organization unlock new revenue streams while supporting AI Trust.