Is it time for a paradigm shift in artificial intelligence development?
Model-centric AI has long been the dominant approach in machine learning, with a focus on modifying model architecture to improve performance. Like designing a more powerful car engine or optimizing the rotor blades on a wind turbine, model-centric AI prioritizes structural changes to increase efficiency and improve outcomes.
However, modifications to model architecture can only go so far. Any AI solution requires two key components: clear code (the model) and clean data. Much like how you wouldn’t put the lowest-grade fuel into a high-performance Ferrari, machine learning models require high-quality data to execute operations quickly and accurately. Without it, no AI solution can truly reach its maximum potential.
After years of intense focus on improving artificial intelligence models, the AI industry is rapidly changing lanes. If a machine learning model is only as effective as the data used to train it, then it’s time for a shift to data-centric AI. However, change can be difficult, and given the relative newness of data-centric AI, there is understandably some doubt about jumping on the bandwagon.
Model-centric AI, with its focus on developing and improving models, has been the prevailing approach to AI to date. But the emerging data-centric approach to AI is predicated on the idea that performance can be improved further by putting more attention on the systems used to collect and process training data.
Having large volumes of high-quality data is essential to the creation and maintenance of AI systems. Unfortunately, data rarely comes cheap. It’s common for companies to spend six months or more of legal and business development time just to get access to a single dataset. Given that value, it’s understandable that the AI industry would start to prioritize data itself. Data-centric AI methods look to extract even more value by investing more in curating, labeling, augmenting, and managing data.
That investment is harder than it sounds, for several reasons:
- Data access. Real-world data is usually private and proprietary to the organization that collected it, so one key aspect of data-centric AI is increasing an organization’s access to data.
- Expert labeling. Labeling data typically requires input from subject matter experts, many of whom have other important priorities. For example, medical doctors might be needed to properly label X-rays for an AI system that processes medical imagery, while attorneys might be needed to label legal documents for a law-focused system.
- Drift. Both the data an AI system handles and the system’s purpose commonly change over time. Training data must therefore be updated regularly, and possibly relabeled, to reflect those changes.
- Governance. A dataset with a massive number of manually labeled records raises issues of governance and auditing. Organizations overseeing large volumes of data must have systems in place for identifying bias, fixing quality issues, conducting audits, and tracing the lineage of model flaws.
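To make the relabeling decision concrete, here is a minimal sketch, in pure Python with hypothetical histogram values, of a population stability index (PSI) check — a common heuristic for flagging when production data has drifted far enough from the training data to warrant review and possible relabeling:

```python
import math

def psi(train_counts, prod_counts):
    """Population Stability Index between two identically binned distributions.

    Values above ~0.2 are commonly treated as significant drift.
    """
    t_total = sum(train_counts)
    p_total = sum(prod_counts)
    score = 0.0
    for t, p in zip(train_counts, prod_counts):
        # Floor each fraction to avoid division by zero and log(0) on empty bins.
        t_frac = max(t / t_total, 1e-6)
        p_frac = max(p / p_total, 1e-6)
        score += (p_frac - t_frac) * math.log(p_frac / t_frac)
    return score

# Hypothetical histograms of one feature, binned the same way for both sets.
train_hist = [120, 300, 250, 180, 150]
prod_hist = [40, 180, 240, 260, 280]

drift = psi(train_hist, prod_hist)
if drift > 0.2:
    print(f"PSI={drift:.3f}: significant drift - review and relabel")
```

The 0.2 threshold is a rule of thumb, not a standard; in practice each monitored feature would get its own binning and threshold.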
The Three Main Challenges of Data-Centric AI
Companies looking to adopt data-centric AI often face three primary data challenges: data volume, consistency, and quality.
- Volume. Supplying a large volume of high-quality data to an AI model means eliminating low-quality datasets, so a data-centric approach often requires more raw data than a model-centric one. However, blindly collecting as much data as possible is inefficient and costly. Before acquiring more data, organizations leveraging data-centric AI must establish what kind of data is actually needed.
- Consistency. Without consistent data annotation, an AI model quickly becomes unreliable. Unfortunately, achieving a high level of consistency is difficult. A study from MIT found that approximately 3.4 percent of records in popular benchmark datasets were mislabeled. Furthermore, the study found that larger, more powerful models tend to be more strongly affected by inconsistent labeling. Organizations looking to leverage data-centric AI must prioritize an effective system of data annotation, which depends in part on machine learning engineers having a deep understanding of their datasets.
- Quality. The data used to train an AI system should be representative of the data that a model will process after deployment, including any rare variations. Furthermore, attributes of data records that are not causal features should be randomized during training as part of quality control measures.
Taking these steps can mitigate two of the common flaws associated with poor dataset quality:
- Incorrect correlations. Spurious associations occur when a machine learning model links non-causal data to a label. For example, if an image-processing model were trained only on images of cows standing in grasslands, it might learn to associate the grassy background with the “cow” label, misclassifying empty grassland as a cow or failing to recognize a cow on a beach.
- Insufficient variation. When a model isn’t trained on a data set with adequate variation, it can result in the model doing a poor job of making generalizations. For example, an image processing model that was trained only on images showing daylight might fail to perform well on nighttime images.
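One way to implement the randomization step described under quality control is to permute any attribute known to be non-causal across training records, breaking its correlation with the label. Below is a minimal sketch for tabular records, with a hypothetical `background` attribute standing in for the grassland example:

```python
import random

# Hypothetical training records: `background` correlates perfectly with the
# label, but only `shape` is actually causal.
records = [
    {"shape": "cow", "background": "grass", "label": "cow"},
    {"shape": "cow", "background": "grass", "label": "cow"},
    {"shape": "horse", "background": "sand", "label": "horse"},
    {"shape": "horse", "background": "sand", "label": "horse"},
]

def randomize_attribute(records, attribute, seed=0):
    """Shuffle one attribute across records so it carries no label signal."""
    rng = random.Random(seed)
    values = [r[attribute] for r in records]
    rng.shuffle(values)
    # Rebuild each record with the shuffled value; labels stay untouched.
    return [{**r, attribute: v} for r, v in zip(records, values)]

augmented = randomize_attribute(records, "background")
```

For image data the same idea shows up as background replacement or other augmentations; the principle is identical — only causal features should remain predictive of the label.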
Benefits of Data-Centric AI
While there are major data-centric AI challenges, organizations that can overcome them stand to reap several key benefits:
- More reliable and less biased results. By making the avoidance of “garbage in, garbage out” a top priority, data-centric AI is designed to yield more dependable results. Systems that put data first also prioritize the elimination of bias.
- Lower costs through greater flexibility. It is difficult for model-centric AI to realize performance gains without massive investments in new datasets and computing resources. With its bigger focus on maintaining high-quality data, data-centric AI is poised to unlock greater performance gains with much smaller investments in data and resources.
- Less administration. Model-centric AI is based on the notion of specialization, with models being explicitly designed to perform specific tasks. This can lead to an organization accumulating massive amounts of models and associated datasets. In addition to being unwieldy, this approach can lead to elevated costs compared to a more flexible, standardized approach that revolves around the data itself.
Get the Massive Amounts of High-Quality Data You Need Through TripleBlind
TripleBlind helps organizations adopt data-centric AI by breaking down the regulatory and administrative silos that keep data locked away.
With the TripleBlind Solution, organizations leverage proven privacy-enhancing technology to operationalize sensitive data while remaining compliant with various regulations. Our innovative technology also offers true scalability and faster processing, while supporting all data types.
If you would like to learn more about how our technology can enable the adoption of data-centric AI, contact us today.