PROTECT DATA AND MODELS
Our AI Tools remove common barriers to using high-quality data for artificial intelligence and deep learning, allowing AI professionals to solve their most pressing data access, prep, and bias challenges. These tools make it possible to train new models on remote data and run inference on existing models, while protecting the privacy and fidelity of data and intellectual property.
WHAT IS BLIND LEARNING?
Efficient and accurate privacy-preserving distributed machine learning
Blind Learning is TripleBlind’s patented solution for distributed, privacy-first, regulatory-compliant machine learning at scale. It combines the computational efficiency of split learning with the data residency benefits of federated learning in a single privacy-preserving approach.
FEDERATED LEARNING
In federated learning, copies of the model are sent to each data provider, trained locally on the data, and returned to the model provider, where they are aggregated into a final global model. The process relies on each data provider training the entire model, which requires expensive computational resources, high network bandwidth, and often non-disclosure agreements. Averaging whole models can also be less effective than the aggregation method used in split learning.
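For intuition, the aggregation step can be sketched as a weighted average of the weights each provider returns (a minimal FedAvg-style sketch, not TripleBlind’s implementation; the model layout and weighting are illustrative):

```python
import numpy as np

def federated_average(provider_weights, provider_sizes):
    """FedAvg-style aggregation: average each layer's parameters,
    weighting every data provider by its local dataset size."""
    total = sum(provider_sizes)
    return [
        sum(w[layer] * (n / total)
            for w, n in zip(provider_weights, provider_sizes))
        for layer in range(len(provider_weights[0]))
    ]

# Two providers return trained copies of a tiny two-layer model.
weights_a = [np.ones((2, 2)), np.zeros(2)]
weights_b = [np.zeros((2, 2)), np.ones(2)]
global_model = federated_average([weights_a, weights_b], [300, 100])
```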
SPLIT LEARNING
Split learning sends part of the model to each data provider, distributing the task of training while preserving the intellectual property of the remaining parts of the model. The final model is only ever realized by combining all the parts back at the model provider. The original split learning paradigm trains sequentially, one data provider at a time, which makes it impractical for real-world solutions. Moreover, the existing model aggregation methods of split learning produce suboptimal models.
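For intuition, splitting a model amounts to choosing a cut layer: everything before the cut goes to the data provider, everything after stays with the model provider. A minimal PyTorch sketch (the architecture and cut point are illustrative, not TripleBlind’s implementation):

```python
import torch
import torch.nn as nn

full_model = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),  # front: sent to the data provider
    nn.Linear(64, 10),             # back: never leaves the model provider
)
front, back = full_model[:2], full_model[2:]

x = torch.randn(8, 32)   # raw data stays at the data provider
smashed = front(x)       # only these activations cross the network
logits = back(smashed)   # forward pass completed at the model provider
```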
Never Move Data
ELIMINATE THE NEED TO COPY OR TRANSMIT RAW DATA
Our Blind Learning technique leaves each dataset in place during model training; raw data is never transmitted or aggregated in a central location.
Protect the Model
RETAIN THE INTELLECTUAL PROPERTY OF PROPRIETARY MODELS
Due to the inherent distribution scheme of Blind Learning, the model provider never reveals their entire model to any data provider. Only part of the model ever leaves their firewall to be trained by the data providers.
Reduce Burden on Data Providers
UNLOCK MORE DATA PARTNERS
The three main limitations of distributed learning are communication overhead, decreased model performance, and increased compute requirements.
With Blind Learning, model training requires only a fraction of the communication overhead of federated or split learning, and models trained with Blind Learning achieve accuracy comparable to models trained centrally on aggregated data. Data providers also do not need to be experts in data engineering or AI model training: they simply approve the model provider’s use of the data, and simple APIs automate the rest. And since only the front portion of the model is trained at each data provider’s location, data providers do not shoulder the computational burden of training a full model.
Divide and Conquer
OPTIMIZE FOR COMPUTATIONAL RESOURCES AMONG PARTNERS
Blind Learning parallelizes model training among distributed data providers, making it faster and more efficient than sequential split learning, which trains with one data provider at a time and incurs elapsed training times that are impractical for real-world applications. The speed and efficiency benefits of Blind Learning are even more pronounced with larger datasets and multiple data providers.
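The contrast can be sketched as follows (a simplified illustration, not TripleBlind’s actual scheduler; the provider objects and their next_batch() method are hypothetical). Sequential training visits one provider at a time, so elapsed time per round is the sum over providers; a parallel round is bounded by the slowest provider instead.

```python
from concurrent.futures import ThreadPoolExecutor

def local_forward(provider, front):
    # Run the front part of the model on one provider's local batch.
    # next_batch() is a hypothetical accessor for illustration only.
    return front(provider.next_batch())

def sequential_round(providers, front):
    # One provider at a time: elapsed time ~ sum of per-provider times.
    return [local_forward(p, front) for p in providers]

def parallel_round(providers, front):
    # All providers compute concurrently: elapsed time ~ slowest provider.
    with ThreadPoolExecutor(max_workers=len(providers)) as pool:
        return list(pool.map(lambda p: local_forward(p, front), providers))
```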
Blind Decorrelation
GUARD AGAINST MEMBERSHIP INFERENCE ATTACKS
For even further data protection, Blind Learning can employ a specialized loss function called Blind Decorrelation, which weakens the statistical relationship between a model’s input data and its parameters. The decorrelation occurs on the data provider side and protects against membership inference attacks, which attempt to determine whether particular records were used to train a model.
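One way such a loss can be realized (a sketch of the general idea, not necessarily TripleBlind’s exact formulation) is to penalize correlation between the raw inputs and the activations that leave the data provider, so those activations carry less recoverable information about individual records:

```python
import torch

def decorrelation_penalty(x, activations):
    # Penalize linear correlation between raw inputs and outbound
    # activations; production systems often use stronger measures
    # such as distance correlation.
    xf = x.flatten(1) - x.flatten(1).mean(0)
    af = activations.flatten(1) - activations.flatten(1).mean(0)
    cross = xf.T @ af / x.shape[0]                    # cross-covariance
    norm = xf.std(0).unsqueeze(1) * af.std(0) + 1e-8  # outer product of stds
    return (cross / norm).pow(2).mean()

# Combined objective at the data provider (lam trades accuracy for privacy):
# loss = task_loss + lam * decorrelation_penalty(x, smashed)
```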
WHAT IS BLIND INFERENCE?
Use new data on existing models, while preserving the privacy of both
Blind Inference is the set of TripleBlind capabilities that allow customers to run inference on trained models with new data in a privacy-preserving way. TripleBlind supports privacy-first, regulation-mindful inference on neural networks, random forest models, XGBoost models, and statistical models, with support for new model types added regularly.
Blind Inference is delivered via a simple set of APIs that make implementing it easy.
Secure Multi-party Computation (SMPC)
TripleBlind has developed a patented advancement of Secure Multi-party Computation (SMPC) that is faster and more practical to use than other SMPC implementations. Model inference using our SMPC offers the strongest level of protection, both for the data and for the model. No recoverable version of the data or the model is ever exchanged between the parties. Instead, a one-way transformation is applied to partial shares of the model and the data, which allows computations to be performed in an irreversible SMPC-transformed space. No encryption key exists that can be compromised, and SMPC is mathematically proven to be quantum safe, meaning that a bad actor with unlimited computational resources would be unable to compromise the system.
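The core idea behind splitting values into shares can be illustrated with simple additive secret sharing (a toy sketch, not TripleBlind’s protocol; real SMPC systems operate over carefully chosen rings and, for multiplications, add machinery such as Beaver triples that this example omits):

```python
import secrets

P = 2**61 - 1  # toy modulus for illustration

def share(value):
    # Split a secret into two random-looking additive shares mod P;
    # neither share alone reveals anything about the value.
    r = secrets.randbelow(P)
    return r, (value - r) % P

a1, a2 = share(42)  # e.g., a data value, one share held by each party
b1, b2 = share(7)   # e.g., a model weight, one share held by each party

# Each party adds only its local shares; the true result appears only
# when the output shares are recombined.
s1, s2 = (a1 + b1) % P, (a2 + b2) % P
assert (s1 + s2) % P == (42 + 7) % P
```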
Book A Demo
TripleBlind keeps both data and algorithms in use private and fully computable. To learn more about Blind Learning, or to see it in action, please book a demo!