Unlocking Research with HIPAA Compliant Data Encryption

The Health Insurance Portability and Accountability Act (HIPAA) plays an essential role in protecting patients. When you’re following HIPAA-compliant data encryption standards, however, it becomes difficult to get the most out of your data. There are strict rules around how data can be used (and who can use it), and making a data set compliant often means stripping away its most useful components. 

In most industries today, Big Data is redrawing the limits of human knowledge and capability. Unfortunately, highly regulated industries like healthcare have a harder time maximizing these benefits. While HIPAA is paramount to safeguarding patient privacy, regulations prevent researchers from exploring the full potential of their patient data.

A single hospital’s internal data might be enough to draw conclusions about common diagnoses, but meta studies have found this approach to building datasets for research can result in too small (and too biased) a sample size to provide reliable conclusions. Larger data sets are necessary, but researchers within healthcare organizations don’t always know the options available to them.

New privacy-enhancing technologies, built in the spirit of the growing legal requirements for individual privacy, are fundamentally changing the way healthcare organizations can unlock patient data, especially for collaboration.

But how might these solutions be better than current practices? To start, let’s take a quick look at some issues with the current ways healthcare organizations handle data.


The Limitations to Current HIPAA-Compliant Data Use Practices

Using Institutional Review Boards (IRBs) for decrypted data use: slow, costly, constrained

Institutional Review Boards (IRBs) offer a way for organizations to collectively use data, but this has multiple issues. 

Firstly, the level of bureaucracy in an IRB isn’t conducive to novel research. Taking representatives from each organization, deciding who’s getting what data, what they can do with the data (and why), and dealing with all the compliance and checkpoints along the way — all this red tape makes research slow, limited, and expensive.

Additionally, since setting up an IRB involves legal review (which is expensive in both dollars and time), the scope of research has to be carefully understood beforehand. If you wish to dive deeper into any novel findings you uncover, this can require an entirely new legal review and IRB. The process thus inhibits the effectiveness and potential of research by discouraging researchers from pursuing promising leads as they emerge.

Even after all this, you’re still responsible for the data you’ve allowed other organizations to access, so you still have to trust that other IRB participants won’t make human mistakes when handling data you are responsible for protecting.


Deidentified Data: A False Sense of Security

While you can always deidentify your patient data before taking part in collective research, even certified deidentification standards can’t fully free you from concern.

It might be tempting to think deidentified data is anonymized, but being “deidentified” is very different from being “unidentifiable.” Researchers have been demonstrating for years that they can reidentify data by pairing it with other data sources, which wouldn’t be possible if it were truly anonymized.
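To see why “deidentified” is not “unidentifiable,” consider a toy linkage attack. The records below are fabricated for illustration, but the mechanism is the one researchers have repeatedly demonstrated: quasi-identifiers left in a “deidentified” dataset (ZIP code, birth date, sex) can be joined against a public dataset to recover names.

```python
# A toy linkage attack. The "deidentified" medical records carry no names,
# but their quasi-identifiers can be matched against a public registry
# (e.g. a voter roll). All data here is fabricated for illustration.

deidentified_records = [
    {"zip": "64108", "birth_date": "1984-03-07", "sex": "F", "diagnosis": "type 2 diabetes"},
    {"zip": "64110", "birth_date": "1990-11-21", "sex": "M", "diagnosis": "hypertension"},
]

public_registry = [
    {"name": "Jane Doe", "zip": "64108", "birth_date": "1984-03-07", "sex": "F"},
    {"name": "John Roe", "zip": "64110", "birth_date": "1990-11-21", "sex": "M"},
]

def reidentify(records, registry):
    """Join the two datasets on the shared quasi-identifiers."""
    matches = []
    for rec in records:
        for person in registry:
            if all(rec[k] == person[k] for k in ("zip", "birth_date", "sex")):
                matches.append((person["name"], rec["diagnosis"]))
    return matches

# Every "anonymous" record links back to a named individual.
print(reidentify(deidentified_records, public_registry))
```

With just three quasi-identifiers, every record in this toy dataset is tied back to a name and a diagnosis, without ever touching the stripped-out identifiers.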

Similarly, artificial intelligence models have gotten so sophisticated that they can reidentify this kind of data with ease, so relying on deidentification alone is akin to setting your password as “password.”


Ignoring Data That Can’t Be De-identified: Large Opportunity Costs

In many cases, you can’t simply strip off identifying data without rendering it useless for research. Say you’re studying the human eye: eye veins are as unique as fingerprints, so distorting them enough to hide a patient’s identity would also destroy the data’s research value. Similarly, genetic data and electrocardiograms are so unique to each person that they could always be used to identify the individual in question.


A Better Solution: One-Way Encryption for Safe Collaboration and Data Use

Normally, using encrypted data means the user of the data needs to decrypt it first, but decryption is what introduces the risks (and incomplete solutions) mentioned above. So what if you never had to decrypt data, but you could still get full usage of it?

The TripleBlind Solution allows data users to perform the same operations on data as they normally would, without having to “see”, copy, or store any data. This involves using one-way encryption, which is like locking up the data and throwing away the key: mathematically impossible to reverse. Due to the way these operations are carried out on one-way encrypted data, our solution allows data owners full Digital Rights Management (DRM) over how their data is used on a granular, per-use level.
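TripleBlind’s actual transformation is proprietary, but the “one-way” property itself can be illustrated with a generic cryptographic hash: the mapping is deterministic (the same input always produces the same output, so records can be matched without being revealed), yet there is no known way to run it backwards. This sketch demonstrates only the concept, not TripleBlind’s method.

```python
import hashlib

# Illustrating the one-way property with SHA-256 (a generic example,
# not TripleBlind's transformation). The record below is fabricated.
record = b"mrn=12345;dob=1984-03-07;glucose=141"

digest = hashlib.sha256(record).hexdigest()

# Deterministic: the same input always maps to the same fingerprint,
# so two parties can match records without exchanging raw data ...
assert digest == hashlib.sha256(record).hexdigest()

# ... but the mapping cannot be reversed: no algorithm recovers
# `record` from `digest`. The key, in effect, was thrown away.
print(digest[:16])  # an opaque fingerprint, not patient data
```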

Any AI or analytic code can be run on this one-way encrypted data, and the output is identical to what the same code would produce on raw data, without putting privacy at risk. This is possible because of the innovations by TripleBlind on best-in-class, privacy-enhancing computation techniques.

Our aim with this technology is to provide tools for organizations to stop wasting valuable time worrying about security or compliance issues around research, freeing you to pursue more creative or ambitious investigations.

Since our solution ensures the safe handling of sensitive data, researchers can use data much more freely. This means you can start analyzing unconventional data points like credit card statements or driving patterns, rather than just MRIs and blood tests.

This adds a new wealth of data into diagnostics, enabling research that could vastly improve the quality and effectiveness of patient care, all while maintaining patient anonymity. Even though it’s sensitive data, it remains private.


Blind to Data, Blind to Processing, and Blind to the Result

TripleBlind allows your data to remain behind your firewall while it is made discoverable and computable by third parties for analysis and ML training.

These innovations build on well-understood principles, such as federated learning and multiparty computation. Our solution unlocks the intellectual property value of data while preserving privacy and ensuring compliance with HIPAA, GDPR, and data localization laws. Data owners never sacrifice control over sensitive assets.
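The multiparty computation principle can be sketched in a few lines. In this minimal, illustrative example (not TripleBlind’s implementation), three hospitals compute a joint patient count via additive secret sharing: each hospital splits its count into random shares, and only the combined total is ever reconstructed, so no party sees another’s raw number.

```python
import random

MODULUS = 2**31 - 1  # all arithmetic is done modulo a fixed prime

def share(value, n_parties, modulus=MODULUS):
    """Split a value into n additive shares; any single share is just noise."""
    shares = [random.randrange(modulus) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % modulus)
    return shares

# Three hospitals each secret-share their local patient count (fabricated).
counts = [1200, 875, 1430]
all_shares = [share(c, 3) for c in counts]

# Each party sums the one share it received from every hospital ...
partial_sums = [sum(column) % MODULUS for column in zip(*all_shares)]

# ... and only the combined total is reconstructed at the end.
total = sum(partial_sums) % MODULUS
print(total)  # 3505 — the joint statistic, with no raw count ever disclosed
```

The design choice here is that privacy comes from the protocol itself: every intermediate value any party holds is uniformly random, so nothing short of the final aggregate can leak.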

Want to see how it works? Learn more about our technology.