THREE QUESTIONS SERIES

Greg Storm,
COO and Co-Founder of TripleBlind

Interested in learning more about the minds behind the privacy-enhancing computation that solves the biggest data challenges faced by AI and analytics professionals?

We’re introducing our “Three Question Interview” series, where we share thoughts from our company’s leaders on the future of data privacy. This week, we sat down for a chat with Greg Storm, COO and Co-Founder of TripleBlind.

Question 1:

What is your most fearless forecast for the state of privacy 10 years from now?

Ten years out is a long time. With respect to data privacy, there are two things that could logically happen. The first is a universal equivalent to the Basel III framework, which applies to the finance industry. Twenty years ago, each country had its own standards for financial disclosure. Multinational shysters were strategic about taking advantage of those different standards to game systems. As a result, the international community came together in Basel, Switzerland, to create a common financial disclosure framework. This framework synthesizes international standards for how disclosures work, what gets disclosed, and more. That way, international organizations can communicate with each other clearly.

At the moment we have the GDPR and the CCPA –– Europe’s doing one thing, California’s doing another. Some regulations have data residency rules, some don’t. It’s a hodge-podge. I hope and I believe that at some point, we’ll see that it’s necessary to have global standards for data privacy.

That’s one idea related to the regulatory side of privacy. In parallel, we may see some developments around the idea of giving individuals control over their own data. Looking back at not-too-distant history: in the early ’80s, there was a guy at MIT named Nicholas Negroponte. He started what is now the MIT Media Lab. He made a prediction that is now referred to as the “Negroponte Switch.” At the time, television broadcasts took place over the air. You put an antenna on your roof, and that’s how you got a TV signal. At the same time, house phones came in on a wire.

What Negroponte noticed was that these two technologies had evolved that way through an accident of history –– an arrangement that didn’t make sense from an engineering perspective. It was more logical for smaller-bandwidth telephone signals to come over the air, while larger-bandwidth television signals came through a line. His forward-looking prediction became more relevant as these forms of media began to converge, requiring an even larger amount of data to be transmitted. Nowadays, everyone’s phone is in their pocket, with the signal coming over the air. Meanwhile, internet services like Google Fiber come through a wire, with much higher capacity.

I believe a similar switch will happen with data privacy. Right now, app and website users give up the privacy of their data in exchange for free use of a site or service. Companies receive vast swaths of consumer data and monetize it to the tune of billions of dollars. I’m hoping that the equation switches and we develop technology that allows everybody to have control over the monetization and privacy of their own data.

Individuals could pay for the services that they receive, but also earn revenue on data created around their activities. Some companies are shifting business models by giving you the ability to keep your data and monetize it as you see fit, as opposed to giving it to a corporation and allowing them to monetize it. I think as people learn more about the value of what they’re exchanging, there will be momentum to flip this equation.

For the individual consumer, there’s an economic incentive to do this. Imagine granting an educated consumer the ability to set the slider bar of what they want to disclose and where they want to participate in the data economy –– and allowing them to derive monetary value from these decisions. There are psychological incentives to participate as well. Think of granting an afflicted community the ability to participate in a certain set of clinical trials. There are people who deeply value contributing to research and development in some shape or form, especially on their own terms. These are only a few of the tremendous economic and social benefits of giving individuals granular autonomy over their own data.

Question 2:

What data concerns do you see on the horizon?

I think I share the same concern as everyone, and that’s the amount of personal and private data being created. It’s exploding, and it’s only getting faster every day. It’s like the expansion of the universe after the Big Bang –– we’re creating, using, and sharing data at a faster and faster rate. All of the IoT devices that we carry observe us and collect our data, much of which can be used to invade our privacy.

The expansion of virtual reality and other developing technologies inherently creates new avenues for the collection of personal data. This kind of data “explosion” could make the problem worse for two reasons:

  1. There’s much more data to track you, both physically and digitally.
  2. Since there’s an enormous pile of it, it’s harder and harder for you to control.

Simply by walking out the door and getting on a train that runs through a downtown area, you’re in front of CCTV cameras that constantly record where you go. The trains’ cameras are built to catch everyone who gets on and off for safety reasons –– but consider the scale of data that’s collected about you each day, and consider controlling that amount of data when it’s in another entity’s hands. They’re collecting data about you that can be used to infer where you are, and you might never know.

The explosion of these data-collecting devices and tools could result in a more invasive landscape. In the ’50s, before computers were even mainstream, marketing companies made tons of money by sending fliers in the mail and tracking what people responded to. Now, companies do this digitally –– and a lot faster. If you put on your VR headset to go to an online meeting, I could track your eye movements and see which advertisements you respond to based on how quickly you look at them, as well as how much your pupils dilate when you see an ad. That’s an incredible amount of privacy that’s been given up, or at least made available, without an individual’s knowledge. Those are the types of things that scare me.

Question 3:

New data privacy regulations are emerging around the world at a rapid pace. How does TripleBlind plan to keep up?

We’ve got a great answer for this one. The way TripleBlind works is that we protect privacy by never allowing data that has private information in it to be seen. That’s one of the beautiful things about TripleBlind – the actual computation happens on data that is not restricted, because no personally identifiable information remains in it. There is no privacy concern with data computed on via the TripleBlind Solution. We give a data owner the ability to:

  1. Ensure that their data is never exposed to an unauthorized party, and
  2. Retain complete rights to approve or deny any computation on their data set.

To boil it down, it’d be as if all of your data lived, encrypted, on your cell phone. If a streaming service wanted to send you an advertisement, they would ask, “Can we run our recommender system on your data so that we can recommend the five shows you’re most likely to enjoy?” and you would say “Yes” or “No.” What TripleBlind allows is for you to grant permission for them to run an algorithm on your data. Then, they’d make recommendations for you –– without ever actually seeing your raw personal information. That’s close to bulletproof for privacy regulation. The data with private information in it stayed local, stayed encrypted, and never moved. You can make any kind of regulation around that data, and we help enforce it because that data never moves.
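
As a concrete illustration of that flow, here is a minimal Python sketch. It is not TripleBlind’s product or API: the names (DataOwner, ComputeRequest, recommend_shows) are hypothetical, and it models only the approve-or-deny step and the fact that only derived results leave the owner’s hands –– not the encryption or privacy-enhancing computation the real system relies on.

# Hypothetical sketch of the consent-gated flow described above.
# Names and structure are illustrative, not TripleBlind's actual API.
from dataclasses import dataclass, field
from typing import Any, Callable


@dataclass
class ComputeRequest:
    """A third party asks permission to run a named algorithm on the owner's data."""
    requester: str
    purpose: str
    algorithm: Callable[[dict], Any]  # runs where the data lives; only its result leaves


@dataclass
class DataOwner:
    """Holds the raw data locally; nothing is computed without explicit approval."""
    raw_data: dict
    audit_log: list = field(default_factory=list)

    def review(self, request: ComputeRequest, approve: bool) -> Any:
        # Every request is recorded so the owner can audit who asked for what.
        self.audit_log.append((request.requester, request.purpose, approve))
        if not approve:
            return None  # request denied; the data stays put and nothing runs
        # Approved: run the algorithm locally and return only the derived result.
        return request.algorithm(self.raw_data)


def recommend_shows(viewing_history: dict) -> list:
    """Toy recommender standing in for the streaming service's model."""
    counts = viewing_history.get("watch_counts", {})
    return sorted(counts, key=counts.get, reverse=True)[:5]


if __name__ == "__main__":
    owner = DataOwner(raw_data={"watch_counts": {"A": 9, "B": 4, "C": 7, "D": 1, "E": 6, "F": 3}})
    request = ComputeRequest(
        requester="streaming-service",
        purpose="recommend the five shows you're most likely to enjoy",
        algorithm=recommend_shows,
    )
    # The owner says "Yes": only the recommendation list leaves; the raw history never does.
    print(owner.review(request, approve=True))

The point of the sketch is the shape of the interaction: the requester never receives the raw viewing history, only the output of an algorithm the owner explicitly approved, and every request is logged for the owner’s records.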
