3 Key Figures in the History of Privacy-Enhancing Technology
Privacy-enhancing technology may not appear in as many headlines as blockchain or cryptocurrency, but behind the scenes it is enabling scientific breakthroughs and unprecedented business insights.
The privacy technologies widely used today are the result of developments made over the past half-century. Gifted and innovative researchers have made groundbreaking contributions across this period, from Andrew Yao developing essential principles in the early 1980s at the University of California, Berkeley, to Cynthia Dwork devising differential privacy in the mid-2000s while at Microsoft Research. The work of a handful of key figures has been instrumental in advancing this area of technology, and we are enjoying the fruits of their labor today. While there are many people to highlight and thank for their contributions to the current state of privacy-enhancing technologies, below we feature three figures who have played fundamental roles.
Andrew Yao
In 1982, Andrew Yao was the sole author of a paper that would lead to a game-changing privacy technology called multi-party computation. In addition to developing the theoretical concept, Yao devised several fundamental multi-party computation techniques on which the majority of today's protocols are built.
In the seminal paper he presented at the 23rd Annual Symposium on Foundations of Computer Science, Yao used a simple riddle to introduce the problem he hoped to solve: Two secretive millionaires having lunch decide the richer person should pay the bill, but how can they do this if neither one wants to reveal what they are worth?
The solution to this riddle, Yao determined, is a two-party protocol that computes the Boolean result of the comparison private input 1 ≤ private input 2 without revealing either input. Known as the Garbled Circuits Protocol, Yao's approach encodes the truth table of each Boolean gate in a 'garbled' form, obfuscated using random strings called labels. The first party sends these garbled tables to the second party, who evaluates each gate using symmetric encryption keys to produce the Boolean result.
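To make the idea concrete, here is a minimal Python sketch of garbling a single AND gate, loosely in the spirit of Yao's construction. It is an illustration rather than a faithful implementation: the hash-derived one-time pad stands in for real symmetric encryption, and the oblivious transfer step that would deliver the evaluator's input label is omitted.

```python
import os, hashlib, random

LABEL_LEN = 16        # bytes per wire label
TAG = b"\x00" * 8     # recognizable padding so the evaluator can spot a valid row

def new_wire():
    # Two random labels standing for bit 0 and bit 1 on a wire.
    return {0: os.urandom(LABEL_LEN), 1: os.urandom(LABEL_LEN)}

def xor_pad(key_a, key_b, data):
    # Derive a one-time pad from the two input labels and XOR it with data.
    pad = hashlib.sha256(key_a + key_b).digest()[: len(data)]
    return bytes(x ^ y for x, y in zip(pad, data))

def garble_and_gate(wire_a, wire_b, wire_out):
    # Encrypt each row of the AND truth table under the matching pair of
    # input labels, then shuffle so row order reveals nothing.
    rows = []
    for a in (0, 1):
        for b in (0, 1):
            rows.append(xor_pad(wire_a[a], wire_b[b], wire_out[a & b] + TAG))
    random.shuffle(rows)
    return rows

def evaluate(rows, label_a, label_b):
    # The evaluator holds exactly one label per input wire and can decrypt
    # exactly one row, recognized by the zero tag.
    for row in rows:
        plain = xor_pad(label_a, label_b, row)
        if plain.endswith(TAG):
            return plain[:LABEL_LEN]
    raise ValueError("no row decrypted correctly")

# Garbler's side: choose labels, garble the gate, and publish a decoding
# table for the final output wire only.
wa, wb, wout = new_wire(), new_wire(), new_wire()
rows = garble_and_gate(wa, wb, wout)
decode = {wout[0]: 0, wout[1]: 1}

# Evaluator's side: given one label per input wire (here a=1, b=1), it
# learns the output bit without learning the garbler's input bit.
out_label = evaluate(rows, wa[1], wb[1])
print(decode[out_label])   # -> 1
```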
Multi-party computation extends this protocol to more than two parties, allowing a group to jointly compute a shared function on their individual private inputs without revealing those inputs to one another.
Practical applications of this protocol remained out of reach until the 2000s, when more sophisticated algorithms, faster networks, and more powerful, accessible computing made it feasible to build multi-party computation systems. Yao's work became even more relevant with the rise of big data and machine learning.
By enabling the privacy-preserving use of large datasets, multi-party computation has become a valuable tool in machine learning workflows, where multiple parties want to collaborate on a model using a combined dataset without exposing the raw inputs of individual participants.
Thanks to the pioneering work of Andrew Yao, machine learning systems can draw on a wider variety of sensitive data, enabling critical new breakthroughs and insights in fields such as precision medicine and diagnostic imaging.
Cynthia Dwork
Cynthia Dwork is a theoretical computer scientist at Harvard University specializing in cryptography, distributed computing, and privacy technologies, with more than 100 academic papers and two dozen patents to her name.
In 2006, Dwork was the lead author of a groundbreaking paper presented at the Third Theory of Cryptography Conference that established the principles of a new privacy-enhancing methodology: differential privacy. Dwork has said conversations with philosopher Helen Nissenbaum inspired her to focus on ways to maintain privacy in the digital age.
Differential privacy describes a group of mathematical methods that let researchers compute on large datasets containing personal information, including medical and financial information, while maintaining the privacy of individual contributors to the dataset. These methods support privacy by adding small amounts of statistical noise to either the raw data or the output of computations on that data.
Differential privacy methods are designed so that the added noise does not significantly dilute the value of the analysis, while ensuring the results remain essentially the same whether or not any given individual's data is included in the dataset. In this way, differential privacy prevents an analysis from revealing individuals' personal information. This groundbreaking approach addresses many of the limitations of previous privacy techniques.
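To give a sense of the mechanics, the following Python sketch shows the Laplace mechanism, a basic building block of differential privacy, applied to a simple counting query. The toy dataset and the epsilon value are illustrative assumptions, not recommendations.

```python
import numpy as np

rng = np.random.default_rng()

def private_count(records, predicate, epsilon):
    """Return a differentially private count of records matching predicate.

    A counting query changes by at most 1 when one person is added or
    removed, so its sensitivity is 1 and the noise scale is 1 / epsilon.
    """
    true_count = sum(1 for r in records if predicate(r))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Example: how many patients in a (toy) dataset are over 65?
patients = [{"age": 71}, {"age": 34}, {"age": 68}, {"age": 59}]
print(private_count(patients, lambda p: p["age"] > 65, epsilon=0.5))
```

Smaller epsilon values mean more noise and stronger privacy; larger values mean less noise and weaker privacy.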
In 2015, Dwork was the lead author of another key paper, "The reusable holdout: Preserving validity in adaptive data analysis," which outlined how differential privacy could be used to further machine learning-based scientific research.
In scientific research, machine learning typically involves a training dataset and a testing, or 'holdout', dataset on which the trained model is evaluated. Once the holdout dataset has been analyzed, it can no longer be treated as an independent, 'fresh' dataset. In the 2015 paper, Dwork and her colleagues proposed using differential privacy to preserve the independence of the holdout dataset.
According to Dwork, this application of differential privacy anticipates a future in which fresh data is hard to come by. Since machine learning requires massive amounts of data and data is ultimately a finite resource, the technique allows the same holdout dataset to be reused many times without invalidating results.
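The paper's mechanism, which the authors call Thresholdout, can be sketched in a few lines of Python: answer each analyst query from the training set when the training and holdout estimates agree, and release only a noisy holdout value when they disagree. The threshold and noise parameters below are illustrative placeholders, not the paper's tuned settings.

```python
import numpy as np

rng = np.random.default_rng()

def thresholdout(train, holdout, query, threshold=0.04, sigma=0.01):
    """Answer a query (a function mapping one record to a value in [0, 1])."""
    train_avg = np.mean([query(x) for x in train])
    holdout_avg = np.mean([query(x) for x in holdout])
    # Compare the two estimates behind a noisy threshold.
    if abs(train_avg - holdout_avg) > threshold + rng.laplace(0, sigma):
        # Disagreement: reveal a noisy holdout estimate (this spends budget).
        return holdout_avg + rng.laplace(0, sigma)
    # Agreement: return the training estimate; the holdout stays "fresh".
    return train_avg

# Example: estimate the frequency of a feature without overusing the holdout.
train = rng.normal(size=(1000, 10))
holdout = rng.normal(size=(1000, 10))
print(thresholdout(train, holdout, lambda x: float(x[0] > 0)))
```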
David Chaum
Having taught graduate-level business administration at New York University and computer science at the University of California, Berkeley, David Chaum laid the foundation for a number of business-focused privacy-enhancing computation techniques, including new forms of digital signatures, anonymous communication, and trustworthy systems for secret-ballot voting.
In a groundbreaking 1983 paper, Chaum established the principles of blind signatures, a digital signature scheme that enables untraceable payments by letting a signer, such as a bank, sign a payment token without seeing its contents, so the payment cannot later be linked back to the payer. The same paper laid out principles for digital cash, a precursor to cryptocurrency, describing how people could obtain and spend digital currency in a way that could not be traced.
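As a rough illustration of the mechanics, here is a hypothetical Python sketch of a blind signature built on textbook RSA, one common way to realize Chaum's idea. The hard-coded toy primes and lack of padding are assumptions made for readability and are nowhere near secure parameters.

```python
import hashlib
from math import gcd
from secrets import randbelow

# Toy RSA key pair held by the "bank" (the signer).
p, q = 1000003, 1000033
n, e = p * q, 65537
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)          # private signing exponent

def hash_to_int(message: bytes) -> int:
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

# 1. The payer blinds the message so the bank cannot read it.
coin = b"serial-number-42"
m = hash_to_int(coin)
while True:
    r = randbelow(n - 2) + 2
    if gcd(r, n) == 1:
        break
blinded = (m * pow(r, e, n)) % n

# 2. The bank signs the blinded value without learning m.
blind_sig = pow(blinded, d, n)

# 3. The payer removes the blinding factor to obtain a valid signature on m.
signature = (blind_sig * pow(r, -1, n)) % n

# 4. Anyone can verify the signature against the bank's public key, but the
#    bank cannot link it back to the blinded value it originally signed.
print("signature verifies:", pow(signature, e, n) == m)
```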
Initially, Chaum found these politically and socially charged concepts to be unpopular in academic circles. Facing resistance, he struck out on his own to create DigiCash, a digital payments company. DigiCash's system was called eCash and its currency was called CyberBucks. In some ways the system foreshadowed Bitcoin, but it relied on a centralized issuer rather than a decentralized network. Private-sector success helped the idea of privacy-enhanced payments catch on, and Chaum went on to present his digital cash concept at the first World Wide Web conference, held at CERN in Geneva, Switzerland, in 1994.
In 1989, Chaum and a colleague developed 'undeniable signatures', an interactive signature scheme that allows the signer to control who is able to verify the signature. In 1991, Chaum and another colleague developed 'group signatures', which allow one individual to sign anonymously on behalf of an entire group.
Over the years, Chaum has also developed a number of digital voting systems designed to preserve a secret ballot and protect the integrity of elections. One cryptographically verifiable system, called Scantegrity, was used by Takoma Park, Md., for its November 2009 municipal election, the first time such a system was used in a public election.
While Chaum was able to develop an impressive array of privacy-enhancing techniques, he’s probably best known for devising the core principles behind something that gets a lot more headlines: blockchain technology.
We’re taking the next step in privacy-enhancing technology
The TripleBlind Solution expands on the data privacy-enhancing technologies developed by the pioneers in our industry.
Our technology allows easy access to the foundational multi-party computation approach established by Yao, as well as other privacy-enhancing technologies, in a seamless package. By leveraging our solution, researchers, financial institutions, and other organizations can focus on innovative collaborations while maintaining possession of their own proprietary assets.
Our solution also meets the highest privacy standards. In the same way differential privacy protects individuals, our privacy-enhancing software allows data owners to operationalize sensitive data while protecting the privacy of individuals.
If you would like to learn more about the latest in data privacy technology and tools, please contact us today.