
The Anonymization Myth

The ability to classify individuals from very little data is raising business and ethical issues. What can and should enterprises do?

In early July 2018, Wired UK declared a "privacy nightmare for users," reporting that researchers at University College London and the Alan Turing Institute had demonstrated that they could identify an individual Twitter user in a group of 10,000 with over 96 percent accuracy, based only on the metadata Twitter stores and makes publicly available about their tweets.

Regular Twitter users may be confused. Tweets, after all, are not anonymous: each is directly associated with its @name source account. Closer inspection of the scientific paper by Perez et al. shows the authors used Twitter only "as a case study to quantify the uniqueness of the association between metadata and user identity and to understand the effectiveness of potential obfuscation strategies."

As input features for three classification algorithms, the authors chose combinations of nine of the approximately 144 metadata fields (who knew there were so many?) associated with a tweet, excluding, of course, @name and any user-controlled fields. The training set consisted of tweets collected over a four-month period from 5.4 million users, each with over 200 tweets.
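
To make that setup concrete, here is a minimal sketch of the approach in Python. The field names, file name, and random forest model are illustrative assumptions, not the paper's exact choices (the authors tested three algorithms over varying feature combinations):

# Sketch: treat identification as multiclass classification, one class per user.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical extract: one row per tweet; user_id is the label, never a feature.
tweets = pd.read_csv("tweet_metadata.csv")
FEATURES = ["followers_count", "friends_count", "statuses_count",
            "favourites_count", "listed_count", "account_age_days"]

X_train, X_test, y_train, y_test = train_test_split(
    tweets[FEATURES], tweets["user_id"], test_size=0.2, random_state=42)

# Given a single tweet's metadata, predict which account produced it.
clf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
clf.fit(X_train, y_train)
print("identification accuracy:", clf.score(X_test, y_test))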

Beyond the attention-grabbing 96 percent figure, the researchers came to a more disturbing conclusion: "obfuscation strategies are ineffective: after perturbing 60 [percent] of the training data, it is possible to classify users with an accuracy greater than 95 [percent]." In effect, even with significant levels of anonymization, metadata associated with social media can be used to identify individuals, even in the complete absence of message content.
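
The obfuscation experiment is just as easy to sketch. Continuing the example above, and assuming a simple multiplicative Gaussian noise model purely for illustration (the paper evaluates several distinct perturbation strategies):

import numpy as np

# Perturb a random 60 percent of training rows with multiplicative noise.
rng = np.random.default_rng(0)
X_noisy = X_train.copy()
mask = rng.random(len(X_noisy)) < 0.6
noise = rng.normal(loc=1.0, scale=0.25, size=(mask.sum(), len(FEATURES)))
X_noisy.loc[mask, FEATURES] = X_noisy.loc[mask, FEATURES].values * noise

# Retrain on the perturbed metadata and test against unperturbed tweets.
clf.fit(X_noisy, y_train)
print("accuracy after perturbation:", clf.score(X_test, y_test))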

Broken Privacy Promises

The danger of re-identifying individuals in so-called anonymized data by consolidating business data and content from multiple sources is not new. The classic example dates to the mid-1990s -- the stone age in terms of big data and analytics -- when the Massachusetts Group Insurance Commission released "anonymized" data on every hospital visit by state employees. The noble goal was to aid medical researchers. Instead, a computer science graduate student easily recovered the medical history of the Massachusetts governor and, by implication, of any state employee. How? By combining the "anonymized" data with freely available demographic data from other sources.
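
The linkage technique itself is disarmingly simple. Here is a sketch in Python, assuming hypothetical files and the classic quasi-identifiers -- ZIP code, birth date, and sex -- shared between the two datasets:

import pandas as pd

# "Anonymized" hospital records: names removed, quasi-identifiers retained.
hospital = pd.read_csv("hospital_visits.csv")
# Public voter roll: the same quasi-identifiers, plus names and addresses.
voters = pd.read_csv("voter_roll.csv")

# Re-identification is a plain join on the shared quasi-identifiers.
linked = hospital.merge(voters, on=["zip", "birth_date", "sex"])

# A visit matching exactly one registered voter is re-identified by name.
matches = linked.groupby(["zip", "birth_date", "sex"])["name"].transform("nunique")
print(f"{(matches == 1).sum()} visit records re-identified")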

A 2009 Ars Technica article quotes a research paper on "the surprising failure of anonymization" by Professor Paul Ohm of Georgetown University Law Center, in which he claims that "for almost every person on earth, there is at least one fact about them stored in a computer database that an adversary could use to blackmail, discriminate against, harass, or steal the identity of him or her. I mean more than mere embarrassment or inconvenience; I mean legally cognizable harm."

Even the European Union's GDPR -- the most recent and comprehensive privacy law -- builds on the belief that personal data is a class of data that can be isolated from other data, effectively anonymized, and thus protected. Ohm's nearly decade-old analysis invalidates these assumptions.

Metadata Matters

As illustrated by the Twitter example, privacy protection demands careful consideration of the metadata captured and stored alongside content or business-related information. In social media and the Internet of Things (IoT), metadata provides extensive context for messages and events. Beyond obvious links to the original user, metadata offers multiple and often counterintuitive ways of identifying or re-identifying individuals.

Recording geolocation metadata over time (from fitness wearables or smartphones, for example) produces unique, repeatable data patterns that can be directly associated with individuals. As few as four spatiotemporal points are enough to uniquely identify 95 percent of individuals, according to a 2013 paper by de Montjoye et al.
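
The underlying check is easy to reproduce. A minimal sketch, assuming a hypothetical trace of (user, cell tower, hour) observations: intersect the sets of users seen at each known point and count the candidates that survive.

import pandas as pd

# Hypothetical location trace: one row per (user, cell tower, hour) sighting.
trace = pd.read_csv("location_trace.csv")

def users_matching(points, trace):
    """Return the users whose trace contains every given (cell, hour) point."""
    candidates = None
    for cell, hour in points:
        seen = set(trace[(trace.cell == cell) & (trace.hour == hour)].user)
        candidates = seen if candidates is None else candidates & seen
    return candidates

# Four known sightings of a target -- say, from geotagged posts or card receipts.
points = [("cell_17", 8), ("cell_17", 9), ("cell_42", 13), ("cell_03", 19)]
matches = users_matching(points, trace)
print("uniquely identified" if len(matches) == 1 else f"{len(matches)} candidates remain")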

Olivia Solon's recent Guardian article provides a fascinating list of examples of privacy breaches. In many cases, metadata -- rather than, or in addition to, actual content -- has been at the heart of the re-identification process. The reason is that metadata is, in effect, context-setting information. The ongoing transition to a cashless economy, digital business, and autonomous vehicles will dramatically increase the amount of context-setting information collected and stored, exposing an ever-greater proportion of the context of individuals' actions in both the physical and online worlds. As a result, their privacy is destroyed and their rights diminished.

What Can We Do?

The privacy problem is old, has been poorly addressed, and is growing because of our expanding appetite for analytics on rapidly increasing volumes of content and context-setting information.

At a January 2018 hearing on Protecting Privacy and Promoting Policy before a U.S. House of Representatives Committee, Paul Ohm explained that "the very same mechanisms that let researchers use data to learn useful information about people and programs can also ... lead to serious and harmful invasions of privacy. Often, the only way to distinguish between the two is to examine the subjective intent of the person looking at the data."

Such a formulation suggests the need for a legal and judicial solution. While accepting that need, and despite his legal background, Ohm focuses more on practice: curb the wholesale consolidation of data and limit access to any "centralized 'all-knowing database.'" He sees the solution chiefly in the ethical stance of data consolidators: balancing potential benefits, both public and corporate, against the risk to personal privacy.

Princeton computer science professor Arvind Narayanan and collaborators concur. In "A Precautionary Approach to Big Data Privacy," they examine six cases ranging from data aggregators to open government. Given the variety of issues involved, no single solution is offered. However, the overarching conclusion is that the responsibility for balancing utility and privacy in analytics lies largely with the practitioners themselves.

In short, virtually all data and metadata, especially that from social media and IoT sources, is subject to privacy risks that must be explicitly balanced against its value in analytics. Stronger legal frameworks, such as GDPR, and improvements in privacy technology can, at best, somewhat mitigate those risks. Ethics training and oversight for all business and IT personnel involved in commissioning and implementing big data analytics programs are not only necessary. They are the right thing to do.

About the Author

Dr. Barry Devlin is among the foremost authorities on business insight and one of the founders of data warehousing in 1988. With over 40 years of IT experience, including 20 years with IBM as a Distinguished Engineer, he is a widely respected analyst, consultant, lecturer, and author of "Data Warehouse -- from Architecture to Implementation" and "Business unIntelligence -- Insight and Innovation Beyond Analytics and Big Data," as well as numerous white papers. As founder and principal of 9sight Consulting, Devlin develops new architectural models and provides international, strategic thought leadership from Cornwall. His latest book, "Cloud Data Warehousing, Volume I: Architecting Data Warehouse, Lakehouse, Mesh, and Fabric," is now available.

