Oh, the gleeful headlines. In the news recently:
We are hearing the triumphant cries of “Aha! See? We told you it was a bad idea!”
But what “flaw” did these researchers actually uncover?
The Right to be Forgotten (RTBF), as set out by the court, recognized that search engines are “data controllers” for the purposes of data protection rules, and that under certain conditions (i.e., where specific information is inaccurate, inadequate, irrelevant or excessive), individuals have the right to ask search engines to remove links to personal information about them.
Researchers were able to identify 30–40% of delisted mass-media URLs and, in doing so, extrapolate the names of the persons who requested the delisting; in other words, they identified precisely who was seeking to be “forgotten”.
This was possible because while the RTBF requires search engines to delist links, it does NOT require newspaper articles or other source material to be removed from the Internet. RTBF doesn’t require erasure – it is, as I’ve pointed out in the past, merely a return to obscurity. So actually, the process worked exactly as expected.
Of course, the researchers claim that the law is flawed – but let’s examine the RTBF provision in the General Data Protection Regulation. Article 17’s Right to Erasure sets out a framework under which an individual may request from a data controller the erasure of personal data relating to them and abstention from further dissemination of such data, and may obtain from third parties the erasure of any links to, or copies or replications of, that data in listed circumstances. There are also situations set out that would override such a request and justify keeping the data online – legal requirements, freedom of expression, interests of public health, and the necessity of processing the data for historical, statistical and scientific purposes.
This is the context of the so-called “flaw” being trumpeted.
Again, just because a search engine removes links to materials does NOT mean it has removed the actual materials—it simply makes them harder to find. There’s no denying that this is helpful—a court decision or news article from a decade ago is difficult to find unless you know what you’re looking for, and without a helpful central search overview such things are more likely to remain buried in the past. One could consider this a partial return to the days of privacy through obscurity, but “obscurity” does not mean “impenetrable.” Yes, a team of researchers from New York University Tandon School of Engineering, NYU Shanghai, and the Federal University of Minas Gerais in Brazil was able to find some information. So too (in the dark ages before search engine indexing) could a determined searcher or team of searchers uncover information through hard work.
So is privacy-through-obscurity a flaw? A loophole? A weak spot? Or is it a practical tool that balances the benefits of online information availability with the privacy rights of individuals?
It strikes me that the RTBF is working precisely as it should.
The paper, entitled The Right to be Forgotten in the Media: A Data-Driven Study, is available at http://engineering.nyu.edu/files/RTBF_Data_Study.pdf. It will be presented at the 16th Annual Privacy Enhancing Technologies Symposium in Darmstadt, Germany, in July, and will be published in the proceedings.