Removing Unlawful Content Isn’t a Right to be Forgotten – It’s Justice

A federal court decision released 30 January 2017 has (re)ignited discussion of the “right to be forgotten” (RTBF) in Canada. 

The case revolved around the behaviour of Globe24h.com (the URL no longer appears to be available, though it is noteworthy that its Facebook page is still online), a website that republishes Canadian court and tribunal decisions. 

The publication of these decisions is not, in itself, inherently problematic.  Indeed, the Office of the Privacy Commissioner (OPC) has previously found that an organization (unnamed in the finding, but presumably CanLII or a similar site) had collected, used and disclosed court decisions for appropriate purposes pursuant to subsection 5(3) of PIPEDA.  The Commissioner determined that the company's purpose in republishing was to support the open courts principle by making court and tribunal decisions more readily available to Canadian legal professionals and academics.  He further determined that the company's subscription-based research tools and services did not undermine the balance between privacy and the open courts principle that had been struck by Canadian courts, nor was the operation of those tools inconsistent with the OPC’s guidance on the issue.  It is important to note that this finding relied heavily on the organization's decision NOT to allow search engines to index decisions within its database or to otherwise make them available to non-subscribers. 
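
To make that exclusion concrete: a site can keep crawlers away from its database with a robots.txt file and per-page noindex directives.  The snippet below is a purely illustrative sketch – the /decisions/ path is hypothetical, and the finding does not describe the unnamed organization's actual configuration:

    # robots.txt at the site root – asks compliant crawlers to stay out
    # of the decisions database (the /decisions/ path is hypothetical)
    User-agent: *
    Disallow: /decisions/

    <!-- per-page belt and suspenders: a robots meta tag in the HTML head -->
    <meta name="robots" content="noindex">

Both mechanisms depend on search engines choosing to honour them, which well-behaved crawlers do.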

In its finding, the OPC referenced another website – Globe24h.com – about which it had received multiple well-founded complaints.  Regarding Globe24h.com, which did allow search engines to index decisions, hosted commercial advertising, and charged a fee for the removal of personal information, the Commissioner found that:

  1. he did have jurisdiction over the (Romania-based) site, given its real and substantial connection to Canada;
  2. the site was not collecting, using and disclosing the information for exclusively journalistic purposes, and thus was not exempt from PIPEDA’s requirements;
  3. Globe24h’s purpose of making Canadian court and tribunal decisions available through search engines – which allows the sensitive personal information of individuals to be found by happenstance, by anyone, at any time, for any purpose – was NOT one that a reasonable person would consider appropriate in the circumstances; and
  4. although the information was publicly available, the site’s use was not consistent with the open courts principle for which it was originally made available, and thus PIPEDA’s requirement for knowledge and consent did apply to Globe24h.com.

Accordingly, he found the complaints well-founded.

From there, the complaint proceeded to Federal Court, with the Privacy Commissioner appearing as a party to the application.

The Federal Court concurred with the Privacy Commissioner: PIPEDA did apply to Globe24h.com; the site was engaged in commercial activity; and its purposes were not exclusively journalistic.  On reviewing the site's collection, use and disclosure of the information, the Court determined that the exclusion for publicly available information did not apply, and that Globe24h had contravened PIPEDA. 

Where it gets interesting is in the remedies granted by the Court.  Strongly influenced by the Privacy Commissioner’s submission, the Court:

  1. issued an order requiring Globe24h.com to correct its practices to comply with sections 5 to 10 of PIPEDA;
  2. relied upon s.16 of PIPEDA, which authorizes the Court to grant remedies addressing systemic non-compliance, to issue a declaration that Globe24h.com had contravened PIPEDA; and
  3. awarded damages in the amount of $5,000 and costs in the amount of $300.

The reason this is interesting is the explicit recognition by the Court that:

A declaration that the respondent has contravened PIPEDA, combined with a corrective order, would allow the applicant and other complainants to submit a request to Google or other search engines to remove links to decisions on Globe24h.com from their search results. Google is the principal search engine involved and its policy allows users to submit this request where a court has declared the content of the website to be unlawful. Notably, Google’s policy on legal notices states that completing and submitting the Google form online does not guarantee that any action will be taken on the request. Nonetheless, it remains an avenue open to the applicant and others similarly affected. The OPCC contends that this may be the most practical and effective way of mitigating the harm caused to individuals since the respondent is located in Romania with no known assets. [para 88]

It is this line of argument that has fuelled responses to the decision.  The argument is that, by explicitly linking its declaration and corrective order to the ability of claimants to request that search engines remove the content at issue from their results, the decision has created a de facto RTBF in Canada. 

With all due respect, I disagree.  A policy on removing content that a court has declared to be unlawful is not equivalent to a “right to be forgotten.”  RTBF, as originally set out, recognized that under certain conditions (i.e., where specific information is inaccurate, inadequate, irrelevant or excessive), individuals have the right to ask search engines to remove links to personal information about them.  In contrast, the issue here is not that the information is “inaccurate, inadequate, irrelevant or excessive” – rather, it is that the information has been declared UNLAWFUL. 

The RTBF provision of the General Data Protection Regulation – Article 17 – sets out circumstances in which a request for erasure would not be honoured because there are principles at issue that transcend RTBF and justify keeping the data online – legal requirements, freedom of expression, interests of public health, and the necessity of processing the data for historical, statistical and scientific purposes.

We are not talking here about an overarching right to control dissemination of these publicly available court records.  The importance of the open courts principle was explicitly addressed by both the OPC and the Federal Court, and weighed in making their determinations.  In so doing, the appropriate principled flexibility has been exercised – the very principled flexibility that is implicit in Article 17. 

I do not dispute that a policy conversation about RTBF needs to take place, nor that explicitly setting out parameters and principles would be of assistance going forward.  Perhaps the pending Supreme Court of Canada decision in Google v Equustek Solutions will provide that guidance. 

Regardless, the decision in Globe24h.com does not create a RTBF – rather, the Court exercises its power under PIPEDA to craft appropriate remedies that facilitate justice.

 

How About We Stop Worrying About the Avenue and Instead Focus on Ensuring Relevant Records are Linked?

Openness of information, especially when it comes to court records, is an increasingly difficult policy issue.  We have always struggled to balance the protection of personal information against the need for public information and for justice (and the courts that dispense it) to be transparent.  Increasingly dispersed and networked information makes this all the more difficult. 

In 1991’s Vickery v. Nova Scotia Supreme Court (Prothonotary), Justice Cory (writing in dissent, but in agreement with the Court on these statements) positioned the issue as inherently one of tension between the privacy rights of an acquitted individual and the importance of court information and records being open. 

…two principles of fundamental importance to our democratic society which must be weighed in the balance in this case.  The first is the right to privacy which inheres in the basic dignity of the individual.  This right is of intrinsic importance to the fulfilment of each person, both individually and as a member of society.  Without privacy it is difficult for an individual to possess and retain a sense of self-worth or to maintain an independence of spirit and thought.
The second principle is that courts must, in every phase and facet of their processes, be open to all to ensure that so far as is humanly possible, justice is done and seen by all to be done.  If court proceedings, and particularly the criminal process, are to be accepted, they must be completely open so as to enable members of the public to assess both the procedure followed and the final result obtained.  Without public acceptance, the criminal law is itself at risk.

Historically, the necessary balance has been arrived at less by policy negotiation than by physical and geographical limitations.  When one must physically attend the courthouse to search for and collect information from various sources, the time, expense and effort required function as their own form of protection.  As Elizabeth Judge has noted, however, “with the internet, the time and resource obstacles for accessing information were dramatically lowered. Information in electronic court records made available over the Internet could be easily searched and there could be 24-hour access online, but with those gains in efficiency comes a loss of privacy.”

At least arguably, part of what we have been watching play out with the Right to be Forgotten is a new variation on these tensions.  Access to these forms of information is increasingly easy and general – all it requires is a search engine and a name.  In return, news stories, blog posts, social media discussions and references to legal cases spill across the screen.  With RTBF and similar suggestions, we seek to limit this information cascade to that which is relevant and recent. 

This week saw a different strategy employed.  As part of the sentences for David and Collet Stephan – whose infant son died of meningitis due to their failure to access medical care for him when he fell ill – the Alberta court required that notice of the sentence be posted on Prayers for Ezekiel and any other social media sites maintained by the Stephans that deal with the subject of their family.  (NOTE: As of 6 July 2016, this order had not been complied with.)

Contrary to some, I do not believe that the requirement to post is akin to a sandwich board, nor that this is about shaming.  Rather, it seems to me that in an increasingly complex information spectrum, it makes sense to insist that the sentence be clearly and verifiably linked to information about the issue.  Indeed, I agree that

… it is a clear sign that the courts are starting to respond to the increasing power of social media, and to the ways that criminals can attract supporters and publicity that undermines faith in the legal system. It also points to the difficulties in upholding respect for the courts in an era when audiences are so fragmented that the facts of a case can be ignored because they were reported in a newspaper rather than on a Facebook post.

There has been (and continues to be) a chorus of complaints about RTBF and its supposed potential to frustrate (even censor) the right to KNOW.  Strangely, that same chorus does not seem to be raising its voice in celebration of this decision.  And yet… doesn’t requiring that conviction and sentence be attached to “news” of the original issue address many of the concerns raised by anti-RTBF forces? 

 

Data Schadenfreude and the Right to be Forgotten

Oh the gleeful headlines. In the news recently:

Researchers Uncover a Flaw in Europe’s Tough Privacy Rules

NYU Researchers Find Weak Spots in Europe’s “Right to be Forgotten” Data Privacy Law

We are hearing the triumphant cries of “Aha! See? We told you it was a bad idea!”

But what “flaw” did these researchers actually uncover?

The Right to be Forgotten (RTBF), as set out by the court, recognized that search engines are “data controllers” for the purposes of data protection rules, and that under certain conditions (i.e., where specific information is inaccurate, inadequate, irrelevant or excessive), individuals have the right to ask search engines to remove links to personal information about them. 

Researchers were able to identify 30-40% of delisted mass media URLs and in so doing extrapolate the names of the persons who requested the delisting—in other words, identify precisely who was seeking to be “forgotten”. 

This was possible because while the RTBF requires search engines to delist links, it does NOT require newspaper articles or other source material to be removed from the Internet.  RTBF doesn’t require erasure – it is, as I’ve pointed out in the past, merely a return to obscurity.  So actually, the process worked exactly as expected. 

Of course, the researchers claim that the law is flawed – but let’s examine the RTBF provision in the General Data Protection Regulation.  Article 17’s Right to Erasure sets out a framework under which an individual may require a data controller to erase personal data relating to them, to abstain from further disseminating such data, and to obtain from third parties the erasure of any links to, or copies or replications of, that data in listed circumstances.  There are also situations set out that would override such a request and justify keeping the data online – legal requirements, freedom of expression, interests of public health, and the necessity of processing the data for historical, statistical and scientific purposes.

This is the context of the so-called “flaw” being trumpeted. 

Again, just because a search engine removes links to materials that does NOT mean it has removed the actual materials—it simply makes them harder to find.  There’s no denying that this is helpful—a court decision or news article from a decade ago is difficult to find unless you know what you’re looking for, and without a helpful central search overview such things will be more likely to remain buried in the past.  One could consider this a partial return to the days of privacy through obscurity, but “obscurity” does not mean “impenetrable.”  Yes, a team of researchers from New York University Tandon School of Engineering, NYU Shanghai, and the Federal University of Minas Gerais in Brazil was able to find some information. So too (in the dark ages before search engine indexing) could a determined searcher or team of searchers uncover information through hard work. 
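
As a concrete (and, to be clear, entirely hypothetical) sketch of why delisting is obscurity rather than erasure: removing an entry from a search index deletes the pointer to a page, not the page itself:

    # Hypothetical sketch: delisting removes the index entry, not the source page.
    web_pages = {"https://example.com/old-story": "a decade-old news article"}
    search_index = dict(web_pages)            # the engine's copy of the pointers

    def delist(index, url):
        index.pop(url, None)                  # the pointer leaves the search results

    delist(search_index, "https://example.com/old-story")
    assert "https://example.com/old-story" not in search_index   # harder to find...
    assert "https://example.com/old-story" in web_pages          # ...but still online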

So is privacy-through-obscurity a flaw?  A loophole?  A weak spot?  Or is it a practical tool that balances the benefits of online information availability with the privacy rights of individuals? 

It strikes me that the RTBF is working precisely as it should.

The paper, entitled The Right to be Forgotten in the Media: A Data-Driven Study, is available at http://engineering.nyu.edu/files/RTBF_Data_Study.pdf.  It will be presented at the 16th Annual Privacy Enhancing Technologies Symposium in Darmstadt, Germany, in July, and will be published in the proceedings.

 

Speaking of the Right to be Forgotten, Could We Please Forget This Fearmongering?

In the wake of the original Right to be Forgotten (RTBF) decision, citizens had the opportunity to apply to Google for removal from its search index of information that was inadequate, irrelevant, excessive and/or not in the public interest.  Google says that since the decision it has received more than 250,000 requests, and that it has concurred with and acted upon 41.6% of them.

In France, even where Google accepted a request for delisting, it implemented the delisting only on specific geographical extensions of the search engine – primarily .fr (France), although in some cases other European extensions were included.  This strategy resulted in a duality whereby information that had been excluded from some search results was still available via Google.com and other geographic extensions.  Becoming aware of this, the President of CNIL (France’s data protection authority) formally gave notice to Google that it must delist information on all of its search engine domains.  In July 2015, Google filed an appeal of the order, citing the critiques that have become all-too-familiar – claiming that compliance would amount to censorship and damage the public’s right to information.

This week, on 21 September 2015, the President of CNIL rejected Google’s appeal for a number of reasons:

  • In order to be meaningful and consistent with the right as recognized, a delisting must be implemented on all extensions.  It is too easy to circumvent a RTBF that applies only to some extensions, and doing so creates a troubling situation in which informational self-determination becomes a variable right;

  • Rejecting the conflation of RTBF with information deletion, the President emphasized that delisting does NOT delete information from the internet.  Even while removed from search listings, the information remains directly accessible on the source website;

  • The presumption that the public interest is inherently damaged fails to acknowledge that the public interest is considered in the determination of whether to grant a particular request.  RTBF is not an absolute right – it requires a balancing of the interest of the individual against the public’s right to information; and

  • This is not a case where France is attempting to impose French law universally – rather, CNIL “simply requests full observance of European legislation by non European players offering their services in Europe.”

With the refusal of its (informal) appeal, Google is now required to comply with the original CNIL order.  Failure to do so will result in fines that begin in the $300,000 range but could rise as high as 2-5% of Google’s global operating costs.


The Balance Inherent in the Right to be Forgotten

Trust me, I’d love to stop writing about the RTBF.  I’m not even sure how it came to take up so much real estate on this blog and in media generally. Especially since, as I’ve said before, it isn’t really anything new!  Nevertheless, the RTBF continues to rankle as the original decision reverberates through search engine companies and various countries around the globe.

A New York Times article of 5 Aug 2015 sets out the original decision and examines the changes it has wrought, positing that the RTBF will ultimately spread beyond the EU’s boundaries and become normalized in multiple jurisdictions, including the US. 

Emma Llansó, a free expression scholar at the Center for Democracy and Technology, is quoted criticizing the RTBF within the context of the US, saying:

“When we’re talking about a broadly scoped right to be forgotten that’s about altering the historical record or making information that was lawfully public no longer accessible to people, I don’t see a way to square that with a fundamental right to access to information”

The article provides strong arguments on the other side as well.  Marc Rotenberg of the Electronic Privacy Information Center says that “global implementation of the fundamental right to privacy on the Internet would be a spectacular achievement” and a positive development for users.  Asked about the allegation that freedom of speech is compromised by the removal of information, he notes that there are ways to limit access to private information that do not conflict with free speech; in fact, Google already has a process for global removal of some identifiable private information – bank account numbers, social security numbers, and sexually explicit images uploaded without the subject’s consent (“revenge porn”) – that hasn’t attracted the same concerns. 

As for concerns about international implementation, Jonathan Zittrain of the Berkman Center for Internet and Society at Harvard reminds us that this too is already in practice – when Google receives a takedown notice for linking to copyright-infringing content, it removes those links from all of its sites across the world.

PERSPECTIVE

In any discussion of this issue it’s important to understand that RTBF was not intended to be an absolute right – rather, it is inherently a process of balancing competing interests.  Indeed, after the original RTBF decision, Google instituted a process by which individuals could make RTBF requests.  Google’s own data show that since the process was instituted in May 2014, roughly 41 percent of the one million requests it has received have been successful.  It is also worth noting that the information “removed” in successful requests doesn’t disappear – rather, the original source is simply no longer indexed by Google or shown in its search results.

A recent decision in British Columbia shows that, although the RTBF hasn’t been formally implemented in Canada, the balancing of rights is active and seems to be working out just fine.

Niemela v. Malamas arises from a situation where former clients allegedly made defamatory comments about Mr. Niemela on a variety of sites.  Although the comments ceased, Mr. Niemela found that his law practice was affected by these comments.  Accordingly, he began an action against those who he believed had made the comments and the sites upon which the comments were published.  He was successful in this claim, and thus obtained injunctions requiring the removal of 146 posts from various sites. 

Where it becomes particularly interesting, however, is that he also filed suit against Google for publishing defamatory statements about him because individuals gained access to the information through being able to view snippets of the comments in the Google search results. Google asked that this action be dismissed, and ultimately it was—because the court determined that on all the facts the “snippets” were the product of an algorithm and that there was nothing to indicate that Google was actively involved in publishing the statements.

It doesn’t end there, however.  When the first suit concluded, Google voluntarily removed links to the 146 posts from its Canadian search engine results.  Mr. Niemela was not satisfied with this – he wanted Google to remove the links from its search engine results worldwide. 

In considering this request, the court set out a three-part test that Mr. Niemela must meet:

  1. that there is strong evidence that the words are defamatory;
  2. that a failure to grant the injunction will result in irreparable harm; and
  3. that the balance of convenience favours granting the injunction.

On the facts, it was the opinion of the court that while there was strong evidence of defamation, the case failed at the second step: (i) the majority of searches on Mr. Niemela were made on Google Canada; (ii) it was not obvious that the damage Mr. Niemela alleged was caused by the defamatory comments at all; and (iii) such an order would not be internationally enforceable, since it ran contrary to US policy on defamation and freedom of speech.

This is a great example of how to approach RTBF – all the facts are considered in context to assess the role of the search engine results in perpetuating the injury, and to ascertain whether ON BALANCE an order to remove the information (and the extent of such removal) is warranted. 

In the end, what the RTBF requires is a balancing of the benefits of online information availability against the privacy of individuals. 

Really – does this strike you as compromising an important public record?  As undercutting freedom of expression?  Even as facilitating user vanity at the expense of public information?  Or is it the kind of approach that should be built into the process of digitizing and making public information about individuals available, in order to ensure that in our excitement about technological capacity we don’t compromise individual autonomy?

 

It’s obscurity, not apocalypse: All that the “right to be forgotten” decision has created is the right to ask to have information removed from search engine results.

The National Post recently carried a week of op-eds, all focused on responding to the “right to be forgotten” that was (allegedly) created by the Google Spain decision.  Some extremely well-known and widely respected experts have weighed in on the subject: 

Ann Cavoukian and Christopher Wolf: 

…while personal control is essential to privacy, empowering individuals to demand the removal of links to unflattering, but accurate, information arguably goes far beyond protecting privacy… The recent extreme application of privacy rights in such a vague, shotgun manner threatens free expression on the Internet. We cannot allow the right to privacy to be converted into the right to censor.

Brian Lee Crowley:

This ruling threatens to change the Internet from a neutral platform, on which all the knowledge of humanity might eventually be made available, to a highly censored network, in which, every seven seconds, another person may unilaterally decide that they have a right to be forgotten and to have the record of their past suppressed…We have a duty to remember, not to forget; a duty not to let the past go, simply because it is inconvenient or embarrassing.

Paula Todd:

Should there be exceptions to the principle of “let the public record stand”? Most countries already have laws that do just that — prohibiting and ordering the deletion of online criminal defamation, cyberabuse and images of child sexual abuse, for example. Google, and other search engines, do invent algorithms that position certain results more prominently. Surely a discussion about tweaking those algorithms would have been less draconian than this cyber censorship.

With all due respect to these experts, I cannot help but feel that each of them has missed the central point – pushed by the rhetoric about a “right to be forgotten” into responding to a hypothetical idea rather than the concrete reality of the decision.

It is not about mere reputation grooming.

It is not about suppressing or rewriting history.

It is not about silencing critics.

It is not about scrubbing clean the public record.

It is not about protecting people from the consequences of their actions.

Frankly, this ruling isn’t the creation of a new form of censorship or suppression – rather, it’s a return to what used to be.  The decision sets the stage for aligning new communications media with more traditional lifespans of information, and for restoring the eventual drawing of a curtain of obscurity over information as its timeliness fades. 

Facing the facts:

It is important to be clear that all the “right to be forgotten” decision has created is the right to ask to have information removed from search engine results.   

There is no guarantee that the information will be removed – in its decision the court recognized that while indeed there are situations where removal would be appropriate, each request requires a careful balancing of individual rights of informational self-determination against the public interest.

It is also worth pointing out that this is hardly the only recourse users have to protect and shape online privacy, identity, and reputation.  In an Atlantic article about the introduction of Facebook’s Graph Search, the authors comment that:

Online, obscurity is created through a combination of factors. Being invisible to search engines increases obscurity. So does using privacy settings and pseudonyms. Disclosing information in coded ways that only a limited audience will grasp enhances obscurity, too. Since few online disclosures are truly confidential or highly publicized, the lion's share of communication on the social web falls along the expansive continuum of obscurity: a range that runs from completely hidden to totally obvious.

Other ubiquitous privacy-protective techniques don’t tend to engender the same concerns as the “right to be forgotten”.  Nobody is ringing alarm bells equating privacy settings with censorship – indeed, we encourage the use of privacy settings as responsible online behaviour.  And while there are certainly concerns about the use of pseudonyms, those concerns are focused on accountability, not freedom of speech or access to information.  In fact, the use of pseudonyms is widely considered to facilitate freedom of speech, not prevent it. 

Bottom line:

I’m all for freedom of expression.  Not a fan of censorship either.  So I would like to make a heartfelt plea to the community of thinkers who focus on this area of emerging law, policy, and culture: exaggerations and overreactions don’t help clarify these issues and are potentially damaging in the long run.  A clear understanding of what this decision is —and what it is not— will be the best first step towards effective, balanced implementation. 

 

How to Profit From the Right to be Forgotten (Operators are Standing By!)

Search Engines are on Board

After setting up a request form, receiving tens of thousands of requests in the first day(s), and sending those requests through its review process, Google has now begun to remove information from its search results.  The court said Google had to do it, Google set up a process to do it, and that process is free and even relatively quick. 

Not the end of the issue, unfortunately.  Google isn’t the only search engine out there, which means that information may still appear in the results of other search engines.  Other search engines are said to be developing similar processes in order to comply with the court’s interpretation, so that may help. Ultimately, however, even if all commercial search engines adopt this protocol, there are still other sites that are themselves searchable. 

Searching within an individual site

This matters because having a link removed from search results doesn’t get rid of the information, it just makes it harder to find.  There’s no denying that this is helpful – a court decision or news article from a decade ago is difficult to discover unless you know what you’re looking for, and without a helpful central search overview such things will be more likely to remain buried in the past.  Some partial return to the days of privacy through obscurity, one might say. 

The Google decision was based on the precept that the actions of a search engine in constantly trawling the web meant that it did indeed collect, retrieve, record, organize, disclose and store information, and accordingly did fall into the category of data controller.  When an individual site allows users to search the content on that site, this same categorization does not apply.  Accordingly, individual sites will not be subject to the obligation (where warranted) to remove information from their own search results on request. 
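
To illustrate the distinction the Court drew, here is a toy sketch (entirely hypothetical – not any party's actual system).  The first function behaves like a search engine, retrieving third-party pages and storing copies in its own index; the second behaves like an individual site's internal search, querying only content the site already holds:

    # Toy illustration only: hypothetical functions, not any real system.
    from urllib.request import urlopen

    def crawl_and_index(url, index):
        """Search-engine style: collect, retrieve and store a third-party page."""
        with urlopen(url) as response:  # retrieve someone else's content
            index[url] = response.read().decode("utf-8", errors="replace")  # store a copy

    def site_search(own_records, term):
        """Individual-site style: search only content the site itself hosts."""
        return [url for url, text in own_records.items() if term in text]

Only the first pattern aggregates and redistributes other people's information – which is what pulled search engines, and not individual sites, into the category of data controller.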

If we take it as written that everything on the Internet is ultimately about either sex or money (and of course cats), then the big question is, of course: how can this be commodified?  And some sites have already figured that out. 

Here’s What We Can Offer You

Enter Globe24h, a self-described global database of public records: case law, notices and clinical trials.  According to the site, this data is collected and made available because:

[w]e believe information should be free and open. Our goal is to make law accessible for free on the Internet. Our website provides access to court judgements, tribunal decisions, statutes and regulations from many jurisdictions. The information we provide access to is public and we believe everyone should have a right to access it, easily and without having to hire a lawyer. The public has a legitimate interest in this information — for example, information about financial scams, professional malpractice, criminal convictions, or public conduct of government officials. We do not charge for this service so we pay for the operating costs of the website with advertising.

 

A laudable goal, right? 

The public records that are held by and searchable on the site contain personal information, and the site is careful to explain to users what rights they have over their information and how to exercise them.  Most notably, the site offers a clear and detailed explanation of how a user may request that their personal information be removed from the records – by mailing a letter that includes their personal information, the pinpoint location of the document(s) at issue, documentary proof of identity, a signature card, and an explanation of what information is to be removed and why.  This letter is sent to a designated address (and will be returned if any component of the requirements is not met), with a processing time of up to 15 days.  Such a request, it is noted, may also involve forwarding of the letter and its personal information to data protection authorities.

But Wait, There’s More!!

Despite its claim that it bears the operating costs itself (the implication being that it does so out of a deep commitment to access to information), the site does have a revenue stream.  

Yes, for the low low price of €19 per document, the site will waive all these formalities and get your information off those documents (and out of commercial search engine caches as well) within 48 hours.  Without providing a name, address, phone number, signature, or identification document.  No need to be authenticated, no embarrassing explanation of why you want the information gone, no risk of being reported to State authorities, the ease of using email and no risk of having your request ignored or fail to be acted upon.  It’s even done through PayPal, so it can theoretically be done completely anonymously. 

If the only way to get this information removed were to pay the fee, the site would fall foul of data protection laws, but that’s not the case here.  You don’t have to pay the money.  That said, the options are set up so that one choice seems FAR preferable to the other… and it just happens to be the one from which the site profits. 

There you have it – the commodification of the desire to be forgotten.   Expect to see more approaches like this one. 

My takeaway?  It’s not really possible to effectively manage what information is or isn’t available.  Removing information entirely, removing it from search engine results, redacting it from documents or annotating it in hopes of mitigating its effect – in the long run information is out there and, if it is accurate, is likely going to be found and become incorporated into a data picture/reputation of a given individual. 

 


Keeping it Real: Reputation versus the Right to be Forgotten

In the wake of the recent legal ruling in Spain, described popularly – albeit erroneously, in my opinion – as creating a “right to be forgotten”, Google has created a web form that allows people to request that certain information be excluded from search results.  More than 12,000 requests to remove personal data were submitted within the first 24 hours after Google posted the form, according to the company.  At one point on Friday, Google was getting 20 requests per minute. 

If it’s not really about being “forgotten”, what is at the heart of this decision?  In an article dated 30 May 2014, ABC News asserted that it’s about the right to remove “unflattering” information, and characterized the process as one of censorship, used by people to polish their reputations.  Framing the issue in this way, however, is dismissive and an oversimplification. 


BALANCING THE PUBLIC’S RIGHT TO KNOW

The decision does NOT provide carte blanche for anyone to force the removal of any information for any reason.  In fact, the Google form is clear that in order to make a request, the following information is required:

(a) Provide the URL for each link appearing in a Google search for your name that you request to be removed. (The URL can be taken from your browser bar after clicking on the search result in question).

(b) Explain, if not clear, why the linked page is about you (or, if you are submitting this form on behalf of someone else, the person named above).

(c)  Explain how this URL in search results is irrelevant, outdated or otherwise inappropriate. [Emphasis is mine.]

Even after this information is provided, there is no guarantee that the removal request will be approved.  Google has indicated that requests will be assessed to determine whether there is a public interest in the information at issue, such as facts about financial scams, professional malpractice, criminal convictions, or public conduct of government officials.  Although other search engines that function in the EU have not yet announced their own plans for complying with the decision, similar factors can be expected to be considered.

REPUTATION

Without a doubt, reputation is increasingly important in business, academia, politics, and in our culture as a whole. The availability of a wide range of data and information via search engines is an integral part of reviewing reputations and of making educated risk and trust assessments.  That said, those reputation judgments are most effective when the available information is reliable and relevant.

In other words, it’s not necessary to make absolutely any and all information available, but rather it’s important to ensure the accuracy of the information that is available.  To conflate informational self-determination with censorship is problematic, but to use such a characterization in order to defeat the basic precepts of data protection – which include accuracy and limiting information to that which is necessary – can actually be destructive both of individual rights and of the proper power of reputation.

This is a new area of policy and law touching on powerful information tools and very important personal rights. It’s imperative to get this right.