Removing Unlawful Content Isn’t a Right to be Forgotten – It’s Justice

A federal court decision released 30 January 2017 has (re)ignited discussion of the “right to be forgotten” (RTBF) in Canada. 

The case revolved around the behaviour of Globe24h.com (the URL does not appear to be currently available, but it is noteworthy that their Facebook page is still online), a website that republishes Canadian court and tribunal decisions. 

The publication of these decisions is not, in itself, inherently problematic. Indeed, the Office of the Privacy Commissioner (OPC) has previously found that an organization (unnamed in the finding, but presumably CanLII or a similar site) had collected, used and disclosed court decisions for appropriate purposes pursuant to subsection 5(3) of PIPEDA. The Commissioner determined that the company's purpose in republishing was to support the open courts principle by making court and tribunal decisions more readily available to Canadian legal professionals and academics. The Commissioner further found that the company's subscription-based research tools and services did not undermine the balance between privacy and the open courts principle that had been struck by Canadian courts, nor was the operation of those tools inconsistent with the OPC's guidance on the issue. It is important to note that this finding relied heavily on the organization's decision NOT to allow search engines to index decisions within its database or otherwise make them available to non-subscribers.
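That design choice has a mundane technical expression: whether a search engine surfaces a page usually depends on whether the site invites crawlers to index it at all. As a minimal, hypothetical sketch (the domain and paths are illustrative, not taken from CanLII or the finding), Python's standard library shows how a robots.txt exclusion looks from a crawler's point of view:

```python
from urllib import robotparser

# Hypothetical robots.txt for a decisions database that keeps its content
# available to subscribers but asks search engines not to index it:
#
#   User-agent: *
#   Disallow: /decisions/
#
rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /decisions/",
])

# A well-behaved crawler consults these rules before indexing a page.
print(rp.can_fetch("Googlebot", "https://example.org/decisions/2015-abc-123.html"))  # False
print(rp.can_fetch("Googlebot", "https://example.org/about.html"))                   # True
```

Globe24h.com, as discussed below, made the opposite choice and left decisions wide open to crawlers.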

In its finding, the OPC references another website – Globe24h.com – about which it had received multiple well-founded complaints.  Regarding Globe24h.com, which allowed search engines to index decisions, hosted commercial advertising, and charged a fee for the removal of personal information, the Commissioner found that:

  1. he did have jurisdiction over the (Romanian-based) site, given its real and substantial connection to Canada;
  2. the site was not collecting, using and disclosing the information for exclusively journalistic purposes and thus was not exempt from PIPEDA's requirements;
  3. Globe24h's purpose of making Canadian court and tribunal decisions available through search engines – which allows the sensitive personal information of individuals to be found by happenstance, by anyone, anytime, for any purpose – was NOT one that a reasonable person would consider to be appropriate in the circumstances; and
  4. although the information was publicly available, the site's use was not consistent with the open courts principle for which it was originally made available, and thus PIPEDA's requirement for knowledge and consent did apply to Globe24h.com.

Accordingly, he found the complaints well-founded.

From there, the complaint proceeded to Federal Court, with the Privacy Commissioner appearing as a party to the application.

The Federal Court concurred with the Privacy Commissioner that PIPEDA did apply to Globe24h.com; that the site was engaged in commercial activity; and that its purposes were not exclusively journalistic.  On reviewing its collection, use and disclosure of the information, the Court determined that the exclusion for publicly available information did not apply, and that Globe24h had contravened PIPEDA.

Where it gets interesting is in the remedies granted by the Court.  Strongly influenced by the Privacy Commissioner’s submission, the Court:

  1. issued an order requiring Globe24h.com to correct its practices to comply with sections 5 to 10 of PIPEDA;
  2. relied upon s. 16 of PIPEDA, which authorizes the Court to grant remedies to address systemic non-compliance, in order to issue a declaration that Globe24h.com had contravened PIPEDA; and
  3. awarded damages in the amount of $5000 and costs in the amount of $300.

The reason this is interesting is the explicit recognition by the Court that:

A declaration that the respondent has contravened PIPEDA, combined with a corrective order, would allow the applicant and other complainants to submit a request to Google or other search engines to remove links to decisions on Globe24h.com from their search results. Google is the principal search engine involved and its policy allows users to submit this request where a court has declared the content of the website to be unlawful. Notably, Google’s policy on legal notices states that completing and submitting the Google form online does not guarantee that any action will be taken on the request. Nonetheless, it remains an avenue open to the applicant and others similarly affected. The OPCC contends that this may be the most practical and effective way of mitigating the harm caused to individuals since the respondent is located in Romania with no known assets. [para 88]

It is this line of argument that has fed response to the decision.  The argument is that, by explicitly linking its declaration and corrective order with the ability of claimants to request that search engines remove the content at issue from their results, the decision has created a de facto RTBF in Canada.

With all due respect, I disagree.  A policy on removing content that a court has declared to be unlawful is not equivalent to a “right to be forgotten.”  RTBF, as originally set out, recognized that under certain conditions (i.e., where specific information is inaccurate, inadequate, irrelevant or excessive), individuals have the right to ask search engines to remove links to personal information about them.  In contrast, the issue here is not that the information is “inaccurate, inadequate, irrelevant or excessive” – rather, it is that the information has been declared UNLAWFUL. 

The RTBF provision of the General Data Protection Regulation – Article 17 – sets out circumstances in which a request for erasure would not be honoured because there are principles at issue that transcend RTBF and justify keeping the data online – legal requirements, freedom of expression, interests of public health, and the necessity of processing the data for historical, statistical and scientific purposes.

We are not talking here about an overarching right to control dissemination of these publicly available court records.  The importance of the open court principle was explicitly addressed by both the OPC and the Federal Court, and weighed in making their determinations.  In so doing, the appropriate principled flexibility has been exercised – the very principled flexibility that is implicit in Article 17.

I do not dispute that a policy conversation about RTBF needs to take place, nor that explicitly setting out parameters and principles would be of assistance going forward.  Perhaps the pending Supreme Court of Canada decision in Google v Equustek Solutions will provide that guidance. 

Regardless, the Globe24h.com decision does not create a RTBF – rather, the Court exercises its power under PIPEDA to craft appropriate remedies that facilitate justice.

 

Police Bodycams: Crossing the Line from Accountability to Shaming

 

Police bodycams are an emerging high-profile tool in law enforcement upon which many hopes for improved oversight, accountability, even justice are pinned.

When it comes to police bodycams, there are many perspectives:

  • Some celebrate them as an accountability measure, almost an institutionalized sousveillance.
  • For others, they’re an important new contribution to the public record.
  • And where they are not included in the public record, they can at least serve as internal documents, subject to Access to Information legislation.

These are all variations on a theme – the idea that use of police bodycams and their resulting footage are about public trust and police accountability.

But what happens when they’re used in other ways?

In Spokane, Washington, a decision was recently made to use bodycam footage for the purpose of shaming and punishment.  In this obviously edited footage, Sgt. Eric Kannberg deals calmly with a belligerent drunk, using de-escalation techniques even after the confrontation gets physical.  Ultimately, rather than meting out the typical visit to the drunk tank, the officer opts to proceed via a misdemeanor charge and the ignominy of having the footage posted to Spokane P.D.'s Facebook page. The implications of this approach in terms of privacy, dignity, and basic humanity are far-reaching.

The Office of the Privacy Commissioner of Canada has issued Guidance for the Use of Body-Worn Cameras by Law Enforcement, guidance that strives to balance privacy and accountability. The Guidelines include:

Use and disclosure of recordings

  • The circumstances under which recordings can be viewed. Viewing should only occur on a need-to-know basis; if there is no suspicion of illegal activity having occurred and no allegations of misconduct, recordings should not be viewed.
  • The purposes for which recordings can be used and any limiting circumstances or criteria, for example, excluding sensitive content from recordings being used for training purposes.
  • Defined limits on the use of video and audio analytics.
  • The circumstances under which recordings can be disclosed to the public, if any, and parameters for any such disclosure. For example, faces and identifying marks of third parties should be blurred and voices distorted wherever possible.
  • The circumstances under which recordings can be disclosed outside the organization, for example, to other government agencies in an active investigation, or to legal representatives as part of the court discovery process.

Clearly, releasing footage in order to shame an individual would not fall within these parameters. 

After the posted video garnered hundreds of thousands of views, its subject is now threatening to sue.  He is supported by the ACLU, which expressed concerns about both the editing and the release of the footage. 

New technologies offer increasingly powerful new tools for policing.  They may also intersect with old strategies of social control such as gossip and community shaming.  The challenge – or at least an important challenge – relates to whether those intersections should be encouraged or disrupted.

As always, a fresh examination of the privacy implications precipitated by the implementation of new technology is an important step as we navigate towards new technosocial norms.

Predictive? Or Reinforcing Discriminatory and Inequitable Policing Practices?

UPTURN released its report on the use of predictive policing on 31 August 2016.  

The report, entitled “Stuck in a Pattern:  Early Evidence on Predictive Policing and Civil Rights” reveals a number of issues both with the technology and its adoption:

  • Lack of transparency about how the systems work
  • Concerns about the reliance on historical crime data, which may perpetuate inequities in policing rather than provide an objective base for analysis (see the sketch below)
  • Over-confidence on the part of law enforcement and the courts in the accuracy, objectivity and reliability of information produced by these systems
  • Aggressive enforcement as a result of (over)confidence in the data produced by these systems
  • Lack of auditing or outcome tracking to assess system performance and reliability

The report's authors surveyed the 50 largest police forces in the USA and ascertained that at least 20 of them were using a “predictive policing system”, with another 11 actively exploring options to do so.  In addition, they note that “some sources indicate that 150 or more departments may be moving toward these systems with pilots, tests, or new deployments.”
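To make the historical-data concern concrete, here is a deliberately simplified sketch (my own illustration, not drawn from the report or any vendor's software): two areas with identical underlying offence rates, where patrol hours are allocated in proportion to previously recorded incidents. Because recorded crime largely reflects where officers are sent to look, the initial disparity in the data never washes out.

```python
import random

random.seed(42)

# Hypothetical illustration: two areas with the SAME underlying offence rate,
# but Area A starts with more *recorded* crime because it was patrolled more.
TRUE_RATE = 0.3                   # identical real offence rate per patrol-hour
PATROL_HOURS = 100                # total patrol hours allocated each "week"
recorded = {"A": 60, "B": 40}     # historical recorded incidents (biased by past patrols)

for week in range(1, 11):
    total = sum(recorded.values())
    # "Predictive" allocation: patrol each area in proportion to its recorded history.
    allocation = {area: round(PATROL_HOURS * count / total) for area, count in recorded.items()}
    for area, hours in allocation.items():
        # Recorded incidents scale with how hard you look, not with any real difference.
        recorded[area] += sum(random.random() < TRUE_RATE for _ in range(hours))
    share_a = recorded["A"] / sum(recorded.values())
    print(f"week {week:2d}: recorded A={recorded['A']:4d}  B={recorded['B']:4d}  A's share={share_a:.0%}")

# Despite identical true rates, Area A's share of recorded crime stays at roughly 60%:
# the historical bias is preserved, not corrected, by the feedback loop.
```

Real deployments are far more complex, but this basic dynamic, in which enforcement-driven data is fed back as though it measured crime itself, is the one the report and the statement below flag.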

Concurrent with the release of the report, a number of privacy, technology and civil rights organizations released a statement setting forth the following arguments (and expanding upon them).

  1. A lack of transparency about predictive policing systems prevents a meaningful, well-informed public debate.
  2. Predictive policing systems ignore community needs.
  3. Predictive policing systems threaten to undermine the constitutional rights of individuals.
  4. Predictive policing systems are primarily used to intensify enforcement rather than to meet human needs.
  5. Police could use predictive tools to identify which officers might engage in misconduct, but most departments have not done so.
  6. Predictive policing systems are failing to monitor their racial impact.

Signatories of the statement included:

  • The Leadership Conference on Civil and Human Rights
  • 18 Million Rising
  • American Civil Liberties Union
  • Brennan Center for Justice
  • Center for Democracy & Technology
  • Center for Media Justice
  • Color of Change
  • Data & Society Research Institute
  • Demand Progress
  • Electronic Frontier Foundation
  • Free Press
  • Media Mobilizing Project
  • NAACP
  • National Hispanic Media Coalition
  • Open MIC (Open Media and Information Companies Initiative)
  • Open Technology Institute at New America
  • Public Knowledge

 

The Right(s) to One’s Own Body

In July, police approached a computer engineering professor in Michigan to assist them with unlocking a murder victim’s phone by 3D-printing the victim’s fingerprints. 

It is a well-established principle of law that ‘there is no property in a corpse.’ This means that the law does not regard a corpse as property protected by rights.  So hey, why not, right? 

There is even an easy argument to be made that this is in the public interest.  Certainly, that seems to be how Professor Anil Jain (to whom the police made the request) feels: “If we can assist law enforcement that’s certainly a good service we can do,” he says.   

Marc Rotenberg, President of the Electronic Privacy Information Centre (EPIC) notes that if the phone belonged to a crime suspect, rather than a victim, police would be subject to a Supreme Court ruling requiring them to get a search warrant prior to unlocking the phone—with a 3D-printed finger or otherwise.

I’ve got issues with this outside the victim/suspect paradigm though. 

For instance, I find myself wondering about the application of this to live body parts. 

I’ve always been amused by the R v Bentham case, from the UK House of Lords in 2005. Bentham broke into a house to commit robbery and, in the course of doing so, used his fingers in his pocket to make a gun shape.  He was arrested.  Though he was originally convicted of possessing a firearm or imitation thereof, that conviction was overturned on the basis that it wasn’t possible for him to “possess” part of his own body.  But…if you can’t “possess” your own body, why wait for death before the State makes a 3D copy of it for its own purposes?

And…we do have legislation about body parts, both live and dead – consider the regulation of organ donation and especially payment for organs.  Consider too the regulation of surrogacy, and of new reproductive technologies. 

Maybe this is a new area to ponder – it doesn’t fit neatly into existing jurisprudence and policy around the physical body.  The increasing use of biometric identifiers to protect personal information inevitably raises new issues that must be examined. 

UPDATE:  It turns out that the 3D printed fingerprint replica wasn’t accurate enough to unlock the phone.  Undeterred, law enforcement finally used a 2D replica on conductive paper, with the details enhanced/filled in manually.  This doesn’t really change the underlying concern, does it? 

How About We Stop Worrying About the Avenue and Instead Focus on Ensuring Relevant Records are Linked?

Openness of information, especially when it comes to court records, is an increasingly difficult policy issue.  We have always struggled to balance the protection of personal information against the need for public information and for justice (and the courts that dispense it) to be transparent.  Increasingly dispersed and networked information makes this all the more difficult. 

In 1991’s Vickery v. Nova Scotia Supreme Court (Prothonotary), Justice Cory (writing in dissent, but in agreement with the Court on these statements) positioned the issue as being inherently about the tension between the privacy rights of an acquitted individual and the importance of court information and records being open.

…two principles of fundamental importance to our democratic society which must be weighed in the balance in this case.  The first is the right to privacy which inheres in the basic dignity of the individual.  This right is of intrinsic importance to the fulfilment of each person, both individually and as a member of society.  Without privacy it is difficult for an individual to possess and retain a sense of self-worth or to maintain an independence of spirit and thought.
The second principle is that courts must, in every phase and facet of their processes, be open to all to ensure that so far as is humanly possible, justice is done and seen by all to be done.  If court proceedings, and particularly the criminal process, are to be accepted, they must be completely open so as to enable members of the public to assess both the procedure followed and the final result obtained.  Without public acceptance, the criminal law is itself at risk.

Historically, the necessary balance has been arrived at less by policy negotiation than by physical and geographical limitations.  When one must physically attend the court house to search for and collect information from various sources, the time, expense and effort required function as their own form of protection.  As Elizabeth Judge has noted, however, “with the internet, the time and resource obstacles for accessing information were dramatically lowered. Information in electronic court records made available over the Internet could be easily searched and there could be 24-hour access online, but with those gains in efficiency comes a loss of privacy.”

At least arguably, part of what we have been watching play out with the Right to be Forgotten is a new variation of these tensions.  Access to these forms of information is increasingly easy and widely available – all it requires is a search engine and a name.  In return, news stories, blog posts, social media discussions and references to legal cases spill across the screen.  With RTBF and similar suggestions, we seek to limit this information cascade to that which is relevant and recent.

This week saw a different strategy employed.  As part of the sentences for David and Collet Stephan – whose infant son died of meningitis due to their failure to access medical care for him when he fell ill – the Alberta court required that notice of the sentence be posted on Prayers for Ezekiel and any other social media sites maintained by and dealing with the subject of their family.  (NOTE:  As of 6 July 2016, this order has not been complied with).

Contrary to some, I do not believe that the requirement to post is akin to a sandwich board, nor that this is about shaming.  Rather, it seems to me that, in an increasingly complex information spectrum, it is about insisting that the sentence be clearly and verifiably linked to information about the underlying issue.  I agree that

… it is a clear sign that the courts are starting to respond to the increasing power of social media, and to the ways that criminals can attract supporters and publicity that undermines faith in the legal system. It also points to the difficulties in upholding respect for the courts in an era when audiences are so fragmented that the facts of a case can be ignored because they were reported in a newspaper rather than on a Facebook post.

There has been (and continues to be) a chorus of complaints about RTBF and its supposed potential to frustrate (even censor) the right to KNOW.  Strangely, that same chorus does not seem to be raising its voice in celebration of this decision.  And yet…doesn’t requiring that conviction and sentence be attached to “news” of the original issue address many of the concerns raised by anti-RTBF forces?

 

Social Media at the Border: Call for Comments: Until 22 August 2016

The US Customs and Border Protection Agency is proposing to add new fields to the form people fill out when entering/leaving the country—fields where travelers would voluntarily enter their social media contact information.  The forms would even list social media platforms of interest in order to make it easier to provide the information.

This raises serious concerns. Some might ask, how can this be controversial if it’s voluntary?  If someone doesn’t want to share the info, they’ll say, then don’t. Case closed.

Unfortunately, it isn’t that simple. This initiative raises some serious questions:

Is it really voluntary?

Are individuals likely to understand that provision of this information is, in fact, voluntary?  If this becomes part of the standard customs declaration form, how many people will just fill it out, assuming that like the rest of the information on the form, it is mandatory? 

Is the consent informed?

Fair information principles require that before people can meaningfully consent to the collection of personal data, they need to understand the answers to the following questions: Why is the information being collected?  To what uses will it be put, and with whom will it be shared?  We have to ask: will the answers to these questions be known?  Will they be shared and visible?  Will they be drawn to the attention of travelers?

Can such consent be freely given?

Even in a best-case scenario where the field is clearly marked as voluntary and the necessary information about purposes is provided, can such indicia really overrule our instinctive fear/understanding that failing to “volunteer” such information can be an invitation for increased scrutiny?

Is it relevant?

Even if the problem of mandatory “volunteering” of information is addressed, what exactly is the point?  Is this information in some way relevant?  It is suggested that this initiative is the result of

… increasing pressure to scrutinize social media profiles after the San Bernardino shooting in December of last year. One of the attackers had posted a public announcement on Facebook during the shooting, and had previously sent private Facebook messages to friends discussing violent attacks. Crucially, the private messages were sent before receiving her visa. That news provoked some criticism, although investigators would have needed significantly more than a screen name to see the messages.

If this is meant to be a security or surveillance tool,  is it likely to be effective as such? Will random trawling of social network participation—profiling based on profiles—truly yield actionable intelligence?

Here’s the problem: every individual’s social media presence is inherently performative.  In order to accurately interpret interactions within online social media spaces, it is imperative to recognize that these utterances, performances, and risks are undertaken within a particular community and with a view to acquiring social capital within that particular community.

Many will ask, if information is public why worry about protections?  Because too often issues of the accuracy, reliability or truthfulness of information in these various “publics” are not considered when defaulting to presumptive publicness as justification. All such information needs to be understood in context.

Context is even more crucial when such information is being consulted and used in various profiling enterprises, and especially so when it is part of law enforcement or border security. There is a serious risk of sarcasm, artistic expression, mere frustration or hyperbole resulting in the criminalization of individuals who are thoughtless (or indeed simply not thinking along the lines preferred by law enforcement agencies) rather than dangerous. 

The call for comments contains extensive background, but the summary they provide is simple:

U.S. Customs and Border Protection (CBP) of the Department of Homeland Security will be submitting the following information collection request to the Office of Management and Budget (OMB) for review and approval in accordance with the Paperwork Reduction Act: CBP Form I-94 (Arrival/Departure Record), CBP Form I-94W (Nonimmigrant Visa Waiver Arrival/Departure), and the Electronic System for Travel Authorization (ESTA). This is a proposed extension and revision of an information collection that was previously approved. CBP is proposing that this information collection be extended with a revision to the information collected. This document is published to obtain comments from the public and affected agencies.

They are calling for comments. You have until 22 August 2016 to let them hear yours.  https://federalregister.gov/a/2016-14848

When the “Child” in “Child Pornography” is the Child Pornographer

A US decision this month found that a 17-year-old who sent a picture of his own erect penis was guilty of the offence of second degree dealing in depictions of a minor engaged in sexually explicit conduct.

The person in question was already serving a Special Sex Offender Dispositional Alternative (SSODA) as the result of an earlier adjudication for communicating with a minor for immoral purposes when he began harassing one of his mother’s former employees, a 22-year-old single mother with an infant daughter.

That harassment began with telephone calls making sexual sounds or asking sexual questions. On the afternoon of June 2, 2013, she received two text messages: one with a picture of an erect penis, and the other with the message, "Do u like it babe? It's for you. And for Your daughter babe."

The appeal was focused on a couple of questions:

Was charging him with this offence a violation of his freedom of speech?

Key to the reasoning here is the recognition that minors have no greater right than adults to distribute sexually explicit materials involving minors.  To interpret the statute differently, in the opinion of the court, would render the statute meaningless.

The First Amendment does not consider child pornography a form of protected expression. There is no basis for creating a right for minors to express themselves in such a manner, and, therefore, no need to place a limiting construction on a statute that does not impinge on a constitutional right. Accordingly, we conclude that the dealing in depictions of minors statute does not violate the First Amendment when applied to minors producing or distributing sexually explicit photographs of themselves.

Was the offence too vaguely worded?   

The argument here is simple – would a reasonable person really think that sending a photo of his own genitals would constitute the crime of child pornography?  Again, the court deals with this handily, finding the statute wording to be clear.  Whether many teens engage in sexting is immaterial – the test isn’t whether many people aren’t following the law, but rather whether they are unable to understand it.  

Nothing in the text of the statute suggests that there are any exceptions to its anti-dissemination or anti-production language. The statute is aimed at eliminating the creation and distribution of images of children engaged in explicit sexual behavior. It could hardly be any plainer and does not remotely suggest there is an exception for self-produced images.

Finally, the ACLU made arguments on policy grounds.

Was it irrational or counterintuitive that the subject of the photo could also be guilty of its distribution?  The court thinks not – there is no requirement for a specific identified victim, because the focus is on the traffic in images. 

Another policy issue raised was the concern about the application of such a precedent to a case of teens “sexting” each other.  This could potentially have been a stronger policy position had this *been* a case of sexting – but it was not.  This wasn’t communication between equals – it was harassment. 

Is This a Problem?

Clearly, dick pics are ubiquitous these days.  Is this decision an overreaction?  No. It’s not.  Know what else is ubiquitous these days?  Harassment and hate directed at women in online spaces (and offline). 

Reasonable people have raised concerns about the inclusion of registering as a sex offender as part of the sentence. 

To be clear, the sentence was time served, and registration as a sex offender. 

  • Registration of an individual who was already in treatment (thus far ineffective) for communicating with a minor for immoral purposes. 
  • Who had unrelated pending charges (dismissed by agreement of the parties) for indecent exposure.  
  • And for behaviour that was part of a campaign of sexual harassment. 

Frankly, in that context, the inclusion of registration as a sex offender is neither unusual nor uncalled for.

 

Should corporations *really* be the arbiters of free speech?

Facebook, Twitter, YouTube and Microsoft – in partnership with the European Commission – have unveiled a new code of conduct regarding hate speech.  This commitment is part of the response to the Brussels terrorist attacks, and is explicitly targeted at countering what can best be described as “terrorist propaganda”.

Hate speech, for these purposes, is set out in Framework Decision 2008/913/JHA of 28 November 2008, and is focussed on “racism and xenophobia”, which are recognized as “direct violations of the principles of liberty, democracy, respect for human rights and fundamental freedoms and the rule of law, principles upon which the European Union is founded and which are common to the Member States.”

Article 1 of the Framework Decision sets out the offences:

(a)  publicly inciting to violence or hatred directed against a group of persons or a member of such a group defined by reference to race, colour, religion, descent or national or ethnic origin;

(b)  the commission of an act referred to in point (a) by public dissemination or distribution of tracts, pictures or other material;

(c)  publicly condoning, denying or grossly trivialising crimes of genocide, crimes against humanity and war crimes as defined in Articles 6, 7, and 8 of the Statute of the International Criminal Court, directed against a group of persons or a member of such a group defined by reference to race, colour, religion, descent or national or ethnic origin when the conduct is carried out in a manner likely to incite to violence or hatred against such a group or a member of such a group;

(d)  publicly condoning, denying or grossly trivialising the crimes defined in Article 6 of the Charter of the International Military Tribunal appended to the London Agreement of 8 August 1945, directed against a group of persons or a member of such a group defined by reference to race, colour, religion, descent or national or ethnic origin when the conduct is carried out in a manner likely to incite to violence or hatred against such a group or a member of such a group;

Under the Code of Conduct, these technology and social media companies commit to reviewing and acting upon notifications for removal of hate speech—removing or disabling access to such content within 24 hours. 

They also commit to educating and raising awareness with their users about the types of content not permitted under these rules and community guidelines.

Call me a cynic, but while I applaud the idea, I don’t have a lot of faith in its implementation.  We’ve witnessed years of vitriol and hatred based on sex, gender, gender identity and expression, and sexual orientation play out online, without much progress in disrupting or addressing it. Certainly the various platforms and companies haven’t been particularly effective in either educating or protecting users. Even when reporting tools and standards are in place, their application has tended to be fairly arbitrary and unreliable.

Apparently I’m not the only one with concerns about this – European Digital Rights (EDRi) and Access Now released a contemporaneous statement announcing the “…decision not to take part in future discussions and confirming that we do not have confidence in the ill-considered ‘code of conduct’ that was agreed.”

EDRi and Access Now’s concerns are not with effectiveness – they cut deeper, questioning both the process by which the code was developed, and the effect of the code:

  • creation of the code of conduct took place in a non-democratic, non-representative way; 
  • the “code of conduct” downgrades the law to a second-class status, behind the “leading role” of private companies that are being asked to arbitrarily implement their terms of service; 
  • the project as set out seems to exploit unclear liability rules for companies; and 
  • there are serious risks for freedom of expression since legal but controversial content could be deleted as a result of this “voluntary” and unaccountable take-down mechanism.

The two organizations emphasize that their separation from the project (and process) should not be construed as indicating a lack of commitment to the underlying aims – as they state:

[c]ountering hate speech online is an important issue that requires open and transparent discussions to ensure compliance with human rights obligations. This issue remains a priority for our organisations and we will continue working for the development of transparent, democratic frameworks

How do we do this going forward?

Must we keep technology companies on the boundaries of the protection battles?  Or keep their efforts separate from those of the law?

Could tech companies work with (and within) the law?  Can they do so without effectively becoming agents of the state? 

And no matter how we organize ourselves, how can we make such codes and commitments more than lip service – how can we create and encourage efficacy in their application?  Perhaps more to the point, how do we inculcate a real understanding of the price and prevalence of hate speech (of all sorts) in these spaces such that strategies and solutions are developed with an accurate understanding of the issue? 

It IS time to address issues of hate speech and risk/danger online.  It is also time, however, to do so appropriately – to go back to the drawing board and, via broad consultation, develop and implement an authoritative, transparent and enforceable process.  One that, at a minimum:

  • Recognizes the key role(s) of privacy
  • Identifies civil rights and civil liberties as key issues
  • Builds transparency and accountability into both the development process and into whatever strategy is ultimately arrived at 
  • Ensures a balanced process in which public sector, private sector, and civil society voices are heard.

So…let’s start by establishing a multi-stakeholder engagement process tasked with defining the parameters and needs of hate speech protection and developing attendant best practices for privacy, accountability and transparency within that process.  Once that design framework is agreed to, it will be much clearer how best to implement the process, and how to ensure that it is appropriately balanced against concerns around freedom of expression.

Oh, and while we're at it?  If we're going to finally come up with an effective way to address these issues, then let's include sex, gender, gender identity and expression, and sexual orientation in our definition of hate speech.  We know we need to!

Where and When is it Reasonable to Expect Your Messages to be Private (and what protection does it offer anyway)?

When you text message someone, do you have a reasonable expectation of privacy in that message?

R. v. Pelucco was a 2015 BC Court of Appeal decision involving a warrantless search of text messages found in a cell phone.  The question was whether the sender had a reasonable expectation of privacy in those messages.  The majority concluded that when legal and social norms were applied, a sender would ordinarily have a reasonable expectation that the messages would remain private. Justice Groberman, writing for the majority, concluded that the lack of control once the message had been sent was a relevant factor in assessing objective reasonableness, but not a determinative one.

I’ve written about this decision previously here.


What about when you message someone privately using an online platform? 

In R v Craig, released 11 April 2016, police obtained private online messages between Mr. Craig, E.V. and several of E.V.'s friends from Nexopia, a Canada-based social network site targeted at teens.

Mr. Craig (22) and E.V. (13) originally met via (private) messaging each other on Nexopia.  Messaging continued, as did offline meetings that ultimately resulted in him (illegally) providing her with alcohol and having sexual relations with her (to which she could not legally consent, being 13).  When two girls from E.V.'s school overheard a conversation with E.V. regarding her sexual encounter with Mr. Craig, they reported it to a school counsellor. The counsellor subsequently called the police, and the police investigation commenced. He was charged and convicted of sexual touching of a person under the age of 16, sexual assault, and internet luring (communicating with a person under the age of 16 years for the purpose of facilitating the commission of an offence under s. 151 with that person).

When the police interviewed E.V., she provided Mr. Craig’s name and logged on to her Nexopia account to print out messages between them, including a photo of Mr. Craig.    A friend of E.V. also provided pages from her own account containing messages with Mr. Craig in which he admitted to having sex with E.V. 

Police obtained a search warrant for messages on the Nexopia servers under the usernames of E.V., several of her friends, and Mr. Craig.  A number of the documents seized from Nexopia were not disclosed to the defence pursuant to a Criminal Code provision presumptively forbidding production of complainant or witness records when the charge is sexual assault or sexual interference.  A “record” is one that contains “personal information for which there is a reasonable expectation of privacy.”

Craig argued that there was no reasonable expectation of privacy in those messages – that the messages were sent, received and stored on Nexopia’s servers, and thus had never been private.  Accordingly, the defence should be able to access them.

The threshold for reasonable expectation was articulated as the expectations of the sender at the time the message was sent.  In this case, the messages were “personal communications between friends and confidantes, and were not intended for wider circulation beyond the small circle of friends.”  Accordingly, there was a reasonable expectation of privacy in the messages and they were protected from having to be disclosed to Mr. Craig.

Mr. Craig then sought to exert his own reasonable expectation of privacy over (some of) the Nexopia messages.  The trial judge disagreed, finding that Mr. Craig had no reasonable expectation of privacy in the messages, even those he had authored and sent himself because he had no control over them after sending. 

On appeal, the “control” test was rejected:

While recognizing that electronic surveillance is a particularly serious invasion of privacy, the reasoning is of assistance in this case. Millions, if not billions, of emails and “messages” are sent and received each day all over the world. Email has become the primary method of communication. When an email is sent, one knows it can be forwarded with ease, printed and circulated, or given to the authorities by the recipient. But it does not follow, in my view, that the sender is deprived of all reasonable expectation of privacy. I will discuss this further below. To find that is the case would permit the authorities to seize emails, without prior judicial authorization, from recipients to investigate crime or simply satisfy their curiosity. In my view, the analogy between seizing emails and surreptitious recordings is valid to this extent. [para 63]

Instead, the Court of Appeal found that Mr. Craig DID have an objectively reasonable expectation of privacy in the messages seized by the police, on the basis of both:

  • An emerging Canadian norm of recognizing an expectation of privacy in information given to third parties;

  • The nature of the information itself, since it exposed intimate details of his lifestyle, personal choices, and identifying information;

(The appeal went on to find that not only did Mr. Craig have an expectation of privacy in the messages, but that his s. 8 Charter rights against unreasonable search and seizure had been violated.  HOWEVER, the violation was not egregious or intentional, it had no or negligible impact on Mr. Craig’s interests, and accordingly admission of the messages into evidence would not bring the administration of justice into disrepute.  In fact, the Court noted, the case dealt with serious charges involving offences against a young teenager, and this too weighed in favour of admitting the evidence.  The appeal was dismissed, with the Court of Appeal finding that there had been no substantial wrong or miscarriage of justice at trial.)

So there you have it:

Yes, you may well have a reasonable expectation of privacy in messages you’ve sent to others, either via text or online platforms. 

Remember, though, that doesn’t mean they stay private – it only means that they (and by extension you and your informational dignity and autonomy) must be treated in accordance with Charter protections.

Is IP address personal information (in Europe)?

Are IP addresses personal information?  On 28 October, the German Federal Court of Justice referred the question to the European Court of Justice (they who gave us the contentious Google Spain decision).

The case stems from the fact that when users visit German government sites, the site collects their IP addresses along with other information.  This information is logged and stored “in order to track down and prosecute unlawful hacking”. 

For once Canada can consider itself well ahead of the curve.  The Office of the Privacy Commissioner of Canada is clear that “An Internet Protocol (IP) address can be considered personal information if it can be associated with an identifiable individual.”  A 2013 report from that Office, “What an IP Address Can Reveal About You”, goes further into the subject, ultimately concluding that

…knowledge of subscriber information, such as phone numbers and IP addresses, can provide a starting point to compile a picture of an individual's online activities, including:

  • Online services for which an individual has registered;
  • Personal interests, based on websites visited; and
  • Organizational affiliations.

It can also provide a sense of where the individual has been physically (e.g., mapping IP addresses to hotel locations, as in the Petraeus case). 

This information can be sensitive in nature in that it can be used to determine a person’s leanings, with whom they associate, and where they travel, among other things.  What’s more, each of these pieces of information can be used to uncover further information about an individual. 
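As a small illustration of how little it takes to begin that kind of linkage, the sketch below (my own, not drawn from the OPC report) uses only Python's standard library to attempt a reverse DNS lookup on an IP address. A returned hostname can already hint at the subscriber's ISP and rough location; investigators would layer subscriber records, geolocation databases and server logs on top of that starting point.

```python
import socket

def describe_ip(ip: str) -> str:
    """Attempt a reverse DNS lookup: a first, crude step toward tying an IP
    address to an ISP, an organization, or a rough geographic area."""
    try:
        hostname, _aliases, _addresses = socket.gethostbyaddr(ip)
        return f"{ip} resolves to {hostname}"
    except OSError:  # covers socket.herror/gaierror: no PTR record or lookup failure
        return f"{ip} has no reverse DNS entry"

if __name__ == "__main__":
    # 203.0.113.7 comes from a reserved documentation range; it is used here
    # purely as a placeholder and will not resolve to a real host.
    print(describe_ip("203.0.113.7"))
```

WHOIS lookups and commercial geolocation databases take the same starting point considerably further, which is exactly the kind of picture-building the report describes.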

The Federation of German Consumer Organizations has raised concerns that classifying IP addresses as personal information could create delays and onerous administrative and consent requirements for internet use in Europe, or alternatively could necessitate a reconsideration of some of the provisions of the EU Data Protection Directive.  It would be interesting to hear from a similar Canadian body about their experiences… or perhaps the CJEU should consider some kind of case study in order to incorporate practical experience into its considerations.

It’s obscurity, not apocalypse: All that the “right to be forgotten” decision has created is the right to ask to have information removed from search engine results.

The National Post recently carried a week of op-eds, all focused on responding to the “right to be forgotten” that was (allegedly) created by the Google Spain decision.   Some extremely well-known and widely-respected experts have weighed in on the subject: 

Ann Cavoukian and Chris Wolfe: 

…while personal control is essential to privacy, empowering individuals to demand the removal of links to unflattering, but accurate, information arguably goes far beyond protecting privacy… The recent extreme application of privacy rights in such a vague, shotgun manner threatens free expression on the Internet. We cannot allow the right to privacy to be converted into the right to censor.

Brian Lee Crowley:

This ruling threatens to change the Internet from a neutral platform, on which all the knowledge of humanity might eventually be made available, to a highly censored network, in which, every seven seconds, another person may unilaterally decide that they have a right to be forgotten and to have the record of their past suppressed…We have a duty to remember, not to forget; a duty not to let the past go, simply because it is inconvenient or embarrassing.

Paula Todd:

Should there be exceptions to the principle of “let the public record stand”? Most countries already have laws that do just that — prohibiting and ordering the deletion of online criminal defamation, cyberabuse and images of child sexual abuse, for example. Google, and other search engines, do invent algorithms that position certain results more prominently. Surely a discussion about tweaking those algorithms would have been less draconian than this cyber censorship.

With all due respect to these experts, I cannot help but feel that each of them has missed the central point – pushed by the rhetoric about a “right to be forgotten” into responding to a hypothetical idea rather than the concrete reality of the decision.

It is not about mere reputation grooming.

It is not about suppressing or rewriting history.

It is not about silencing critics.

It is not about scrubbing clean the public record.

It is not about protecting people from the consequences of their actions.

Frankly, this ruling isn’t the creation of a new form of censorship or suppression – rather, it’s a return to what used to be.  The decision sets the stage for aligning new communications media with more traditional lifespans of information and a restoration of the eventual drawing of a curtain of obscurity over information as its timeliness fades. 

Facing the facts:

It is important to be clear that all the “right to be forgotten” decision has created is the right to ask to have information removed from search engine results.   

There is no guarantee that the information will be removed – in its decision the court recognized that while indeed there are situations where removal would be appropriate, each request requires a careful balancing of individual rights of informational self-determination against the public interest.

It is also worth pointing out that this is hardly the only recourse users have to protect and shape online privacy, identity, and reputation. In an Atlantic article about the introduction of Facebook’s Graph Search, the authors comment that:

Online, obscurity is created through a combination of factors. Being invisible to search engines increases obscurity. So does using privacy settings and pseudonyms. Disclosing information in coded ways that only a limited audience will grasp enhances obscurity, too. Since few online disclosures are truly confidential or highly publicized, the lion's share of communication on the social web falls along the expansive continuum of obscurity: a range that runs from completely hidden to totally obvious.

Other ubiquitous privacy-protective techniques don’t tend to engender the same concerns as the “right to be forgotten”.  Nobody is ringing alarm bells equating privacy settings with censorship – indeed, we encourage the use of privacy settings as responsible online behavior.  And while there are certainly concerns about the use of pseudonyms, those concerns are focused on accountability, not freedom of speech or access to information.  In fact, the use of pseudonyms is widely considered to facilitate freedom of speech, not prevent it.

Bottom line:

I’m all for freedom of expression.  Not a fan of censorship either.  So I would like to make a heartfelt plea to the community of thinkers who focus on this area of emerging law, policy, and culture: exaggerations and overreactions don’t help clarify these issues and are potentially damaging in the long run.  A clear understanding of what this decision is – and what it is not – will be the best first step towards effective, balanced implementation.