Ontario Privacy in Public Spaces Decision: The Need to Recognize Privacy as a Dignity Baseline, Not an Injury-Based Claim

An Ottawa woman has successfully argued for a privacy right in public spaces. After video of her jogging along the parkway was included in a commercial, she sued for breach of privacy and appropriation of personality.

"The filming of Mme. Vanderveen's likeness was a deliberate and significant invasion of her privacy given its use in a commercial video," the judge added. 

While pleased with the outcome, I’m a little uncomfortable with the presentation (and not sure whether that’s about the claimant or the media).  It appears that the privacy arguments here were grounded in “dignity”, and particularly in self-image.  That is, at the time the video was taken, the claimant was (or felt herself to be) overweight and had only recently taken up jogging after the birth of her children.  She testified that she thought the video made her look overweight and it caused her anxiety and discomfort. As her lawyer stated, “[s]he’s an incredibly fit person. And here’s this video — she looks fine in it — except that when she sees it, she doesn’t see herself. That’s the dignity aspect of privacy that’s protected in the law.”


In response, the company appears also to have focussed on self-esteem and injury.  “They made the argument that if they don’t use someone’s image in a way that is embarrassing or if they don’t portray someone in an unflattering light — here it is just her jogging and it’s not inherently objectionable — that they should be allowed to use the footage.  In contrast, the claimant argued that how someone sees themself is more important than how a third person sees them.”

Why does this bother me?  For the same reason that the damage threshold bothers me: because invasion of privacy is an injury in and of itself. 

By focussing on her self-image and dignity, we’re left to wonder whether, if another individual had been filmed without their consent, had tried to cover their face when they saw the camera (as did the claimant here) and yet was included in the video, would a court come to the same result?  Or is there some flavour of “intentional infliction of emotional suffering” creeping into this decision?  When the judge states that “I find that a reasonable person, this legally fictitious person who plays an important role in legal determinations, would regard the privacy invasion as highly offensive and the plaintiff testified as to the distress, humiliation or anguish that it caused her”, what “injuries” are implicitly being normalized?  The source of the injury seems to be that of being (or believing oneself to look) overweight – is (and should) size be conflated with humiliation?  The judge concludes that while “Mme Vanderveen is concerned about the persona that she presents and about her personal privacy I find that she is not unusually concerned or unduly sensitive about this”, I find myself wondering about the social context.  Would a man claiming the same distress/humiliation/anguish in this situation have been taken as seriously?

The judge found that "[t]he photographer was not just filming a moving river, he or she was waiting for a runner to jog along the adjacent jogging trail to advertise the possibility of the particular activity in Westboro."  Because of the desire to capture someone running, part of the damages included an estimate of what it would have cost to hire an actor to run along the river.  This is where the privacy breach takes place – the deliberate capture of an individual’s image, and its use without their knowledge or consent for commercial purposes.

The issue isn’t how she felt about herself, nor whether she liked the way she looks in the video – it is the act of making and using the video of her in the first place.  When we focus on the injury to her dignity, we risk misdirecting the focus, making it about the individual rather than about the act of privacy invasion. 

Individuals shouldn’t have to display their wounds in order to be considered worthy of the protection of law.  Rather, law should be penalizing those who do not take care to protect and respect privacy.  That’s how we respect dignity – by recognizing privacy as an inherent right possessed by persons, with a concurrent right not to have it invaded. 

Removing Unlawful Content Isn’t a Right to be Forgotten – It’s Justice

A federal court decision released 30 January 2017 has (re)ignited discussion of the “right to be forgotten” (RTBF) in Canada. 

The case revolved around the behaviour of Globe24h.com (the URL does not appear to be currently available, but it is noteworthy that their Facebook page is still online), a website that republishes Canadian court and tribunal decisions. 

The publication of these decisions is not, itself, inherently problematic.  Indeed, the Office of the Privacy Commissioner (OPC) has previously found that an organization (unnamed in the finding, but presumably CanLII or a similar site) had collected, used and disclosed court decisions for appropriate purposes pursuant to subsection 5(3) of PIPEDA.  The Commissioner determined that the company's purpose in republishing was to support the open courts principle, by making court and tribunal decisions more readily available to Canadian legal professionals and academics.  Further, the company's subscription-based research tools and services did not undermine the balance between privacy and the open courts principle that had been struck by Canadian courts, nor was the operation of those tools inconsistent with the OPC’s guidance on the issue.  It is important to note that this finding relied heavily on the organization’s decision NOT to allow search engines to index decisions within its database or otherwise make them available to non-subscribers. 

In its finding, the OPC references another website – Globe24h.com – about which they had received multiple well-founded complaints.  Regarding Globe24h.com, which did allow search engines to index decisions as well as hosting commercial advertising and charging a fee for removal of personal information, the Commissioner found that:

  1. he did have jurisdiction over the (Romanian-based) site, given its real and substantial connection to Canada;
  2. the site was not collecting, using and disclosing the information for exclusively journalistic purposes and thus was not exempt from PIPEDA’s requirements;
  3. Globe24h’s purpose of making available Canadian court and tribunal decisions through search engines – which allows the sensitive personal information of individuals to be found by happenstance, by anyone, anytime, for any purpose – was NOT one that a reasonable person would consider to be appropriate in the circumstances; and
  4. although the information was publicly available, the site’s use was not consistent with the open courts principle for which it was originally made available, and thus PIPEDA’s requirement for knowledge and consent did apply to Globe24h.com.

Accordingly, he found the complaints well-founded.

From there, the complaint proceeded to Federal Court, with the Privacy Commissioner appearing as a party to the application.

The Federal Court concurred with the Privacy Commissioner that PIPEDA did apply to Globe24h.com; that the site was engaged in commercial activity; and that its purposes were not exclusively journalistic.  On reviewing its collection, use and disclosure of the information, the Court determined that the exclusion for publicly available information did not apply, and that Globe24h had contravened PIPEDA. 

Where it gets interesting is in the remedies granted by the Court.  Strongly influenced by the Privacy Commissioner’s submission, the Court:

  1. issued an order requiring Globe24h.com to correct its practices to comply with sections 5 to 10 of PIPEDA;
  2. relied upon s. 16 of PIPEDA, which authorizes the Court to grant remedies to address systemic non-compliance, to issue a declaration that Globe24h.com had contravened PIPEDA; and
  3. awarded damages in the amount of $5,000 and costs in the amount of $300.

The reason this is interesting is the explicit recognition by the Court that:

A declaration that the respondent has contravened PIPEDA, combined with a corrective order, would allow the applicant and other complainants to submit a request to Google or other search engines to remove links to decisions on Globe24h.com from their search results. Google is the principal search engine involved and its policy allows users to submit this request where a court has declared the content of the website to be unlawful. Notably, Google’s policy on legal notices states that completing and submitting the Google form online does not guarantee that any action will be taken on the request. Nonetheless, it remains an avenue open to the applicant and others similarly affected. The OPCC contends that this may be the most practical and effective way of mitigating the harm caused to individuals since the respondent is located in Romania with no known assets. [para 88]

It is this line of argument that has fed response to the decision.  The argument is that, by explicitly linking its declaration and corrective order with the ability of claimants to request that search engines remove the content at issue from their results, the decision has created a de facto RTBF in Canada. 

With all due respect, I disagree.  A policy on removing content that a court has declared to be unlawful is not equivalent to a “right to be forgotten.”  RTBF, as originally set out, recognized that under certain conditions (i.e., where specific information is inaccurate, inadequate, irrelevant or excessive), individuals have the right to ask search engines to remove links to personal information about them.  In contrast, the issue here is not that the information is “inaccurate, inadequate, irrelevant or excessive” – rather, it is that the information has been declared UNLAWFUL. 

The RTBF provision of the General Data Protection Regulation – Article 17 – sets out circumstances in which a request for erasure would not be honoured because there are principles at issue that transcend RTBF and justify keeping the data online – legal requirements, freedom of expression, interests of public health, and the necessity of processing the data for historical, statistical and scientific purposes.

We are not talking here about an overarching right to control dissemination of these publicly available court records.  The importance of the open court principle was explicitly addressed by both the OPC and the Federal Court, and weighted in making their determinations.  In so doing, the appropriate principled flexibility has been exercised – the very principled flexibility that is implicit in Article 17. 

I do not dispute that a policy conversation about RTBF needs to take place, nor that explicitly setting out parameters and principles would be of assistance going forward.  Perhaps the pending Supreme Court of Canada decision in Google v Equustek Solutions will provide that guidance. 

Regardless, the decision in Globe24h.com does not create a RTBF – rather, the Court exercises its power under PIPEDA to craft appropriate remedies to facilitate justice.

 

Police Bodycams: Crossing the Line from Accountability to Shaming

 

Police bodycams are an emerging high-profile tool in law enforcement upon which many hopes for improved oversight, accountability, even justice are pinned.

When it comes to police bodycams, there are many perspectives:

  • Some celebrate them as an accountability measure, almost an institutionalized sousveillance.
  • For others, they’re an important new contribution to the public record.
  • And where they are not included in the public record, they can at least serve as internal documents, subject to Access to Information legislation.

These are all variations on a theme – the idea that use of police bodycams and their resulting footage are about public trust and police accountability.

But what happens when they’re used in other ways?

In Spokane, Washington, a decision was recently made to use bodycam footage for the purpose of shaming/punishment.  In this obviously edited footage, Sgt. Eric Kannberg deals calmly with a belligerent drunk, using de-escalation techniques even after the confrontation gets physical.  Ultimately, rather than meting out the typical visit to the drunk tank, the officer opts to proceed via a misdemeanor charge and the ignominy of having the footage posted to Spokane P.D.'s Facebook page. The implications of this approach in terms of privacy, dignity, and basic humanity are far-reaching.

The Office of the Privacy Commissioner of Canada has issued Guidance for the Use of Body-Worn Cameras by Law Enforcement – guidance that strives to balance privacy and accountability. The Guidelines include:

Use and disclosure of recordings

  • The circumstances under which recordings can be viewed. Viewing should only occur on a need-to-know basis; if there is no suspicion of illegal activity having occurred and no allegations of misconduct, recordings should not be viewed.
  • The purposes for which recordings can be used and any limiting circumstances or criteria, for example, excluding sensitive content from recordings being used for training purposes. 
  • Defined limits on the use of video and audio analytics.
  • The circumstances under which recordings can be disclosed to the public, if any, and parameters for any such disclosure. For example, faces and identifying marks of third parties should be blurred and voices distorted wherever possible.
  • The circumstances under which recordings can be disclosed outside the organization, for example, to other government agencies in an active investigation, or to legal representatives as part of the court discovery process.

Clearly, releasing footage in order to shame an individual would not fall within these parameters. 

After the posted video garnered hundreds of thousands of views, its subject is now threatening to sue.  He is supported by the ACLU, which expressed concerns about both the editing and the release of the footage. 

New technologies offer increasingly powerful new tools for policing.  They may also intersect with old strategies of social control such as gossip and community shaming.  The challenge – or at least an important challenge – relates to whether those intersections should be encouraged or disrupted.

As always, a fresh examination of the privacy implications precipitated by the implementation of new technology is an important step as we navigate towards new technosocial norms.

The Right(s) to One’s Own Body

In July, police approached a computer engineering professor in Michigan to assist them with unlocking a murder victim’s phone by 3D-printing the victim’s fingerprints. 

It is a well-established principle of law that ‘there is no property in a corpse.’ This means that the law does not regard a corpse as property protected by rights.  So hey, why not, right? 

There is even an easy argument to be made that this is in the public interest.  Certainly, that seems to be how Professor Anil Jain (to whom the police made the request) feels: “If we can assist law enforcement that’s certainly a good service we can do.”   

Marc Rotenberg, President of the Electronic Privacy Information Center (EPIC), notes that if the phone belonged to a crime suspect, rather than a victim, police would be subject to a Supreme Court ruling requiring them to get a search warrant prior to unlocking the phone – with a 3D-printed finger or otherwise.

I’ve got issues with this outside the victim/suspect paradigm though. 

For instance, I find myself wondering about the application of this to live body parts. 

I’ve always been amused by the R v Bentham case, from the UK House of Lords in 2005. Bentham broke into a house to commit robbery and, in the course of this, used his fingers in his pocket to make a gun shape.  He was arrested.  Though he was originally convicted of possessing a firearm or imitation thereof, that conviction was overturned on the basis that it wasn’t possible for him to “possess” part of his own body.  But… if you can’t “possess” your own body, why wait for death before the State makes a 3D copy of it for its own purposes?

And…we do have legislation about body parts, both live and dead – consider the regulation of organ donation and especially payment for organs.  Consider too the regulation of surrogacy, and of new reproductive technologies. 

Maybe this is a new area to ponder – it doesn’t fit neatly into existing jurisprudence and policy around the physical body.  The increasing use of biometric identifiers to protect personal information inevitably raises new issues that must be examined. 

UPDATE:  It turns out that the 3D printed fingerprint replica wasn’t accurate enough to unlock the phone.  Undeterred, law enforcement finally used a 2D replica on conductive paper, with the details enhanced/filled in manually.  This doesn’t really change the underlying concern, does it? 

How About We Stop Worrying About the Avenue and Instead Focus on Ensuring Relevant Records are Linked?

Openness of information, especially when it comes to court records, is an increasingly difficult policy issue.  We have always struggled to balance the protection of personal information against the need for public information and for justice (and the courts that dispense it) to be transparent.  Increasingly dispersed and networked information makes this all the more difficult. 

In 1991’s Vickery v. Nova Scotia Supreme Court (Prothonotary), Justice Cory (writing in dissent, but in agreement with the Court on these statements) positioned the issue as being inherently about the tension between the privacy rights of an acquitted individual and the importance of court information and records being open.

…two principles of fundamental importance to our democratic society which must be weighed in the balance in this case.  The first is the right to privacy which inheres in the basic dignity of the individual.  This right is of intrinsic importance to the fulfilment of each person, both individually and as a member of society.  Without privacy it is difficult for an individual to possess and retain a sense of self-worth or to maintain an independence of spirit and thought.
The second principle is that courts must, in every phase and facet of their processes, be open to all to ensure that so far as is humanly possible, justice is done and seen by all to be done.  If court proceedings, and particularly the criminal process, are to be accepted, they must be completely open so as to enable members of the public to assess both the procedure followed and the final result obtained.  Without public acceptance, the criminal law is itself at risk.

Historically, the necessary balance has been arrived at less by policy negotiation than by physical and geographical limitations.  When one must physically attend the court house to search for and collect information from various sources, the time, expense and effort necessary function as their own form of protection.  As Elizabeth Judge has noted, however, “with the internet, the time and resource obstacles for accessing information were dramatically lowered. Information in electronic court records made available over the Internet could be easily searched and there could be 24-hour access online, but with those gains in efficiency comes a loss of privacy.”

At least arguably, part of what we have been watching play out with the Right to be Forgotten is a new variation of these tensions.  Access to these forms of information is increasingly easy and generally available – all it requires is a search engine and a name.  In return, news stories, blog posts, social media discussions and references to legal cases spill across the screen.  With RTBF and similar suggestions, we seek to limit this information cascade to that which is relevant and recent. 

This week saw a different strategy employed.  As part of the sentences for David and Collet Stephan – whose infant son died of meningitis due to their failure to access medical care for him when he fell ill – the Alberta court required that notice of the sentence be posted on Prayers for Ezekiel and any other social media sites maintained by and dealing with the subject of their family.  (NOTE:  As of 6 July 2016, this order has not been complied with).

Contrary to some, I do not believe that the requirement to post is akin to a sandwich board, nor that this is about shaming.  Rather, it seems to me that, in an increasingly complex information spectrum, the order insists that the sentence be clearly and verifiably linked to information about the issue.  I agree that

… it is a clear sign that the courts are starting to respond to the increasing power of social media, and to the ways that criminals can attract supporters and publicity that undermines faith in the legal system. It also points to the difficulties in upholding respect for the courts in an era when audiences are so fragmented that the facts of a case can be ignored because they were reported in a newspaper rather than on a Facebook post.

There has been (and continues to be) a chorus of complaints about RTBF and its supposed potential to frustrate (even censor) the right to KNOW.  Strangely, that same chorus does not seem to be raising their voices in celebration of this decision.  And yet…. doesn’t requiring that conviction and sentence be attached to “news” of the original issue address many of the concerns raised by anti-RTBF forces? 

 

ScoreAssured’s unsettling assurances

Hearing a lot of talk about Tenant Assured – an offering from new UK company ScoreAssured.  Pitched as an assessment tool for “basic information” as well as “tenant worthiness,” Tenant Assured scrapes content from social media sites (those named so far are Facebook, Twitter, LinkedIn, and Instagram) – including conversations and private messages – and then runs the data through natural language processing and other analytic software to produce a report. 

The report rates the selected individual on five “traits” – extraversion, neuroticism, openness, agreeableness, and conscientiousness.   The landlord never directly views posts of the potential tenant, but the report will include detailed information such as activity times, particular phrases, pet ownership etc.

Is this really anything new?  We know that employers, college admissions, and even prospective landlords have long been using social media reviews as part of their background check process. 

Tenant Assured would say that at least with their service the individual is asked for and provides consent.   And that is, at least nominally, true.  But let’s face it – consent that is requested as part of a tenancy application is comparable to the consent for a background check on an employment application – “voluntary” only if you’re willing not to go any further in the process.  Saying “no” is perceived as a warning flag that will likely result in one not being hired or not getting housing. Declining jobs and/or accommodations is not a luxury everyone can afford. 

Asked about the possibility of a backlash from users, co-founder Steve Thornhill confidently asserted that “people will give up their privacy to get something they want.”  That may be the case…but personally I’m concerned that people may be forced to give up their privacy to get something they urgently need (or quite reasonably want).

But let’s presume for a second that the consent is “freely” given. Problems with this model remain: 

  • Reports may include information such as pregnancy, political opinions, age, etc. – information that is protected by human rights codes.  (Thornhill says, “all we can do is give them the information, it’s up to landlords to do the right thing”)
  • Performed identity – our self-presentation on social media sites is constructed for particular (imagined) audiences.  To remove it from that context does not render it presumptively true or reliable – quite the opposite.
  • Invisibility of standards – how are these traits being assessed?  What values are being associated with particular behaviours, phrases and activities, and are they justified?  An individual who is currently working in a bar or nightclub might show activity and language causing them to receive negative ratings as an excessive partier or unstable, for instance.  In fact, the Telegraph demonstrated this by running reports on their financial journalists (people who, for obvious reasons, tend to use words like “fraud” and “loan” rather frequently) and, sure enough, the algorithm rated them negatively on “financial stability”.
  • Unlike credit bureaus, which are covered under consumer protection laws, there is no regulation of this sector.  What that means, among other things, is that there is not necessarily any way for an individual to know what is included in their report, let alone challenge the accuracy or completeness of such a report. 

The Washington Post quite correctly identifies this as an exponential change in social media monitoring, writing that “…Score Assured, with its reliance on algorithmic models and its demand that users share complete account access, is something decidedly different from the sort of social media audits we’re used to seeing.  Those are like cursory quality-control check; this is more analogous to data strip-mining.”

Would it fly here?

Again, we know that background checks and credit checks for prospective tenants aren’t new.  We also know that, in Canada at least, our Information and Privacy Commissioners have had occasion to weigh in on these issues.

In 2004, tenant screening in Ontario suffered a setback when the Privacy Commissioner of Ontario instructed the (then) Ontario Rental Housing Tribunal to stop releasing so much personal information in its final orders. As a result, names are now routinely removed from the orders, making it significantly more difficult to scrape the records generally.  As for individual queries, unless you already know the names of the parties, the particular rental address and the file number, you will probably not be able to find anything about a person’s history in such matters.

Now, with the release of PIPEDA Report of Findings #2016-002, Feb 19, 2016 (posted 20 May 2016), that line of business is even more firmly shuttered.  There, the OPC investigated the existence of a “bad tenant” list maintained by a landlord association. The investigation raised numerous concerns about the list:

  • Lack of consent by individuals for their information to be collected and used for such a purpose
  • Lack of accountability – there was no way for individuals to ascertain if any information about them was on the bad tenant list, who had placed it there, and what the information was. 
  • Simultaneously, the landlord association was also not assessing the accuracy or credibility of any of the personal information that it collected, placed on the list and regularly disclosed to other landlords, who then made decisions based upon it.
  • Further, there was no way to ensure accuracy of the information on the list, and no way for individuals to challenge the accuracy or completeness of the information.

It was the finding of the Privacy Commissioner of Canada that by maintaining and sharing this information, the association was acting as a credit reporting agency, albeit without the requisite licence from the province.  Accordingly, the Commissioner found that the purpose for which the tenant personal information was collected, used or disclosed was not appropriate under s. 5(3) of PIPEDA.  The association, despite disagreeing with its characterization as a credit bureau, implemented the recommendations to destroy the “bad tenant” list, cease collecting information for such a list, and no longer share personal information about prospective tenants without explicit consent.

This is good news, but the temptation to monetize violations of privacy continues. ScoreAssured has expansive plans.  They anticipate launching (by the end of July 2016) similar “report” products targeted at Human Resources officers and employers, as well as parents seeking nannies. 

“If you’re living a normal life,” Thornhill asserts, “then, frankly, you have nothing to worry about.”  We all need to ask – who defines “normal”?  And since when is a corporation’s definition of “normal” the standard for basic human dignity needs like employment or housing? 

 

 

Revenge Porn: In Ontario, You’ll Pay With More Than Karma

Doe 464533 v N.D. is a January 2016 decision from the Ontario Superior Court of Justice that makes a strong statement that those who engage in revenge porn will pay with more than just karma points!

The case involved an 18-year-old girl, away at university but still texting, phoning, emailing and otherwise connecting with her ex-boyfriend. Though the formal relationship had ended in spring, they continued to see each other “romantically” through the summer and into that autumn.  These exchanges included him sending multiple intimate photos and videos of himself, and requesting the same of her. 

After months of pressure, she made an intimate video but remained uncomfortable sharing it.  She texted him making her misgivings clear; he convinced her to relent, reassuring her that no one else would ever see the video. Eventually, despite those misgivings, she sent the video to him.

Shortly thereafter, she learned that her ex had, on the same day he received it, posted the video to an online website.  He was also sharing it with some of their high school classmates.  She was devastated and humiliated by the discovery, leading to emotional and physical distress that required ongoing counselling, as well as suffering academically and socially. 

The video was online for approximately three weeks before his mother (hearing of the incident from the victim) forced him to remove it.  As the Judge points out, “[t]here is no way to know how many times it was viewed or downloaded during that time, or if and how many times it may have been copied onto other media storage devices…or recirculated.”

The damage is not, of course, limited to that three-week period – it is persistent and ongoing.  She continues to struggle with depression and anxiety.  She lives with the knowledge that former classmates and community members are aware of the video (and in some cases have viewed it), something that has caused harm to her reputation. In addition, she is concerned about the possibility that the video may someday resurface and have an adverse impact on her employment, her career, or her future relationships.

 

The police declined to become involved due to the age(s) of the parties, but she did bring a civil action against him.

She was successful on her claim of breach of confidence.

She was successful on her claim of intentional infliction of mental distress.

But where it gets really interesting is in Justice Stinson’s assessment of the invasion of privacy claim.

Building upon the recognition of a tort of intrusion upon seclusion in Ontario, he returns to that analysis to locate the injury here as one not of intrusion but of public disclosure of embarrassing facts.  

Normally, the three factors necessary to show such a tort would be:

  1. the disclosure must be a public one;
  2. the facts disclosed must be private; and
  3. the matter made public must be one which would be offensive and objectionable to a reasonable man of ordinary sensibilities.

It is incontrovertible that the video was publicly disclosed. The subject matter of the video – apparently her masturbating – is certainly private.  The first two elements are made out. 

Here is where the judge wins my heart – he refuses to layer sexual shame on an already victimized plaintiff.  Instead of focussing on the subject of the video (her masturbating), he modifies the final element: the requirement becomes that either the matter publicized or the act of publication itself would be highly offensive to a reasonable person.

In this case, it is the behaviour of the ex that is offensive:

…the defendant posted on the Internet a privately-shared and highly personal intimate video recording of the plaintiff. I find that in doing so he made public an aspect of the plaintiff’s private life. I further find that a reasonable person would find such activity, involving unauthorized public disclosure of such a video, to be highly offensive. It is readily apparent that there was no legitimate public concern in him doing so.

Justice Stinson issues an injunction directing the ex to immediately destroy any and all intimate images or recordings of the plaintiff in whatever form they may exist that he has in his possession, power or control.  A further order permanently prohibits him from publishing, posting, sharing or otherwise disclosing in any fashion any intimate images or recordings of her.  Finally, he is permanently prohibited from communicating with her or members of her immediate family, directly or indirectly.

As for damages, the judge mentions that her claim is limited by procedure to $100,000.    He then considers the following:

  • The circumstances of the victim at the time of the events, including factors such as age and vulnerability. The plaintiff was 18 years old at the time of the incident, a young adult who was a university student. Judging by the impact of the defendant’s actions, she was a vulnerable individual;
  • The circumstances of the assaults including their number, frequency and how violent, invasive and degrading they were. The wrongful act consisted of uploading to a pornographic website a video recording that displayed intimate images of the plaintiff. The defendant’s actions were thus very invasive and degrading. The recording was available for viewing on the Internet for some three weeks. It is impossible to know how many times it was viewed, copied or downloaded, or how many copies still exist elsewhere, out of the defendant’s (and the plaintiff’s – and the Court’s) control. As well, the defendant showed the video to his friends, who were also acquaintances of the plaintiff. Although there was no physical violence, in these circumstances, especially in light of the multiple times the video was viewed by others and, more importantly, the potential for the video still to be in circulation, it is appropriate to regard this as tantamount to multiple assaults on the plaintiff’s dignity;
  • The circumstances of the defendant, including age and whether he or she was in a position of trust. The defendant was also 18 years of age. He and the plaintiff had been in an intimate – and thus trusting – relationship over a lengthy period. It was on this basis, and on the basis of his assurances that he alone would view it, that he persuaded her to provide the video. His conduct was tantamount to a breach of trust; and
  • The consequences for the victim of the wrongful behaviour including ongoing psychological injuries. As described above, the consequences were emotionally and psychologically devastating for the plaintiff and are ongoing.

He awards:

General damages:  $50,000

Aggravated damages (where injury was aggravated by the manner in which it was done):  $25,000

Punitive damages:  $25,000         

With pre-judgement interest and her costs for the action, the full award is $141,708.03.

Is it enough to make up for the violation?  No, but I can’t imagine any amount would be.  I hope it’s enough to make the next malicious ex think twice before engaging in this type of behaviour.

On top of that, she gets validation.

She gets recognition that NOTHING she did was inappropriate or offensive.

The judge commends her for earning her undergraduate degree despite these events, as well as for her courage and resolve in pursuing the remedies to which she is entitled. Further, he lets her know that through that courage, she has set a precedent that will allow others who are similarly victimized to seek recourse.


Where and When is it Reasonable to Expect Your Messages to be Private (and what protection does it offer anyway)?

When you text message someone, do you have a reasonable expectation of privacy in that message?

R. v. Pelucco was a 2015 BC Court of Appeal decision involving a warrantless search of text messages found in a cell phone.  The question was whether the sender had a reasonable expectation of privacy in those messages.  The majority concluded that when legal and social norms were applied, a sender would ordinarily have a reasonable expectation that the messages would remain private.  Justice Groberman, writing for the majority, concluded that the lack of control once the message had been sent was a relevant factor in assessing objective reasonableness, but not determinative.

I’ve written about this decision previously here.


What about when you message someone privately using an online platform? 

In R v Craig, released 11 April 2016, police obtained private online messages between Mr. Craig, E.V. and several of E.V.’s friends from Nexopia, a Canada-based social network site targeted at teens.

Mr. Craig (22) and E.V. (13) originally met by privately messaging each other on Nexopia.  Messaging continued, as did offline meetings that ultimately resulted in him (illegally) providing her with alcohol and having sexual relations with her (to which she could not legally consent, being 13).  When two girls from E.V.’s school overheard a conversation with E.V. regarding her sexual encounter with Mr. Craig, they reported it to a school counsellor. The counsellor subsequently called the police, and the police investigation commenced. He was charged and convicted of sexual touching of a person under the age of 16, sexual assault, and internet luring (communicating with a person under the age of 16 years for the purpose of facilitating the commission of an offence under s. 151 with that person).

When the police interviewed E.V., she provided Mr. Craig’s name and logged on to her Nexopia account to print out messages between them, including a photo of Mr. Craig.    A friend of E.V. also provided pages from her own account containing messages with Mr. Craig in which he admitted to having sex with E.V. 

Police obtained a search warrant for messages on the Nexopia servers under the usernames of E.V., several of her friends, and Mr. Craig.  A number of the documents seized from Nexopia were not disclosed to the defence pursuant to a Criminal Code provision presumptively forbidding production of complainant or witness records when the charge is sexual assault or sexual interference.  A “record” is one that contains “personal information for which there is a reasonable expectation of privacy.”

Craig argued that there was no reasonable expectation of privacy in those messages -- that the messages were sent, received and stored on Nexopia’s servers, and thus had never been private.  Accordingly, the defence should be able to access them. 

The threshold for reasonable expectation was articulated as the expectations of the sender at the time the message was sent.  In this case, the messages were “personal communications between friends and confidantes, and were not intended for wider circulation beyond the small circle of friends.”  Accordingly, there was a reasonable expectation of privacy in the messages and they were protected from having to be disclosed to Mr. Craig.

Mr. Craig then sought to assert his own reasonable expectation of privacy over (some of) the Nexopia messages.  The trial judge disagreed, finding that Mr. Craig had no reasonable expectation of privacy in the messages, even those he had authored and sent himself, because he had no control over them after sending.

On appeal, the “control” test was rejected:

While recognizing that electronic surveillance is a particularly serious invasion of privacy, the reasoning is of assistance in this case. Millions, if not billions, of emails and “messages” are sent and received each day all over the world. Email has become the primary method of communication. When an email is sent, one knows it can be forwarded with ease, printed and circulated, or given to the authorities by the recipient. But it does not follow, in my view, that the sender is deprived of all reasonable expectation of privacy. I will discuss this further below. To find that is the case would permit the authorities to seize emails, without prior judicial authorization, from recipients to investigate crime or simply satisfy their curiosity. In my view, the analogy between seizing emails and surreptitious recordings is valid to this extent. [para 63]

Instead, the Court of Appeal found that Mr. Craig DID have an objectively reasonable expectation of privacy in the messages seized by the police, on the basis of both:

  • An emerging Canadian norm of recognizing an expectation of privacy in information given to third parties; and

  • The nature of the information itself, since it exposed intimate details of his lifestyle, personal choices, and identifying information.

(The appeal continued on to find that not only did Mr. Craig have an expectation of privacy in the messages, but that his s. 8 Charter rights against unreasonable search and seizure had been violated.  HOWEVER, the violation was not egregious or intentional, it had no or negligible impact on Mr. Craig’s interests, and accordingly admission of the messages into evidence would not bring the administration of justice into disrepute.  In fact, they noted, the case dealt with serious charges involving offences against a young teenager, and this too weighed in favour of admitting the evidence.  The appeal was dismissed, with the Court of Appeal finding that there had been no substantial wrong or miscarriage of justice at trial.)

So there you have it:

Yes, you may well have a reasonable expectation of privacy in messages you’ve sent to others, either via text or online platforms. 

Remember though, that doesn’t mean they stay private – it only means that they (and by extension you and your informational dignity and autonomy) must be treated in accordance with Charter protections.

Speaking of the Right to be Forgotten, Could We Please Forget This Fearmongering?

In the wake of the original Right to be Forgotten (RTBF) decision, citizens had the opportunity to apply to Google for removal from its search index of information that was inadequate, irrelevant, excessive and/or not in the public interest.  Google says that since the decision it has received more than 250,000 requests, and that it has concurred with the request and acted upon it in 41.6% of cases.

In France, even where Google accepted/approved the request for delisting, it implemented that only on specific geographical extensions of the search engine – primarily .fr (France) although in some cases other European extensions were included.  This strategy resulted in a duality where information that had been excluded from some search engine results was still available via Google.com and other geographic extensions.  Becoming aware of this, the President of CNIL (France’s data protection organization) formally gave notice to Google that it must delist information on all of its search engine domains.  In July 2015 Google filed an appeal of the order, citing the critiques that have become all-too-familiar – claiming that to do so would amount to censorship, as well as damaging the public’s right to information.

This week, on 21 September 2015, the President of CNIL rejected Google’s appeal for a number of reasons:

  • In order to be meaningful and consistent with the right as recognized, a delisting must be implemented on all extensions.  It is too easy to circumvent a RTBF that applies only on some extensions, which is inconsistent with the RTBF and creates a troubling situation where informational self-determination is a variable right;

  • Rejecting the conflation of RTBF with information deletion, the President emphasized that delisting does NOT delete information from the internet.  Even while removed from search listings, the information remains directly accessible on the source website;

  • The presumption that the public interest is inherently damaged fails to acknowledge that the public interest is considered in the determination of whether to grant a particular request.  RTBF is not an absolute right – it requires a balancing of the interest of the individual against the public’s right to information; and

  • This is not a case where France is attempting to impose French law universally – rather, CNIL “simply requests full observance of European legislation by non European players offering their services in Europe.”

With the refusal of its (informal) appeal, Google is now required to comply with the original CNIL order.  Failure to do so will result in fines that begin in the $300,000 range but could rise as high as 2-5% of Google’s global operating costs.


My Home Is My Castle – Unless You’re Making Art

Thinking over the recent finding involving a photographer who, for a full year, took pictures of the family who lived in the building across from him through their window.  Done surreptitiously, with no consent to or knowledge of the photography taking place – in fact, the Fosters only found out about the series when Arne Svenson exhibited “The Neighbours” in a local gallery and they were recognized.

Seems simple – your home is your castle, this guy was taking pictures of them in their private home without their knowledge or consent – but the Appellate Court found that this did not constitute either stalking or an invasion of privacy.  Why?  Because it is “art”.

But “the invasion of privacy of one’s home that took place here is not actionable ... because the defendant’s use of the images in question constituted art work” and the images were not used for advertising or in trade.

Is an invasion of privacy determined by the uses to which the products of that invasion are put?  I’m willing to concede that the use to which the product of an invasion of privacy is put could/should certainly be factored into a determination of damages or remedy.  But surely the invasion and the uses to which its product is put should be addressed separately?

This is a US case, so it’s hard to know how a Canadian court would deal with the same situation.


I would hope that the issues would be dealt with separately – first a consideration of whether there has been an invasion of privacy in collecting the information, and second an examination of the use/disclosure of the information. 

In examining the collection of information – the year of taking candid photos of their life inside the apartment – I would hope that the focus of the inquiry would be on the expectation of privacy of the Foster family.  There can be no question that they believed themselves to be in the privacy of their own home – the surreptitious photography of their actions is unquestionably outside their expectations.  We might even look to the Supreme Court of Canada’s approach in R v Clark for clarity.  In that case, which dealt with a man masturbating at the window of his illuminated living room, the court explored whether acts committed in one’s own home could constitute an act “in a public place” by reason of visibility.  The Supreme Court of Canada concluded that a “public place” was to be defined as “any place to which the public have access as of right or by invitation, express or implied”.  “Access” means “the right or opportunity to reach or use or visit” and not the ability of those who are neither entitled nor invited to enter a place to see or hear from the outside, through uncovered windows or open doors, what is transpiring within.  Regardless of whether the photographer was able to see inside the Fosters’ apartment, it is clearly a private space into which he intruded.


First Do No Harm (to Research Interests)

The Council of Canadian Academies released its report Accessing Health and Health-Related Data in Canada on 31 March 2015.  A strong and comprehensive piece of work, this report – which was requested by the Canadian Institutes of Health Research (CIHR) – represents the efforts of a 14-member expert panel, chaired by Andrew K. Bjerring, former President and CEO of CANARIE Inc. In its report, the panel assesses the current state of knowledge surrounding timely access to health and health-related data – key for both health research and health system innovation.

Despite the work of multiple experts, and input from additional experts (including from the Office of the Privacy Commissioner of Canada) who acted as reviewers, the approach to privacy taken by the report is a disappointing one.

The starting point seems to be the presumption that data must be shared and that its sharing is of great value, while privacy concerns are primarily an issue of regulation of data management rather than active protection of individual privacy.  For instance, the value of associating multiple data related to the same individual is privileged over the privacy risks of such association and data mining, with the report suggesting linking the pieces of information prior to de-identification in order to preserve the data mining potential – a stance which privileges research outcomes over the protection of an individual from data mining.   It is difficult to reconcile this with a key finding that “the risk of potential harms resulting from access to data is tangible but low”.

The sharing of health data today in Canada is a fait accompli – the Canada Health Act mandates that medical professionals (doctors, physiotherapists, pharmacists etc.) turn over this sensitive information.  The question is whether de facto de-identification and information sharing is sufficient to protect privacy, and, in fact, whether that protection is even the end goal.  This report and its suggested approaches are aimed more at managing privacy concerns (e.g., via development of a privacy review board similar to a research ethics review) than about actual privacy for Canadians.

Reclaiming YourSelf

“I felt that my silence implied that I *should* be ashamed….”

I LOVE this project, both the explanatory video and the photo shoot to which it refers.  Danish journalist Emma Holten, who had been victimized by revenge porn, on the importance of consent. 

We have seen the results of public shaming of the sexuality of girls and women – we’ve seen it in the suicide of Amanda Todd, the death of Rehtaeh Parsons.  In the way(s) others use the threat of releasing/sharing such photos to attempt to extort and manipulate girls and women.  And Holten is correct that this is grounded in misogyny, in the hatred and objectification of women.

It is grounded too in the underlying attitude that female bodies and sexuality are wrong.  If these images, those naked bodies, were not presumptively “shameful”, their revelation could not be leveraged as a threat.  The judgments that perpetuate the sharing of such photos (“you shouldn’t have been such a whore”) reinforce and reiterate that shame.

Holten’s response, to refuse to be shamed about her body and sexuality is a powerful one.  The decision to participate in a photo shoot and release those photos publicly – to actively share images of her body, to refuse to feel shamed about her sexuality – is an important one.  By refusing to allow herself to be subverted or silenced, she instead takes the site/sight of her “shame” and transforms it, making of it not only a moment of resistance but a response and refutation.  A celebration and a reclamation.

Is IP address personal information (in Europe)?

Are IP addresses personal information?  On 28 October, the German Federal Court of Justice referred the question to the European Court of Justice (they who gave us the contentious Google Spain decision).

The case stems from the fact that when users visit German government sites, the site collects their IP addresses along with other information.  This information is logged and stored “in order to track down and prosecute unlawful hacking”. 

For once Canada can consider itself well ahead of the curve.   The Office of the Privacy Commissioner of Canada is clear that “An Internet Protocol (IP) address can be considered personal information if it can be associated with an identifiable individual.”  A 2013 report from that Office “What an IP Address Can Reveal About You” goes further into the subject, ultimately concluding that

…knowledge of subscriber information, such as phone numbers and IP addresses, can provide a starting point to compile a picture of an individual's online activities, including:

  • Online services for which an individual has registered;

  • Personal interests, based on websites visited; and

  • Organizational affiliations.

It can also provide a sense of where the individual has been physically (e.g., mapping IP addresses to hotel locations, as in the Petraeus case). 

This information can be sensitive in nature in that it can be used to determine a person’s leanings, with whom they associate, and where they travel, among other things.  What’s more, each of these pieces of information can be used to uncover further information about an individual. 

The Federation of German Consumer Organizations has raised concerns that classifying IP address as personal information could create delays and onerous administrative and consent requirements for internet use in Europe, or alternatively could necessitate a reconsideration of some of the provisions of the EU Data Protection Directive.  It would be interesting to hear from a similar Canadian body as to their experiences….or perhaps the CJEU should consider some kind of case study in order to include practical experience into their considerations.

It’s obscurity, not apocalypse: All that the “right to be forgotten” decision has created is the right to ask to have information removed from search engine results.

The National Post recently carried a week of op-eds, all focused on responding to the “right to be forgotten” that was (allegedly) created by the Google Spain decision.   Some extremely well-known and widely-respected experts have weighed in on the subject: 

Ann Cavoukian and Christopher Wolf:

…while personal control is essential to privacy, empowering individuals to demand the removal of links to unflattering, but accurate, information arguably goes far beyond protecting privacy… The recent extreme application of privacy rights in such a vague, shotgun manner threatens free expression on the Internet. We cannot allow the right to privacy to be converted into the right to censor.

Brian Lee Crowley:

This ruling threatens to change the Internet from a neutral platform, on which all the knowledge of humanity might eventually be made available, to a highly censored network, in which, every seven seconds, another person may unilaterally decide that they have a right to be forgotten and to have the record of their past suppressed…We have a duty to remember, not to forget; a duty not to let the past go, simply because it is inconvenient or embarrassing.

Paula Todd:

Should there be exceptions to the principle of “let the public record stand”? Most countries already have laws that do just that — prohibiting and ordering the deletion of online criminal defamation, cyberabuse and images of child sexual abuse, for example. Google, and other search engines, do invent algorithms that position certain results more prominently. Surely a discussion about tweaking those algorithms would have been less draconian than this cyber censorship.

With all due respect to these experts, I cannot help but feel that each of them has missed the central point – pushed by the rhetoric about a “right to be forgotten” into responding to a hypothetical idea rather than the concrete reality of the decision.

It is not about mere reputation grooming.

It is not about suppressing or rewriting history.

It is not about silencing critics.

It is not about scrubbing clean the public record.

It is not about protecting people from the consequences of their actions.

Frankly, this ruling isn’t the creation of a new form of censorship or suppression – rather, it’s a return to what used to be.  The decision sets the stage for aligning new communications media with more traditional lifespans of information and a restoration of the eventual drawing of a curtain of obscurity over information as its timeliness fades. 

Facing the facts:

It is important to be clear that all the “right to be forgotten” decision has created is the right to ask to have information removed from search engine results.   

There is no guarantee that the information will be removed – in its decision the court recognized that while indeed there are situations where removal would be appropriate, each request requires a careful balancing of individual rights of informational self-determination against the public interest.

It is also worth pointing out that this is hardly the only recourse users have to protect and shape online privacy, identity, and reputation. In an Atlantic article about the introduction of Facebook Social Graph, the authors comment that:

Online, obscurity is created through a combination of factors. Being invisible to search engines increases obscurity. So does using privacy settings and pseudonyms. Disclosing information in coded ways that only a limited audience will grasp enhances obscurity, too. Since few online disclosures are truly confidential or highly publicized, the lion's share of communication on the social web falls along the expansive continuum of obscurity: a range that runs from completely hidden to totally obvious.

Other ubiquitous privacy-protective techniques don’t tend to engender the same concerns as the “right to be forgotten”.  Nobody is ringing alarm bells equating privacy settings with censorship – indeed, we encourage the use of privacy settings as responsible online behaviour.  And while there are certainly concerns about the use of pseudonyms, those concerns are focused on accountability, not freedom of speech or access to information.  In fact, the use of pseudonyms is widely considered to facilitate freedom of speech, not prevent it.

Bottom line:

I’m all for freedom of expression.  Not a fan of censorship either.  So I would like to make a heartfelt plea to the community of thinkers who focus on this area of emerging law, policy, and culture: exaggerations and overreactions don’t help clarify these issues and are potentially damaging in the long run.  A clear understanding of what this decision is —and what it is not— will be the best first step towards effective, balanced implementation. 


R v Spencer: a new era of privacy jurisprudence for Canada


The newspapers are trumpeting the Supreme Court of Canada decision in R v Spencer, as well they should.  It was a good, thoughtful decision, one that conveys a strong understanding of privacy. 

The case, on appeal from Saskatchewan, dealt with a situation where police requested (and received) subscriber information from an ISP based on an IP address.  The information was revealed by the ISP in response to a request with no warrant.  The SCC was asked to determine whether this was an unreasonable search and seizure in contravention of s.8 of the Charter and they determined that it was.  

Until this decision, (some) Canadian ISPs were of the opinion that the exceptions for information revealed to certain bodies for law enforcement, security or related purposes, as set out in s. 7 of PIPEDA, authorized the provision of personal information without the necessity of a warrant.  Today’s decision puts an end to that practice.


In examining the subject matter of the search, the court rejected a limited approach that saw the information as merely the name and address of an ISP subscriber, holding that to do so was to miss the fact that the information at issue was the subscriber information as linked to particular Internet activity as well as the inferences that might be drawn from that profile (para 32, emphasis mine). 

The court also employed a new and nuanced tripartite understanding of information privacy, looking at privacy variously as secrecy; as control over information; and as anonymity.  (para 38)

It is this final category of privacy as anonymity where the decision perhaps makes its greatest contribution.  In relation to user online activity, the Court focused extensively on the idea of privacy as anonymity, writing at para 46 that:

Moreover, the Internet has exponentially increased both the quality and quantity of information that is stored about Internet users. Browsing logs, for example, may provide detailed information about users’ interests. Search engines may gather records of users’ search terms. Advertisers may track their users across networks of websites, gathering an overview of their interests and concerns. “Cookies” may be used to track consumer habits and may provide information about the options selected within a website, which web pages were visited before and after the visit to the host website and any other personal information provided...[t]he user cannot fully control or even necessarily be aware of who may observe a pattern of online activity, but by remaining anonymous — by guarding the link between the information and the identity of the person to whom it relates — the user can in large measure be assured that the activity remains private…


Ultimately, the Court concluded that internet users have (or could have) a reasonable expectation of privacy in the anonymity of their online activities.  Given this reasonable expectation of privacy, the police obtaining the subscriber information from the ISP without a warrant was a violation of s. 8 of the Charter and thus an unconstitutional search.

This finding is an important one, and not just for the privacy of individual internet users – consider the current concerns about security and cyberbullying, as expressed in C-13 and S-4.  C-13, the newest iteration of the government’s lawful access legislation combined with cyberbullying provisions, contains provisions for voluntary warrantless disclosure.  The Court’s strong recognition of a constitutional reasonable expectation of privacy in such information is in direct opposition to the presumptions underlying such provisions.  S-4, which would update PIPEDA, has been criticized as expanding the scope of voluntary disclosure, an approach which must also be reconsidered in light of today’s decision.

Government has long sought to justify lawful access-type legislation as creating the new powers necessary to address new technologies.  In its decision, the Court addresses concerns expressed by law enforcement that requiring a warrant could impede or even prevent the investigation of online crime, countering at para 49 that:

In light of the grave nature of the criminal wrongs that can be committed online, this concern cannot be taken lightly. However, in my view, recognizing that there may be a privacy interest in anonymity depending on the circumstances falls short of recognizing any “right” to anonymity and does not threaten the effectiveness of law enforcement in relation to offences committed on the Internet. In this case, for example, it seems clear that the police had ample information to obtain a production order requiring Shaw to release the subscriber information corresponding to the IP address they had obtained.

This case dealt with child pornography – that the SCC was clear a warrant was necessary even in this case indicates that the privacy interest is an important one, not to be overridden easily.  This too should be read as a warning to the Government that expansion of intrusive powers into personal privacy must be grounded in demonstrable issues rather than mere unsupported assertions of necessity.




Protecting Intimacy, Preventing Revenge and Balancing Fundamental Rights

Recent court decisions in Germany and Israel seem to indicate a growing recognition of the importance of personal privacy, as well as of the potential for damage to privacy resulting from disclosure of intimate images and/or details.

In Israel, the Supreme Court just upheld the 2011 decision in Plonit vs. Ploni and Almonit.  The case involved a challenge to the publication of a book, filed against both the author and the publishers.  The book was written by a man and detailed his previous relationship with a female student.  In requesting that the publication be recalled, she claimed that her private and public world were described in graphic detail, including her body, emotions, weaknesses, conscience, activities and preferences for sexual stimulation.  The judge agreed that, whether or not the book was classified as fiction, the plaintiff was sufficiently identifiable that the book was an invasion of her privacy.  Naming both privacy and free speech as fundamental rights, the judge found that the appropriate balance between literary freedom and privacy in this case justified preventing further publication of the book as well as an award of damages to the plaintiff. 

While the German case concerned private possession of images rather than publication of text, the court struck a similar balance, at least in intimate contexts.  At the end of a relationship with a professional photographer, the woman plaintiff requested that he delete photographs and videos of her taken during their relationship.  When he refused, she went to court to enforce her request.  A variety of images were at issue, both erotic and non-erotic, and it was unquestioned that all had been taken with consent.  Nevertheless, the court determined that any consent given to possession of the images was withdrawn at the end of the relationship.  On that basis, grounded in the right to one’s own image and the recognition that intimate images go to the heart of the personality right, the court found that her ex had an obligation to delete, upon request, all nude or otherwise erotic images.  Images characterized as everyday, or otherwise unlikely to compromise her privacy, were excluded from the order and did not have to be deleted on request. 

With the EU recognizing a “right to be forgotten”, the media trumpeted the German decision as upholding a right to one’s own image and claimed a victory for those victimized by revenge porn.  Viktor Mayer-Schönberger notes that the German case is a particular manifestation of European doctrine rather than an iteration of the right to be forgotten: "But what can be said is that these two rulings may make more and more people aware of their personal rights in the digital sphere. At the very least, it should embolden future claimants who pro-actively want to prevent revenge porn."

Charter Challenge to PIPEDA

The Canadian Civil Liberties Association, along with Chris Parsons of the Citizen Lab, has filed a challenge to certain provisions of PIPEDA: specifically, the parts of the Act that allow private corporations to disclose user personal information, without a warrant, to a government institution on a number of grounds, including national security and the enforcement of any law of Canada, a province or a foreign jurisdiction. 

The fact that the information is being obtained from the private sector further complicates things.  As CCLA's General Counsel stated:  "Non-state actors are playing an increasingly large role in providing law enforcement and government agencies with information they request.  The current scheme is completely lacking in transparency and is inadequate in terms of accountability mechanisms."     

CCLA's legal challenge asks that provisions of PIPEDA be struck as an unconstitutional violation of the right to life, liberty and security of the person (s.7) and the right to be free from unreasonable search and seizure (s.8) under the Charter. 

Feeling Safe Doesn’t Mean You Are: Conflating Alarm/Notification with Prevention


Recently, various news stories have trumpeted Kitestring as “a safety app for women” and “an app that makes sure you get home safe.”  In an April 2014 story, service creator Stephan Boyer explains that he founded Kitestring “to keep my girlfriend safe.”  Even feminist blog site Jezebel’s headline invoked the claim that Kitestring makes people safer, though the story itself acknowledges that the value of the service lies in making women feel safer.

Kitestring is a web-based service that takes on the role of a safety call: when enabled, it notifies pre-designated contacts if the user does not check in within a pre-set time period.  Other “safety” apps require some positive action in order to sound an alert – bSafe creates a safety alarm button that must be pushed in order to alert others, while Nirbhaya sends out the alarm message when the phone is shaken.  Kitestring, by contrast, sends out the alert *unless* the positive action of checking in is undertaken.   


What we’ve got here is another iteration of the belief that the more information is collected, the more we can know, predict and protect.  While it’s easier to critique this position when looking at issues like invasive NSA monitoring, even voluntary services like this one carry the same logical flaw.  Without disregarding the importance of a safety call (whether by telephone or through any of these services), equating sounding an alarm with keeping the individual (virtually always identified as female) safe is a dangerous overstatement. 

Public surveillance cameras have long been touted as making public spaces (and those within them) safer.  The evidence doesn’t exactly support these claims, though – studies of CCTV in London, England have consistently found little or no correlation between the presence and/or prevalence of CCTV cameras and crime prevention or reduction.  To put it harshly, public video surveillance (whether recorded or live monitored) won’t prevent me being raped.  The video record may be of assistance in identifying the rapist, but even that is uncertain, depending as it does on the quality of camera and recording, camera positioning, etc. 

I’m not against services like Kitestring – I want people to know if I don’t get home or somehow fall off the grid.  That said, letting people know I’ve gone missing isn’t the same as preventing the problem in the first place.  Headlines that claim “This New App could’ve Prevented My Friends’ Rape” are optimistic at best, misleading at worst. 


Arrest in Lieu of Warrant?

Astounding. 

On 29 April, the Supreme Court of the US, in a 6-3 decision, concluded that someone who has been arrested is "absent" from the premises and therefore cannot refuse to allow a search of their home. 

In a previous case, one dealing with domestic violence, the husband refused to allow law enforcement to enter the premises while the wife consented.  There, the Supreme Court ruled that law enforcement should NOT have entered: though one partner consented, the other was on the premises and refusing.  That refusal, said the court, was sufficient to deny entrance in the absence of a search warrant.

The recent decision turns on the idea of absence, with the majority ruling that someone cannot refuse a search when they are not at home.  Thus, where two occupants disagree about whether to allow entrance to the home and the one refusing consent is arrested, there is no need to get a search warrant, since the arrested occupant is no longer considered "present" and is thus unable to refuse entrance. 

“We therefore hold that an occupant who is absent due to a lawful detention or arrest stands in the same shoes as an occupant who is absent for any other reason,” Alito said.

While I don't disagree that there are situations where an emergency should trump the need to get authorization to enter a private home, in the absence of an emergency law enforcement should NOT be able to subvert the obligation to get a warrant merely by arresting those who stand in their way. 


Protecting Consumer Privacy: Corporate Intrusions Increasingly Being Taken Seriously

On 29 October 2013, the Federal Court of Canada released its decision in Chitrakar v Bell TV. In a victory for privacy rights under PIPEDA, the court found against Bell TV and awarded damages of $21,000 after the company obtained a customer’s credit bureau report without his knowledge or consent. 


The award marks a significant advance for consumer privacy rights by taking the violation of those rights more seriously and awarding exemplary damages – the first time this has been done under PIPEDA and a clear signal from the court that organizations are expected to conform to the spirit as well as the letter of their privacy responsibilities. 

The case revolves around the following sequence of events: Bell ran a credit bureau report on new customer Mr. Rabi Chitrakar on 1 December 2010, when he ordered satellite television service.  When the equipment was delivered on 31 December 2010, Mr. Chitrakar signed what he understood to be a Proof of Delivery form.  Bell later inserted his signature into its standard TB Rental Agreement, which includes a clause consenting to Bell running a credit check.  After Mr. Chitrakar discovered that a credit check had been done, he filed a complaint with Bell in March 2011 and continued to seek an explanation from Bell, which gave him what the court characterizes as “the royal runaround”.  In finding for Mr. Chitrakar, the Federal Court assessed the award at $10,000, with another $10,000 in exemplary damages due to Bell’s conduct and $1,000 for costs. 

PIPEDA, the Personal Information Protection and Electronic Documents Act, allows a complainant to proceed to the Federal Court of Canada after the Privacy Commissioner’s investigation and report.  When this is done, the court has the power to order an organization to correct practices that do not comply with the law, and to publish notices of the changes it expects to make. It can also award compensation for damages suffered.  This award was significantly higher than previous awards under PIPEDA. 

The first award of damages under s. 16 of PIPEDA came in 2010, in Nammo, a case dealing with erroneous information provided on a credit check.  Despite the Court describing the provision of false credit information as being as “intrusive, embarrassing and humiliating as a brief and respectful strip search”, only $5,000 was awarded. 

Clarifying “proof of harm”

In making its determination in Nammo, the Court referred to the Supreme Court of Canada decision in Vancouver (City) v Ward, where damages were awarded despite there being no maliciousness, no intention to harm, and no harm shown.  The decision to make an award was based on the reasoning that awards of damages serve multiple purposes, including compensation, vindication and deterrence, and accordingly on the recognition that even where there is no harm to compensate for, damages may still be warranted where the aims of vindication and/or deterrence are served. 

In the Chitrakar case the damage award was not based on proof of harm.  Rather, the court writes:

[t]he fixing of damages for privacy rights’ violations is a difficult matter absent evidence of direct loss.  However, there is no reason to require that the violation be egregious before damages will be awarded.  To do so would undermine the legislative intent of paragraph 16(c) which provides that damages be awarded for privacy violations including but not limited to damages for humiliation.

I’ve written before about the US requirement for proof of harm in such cases, juxtaposing it against the formula set out in the Ontario Court of Appeal’s Jones v Tsige decision (an important one, because it established a common law tort of privacy in Ontario), and arguing that the breach of privacy is in itself the harm, and that the meaning and purpose of damages for privacy breaches are compromised when proof of (additional) harm is required.

The Chitrakar case underscores the emerging recognition that a privacy violation is in itself harmful and deserving of redress, this time articulated by the Federal Court of Canada in a decision under PIPEDA. This is further evidence of an increased awareness that privacy is an important part of the right to dignity and autonomy, and that redress for violations is justified once the infringement has been established, regardless of whether any further harm has ensued.

This decision is good news—for advocates of privacy rights, for scholars of the interpretation and application of privacy law, and for ordinary consumers trying to protect their personal privacy and dignity.

**thanks to David T.S. Fraser for finding the decision and putting it on Google Docs.