Ontario Privacy in Public Spaces Decision: The Need to Recognize Privacy as a Dignity Baseline, Not an Injury-Based Claim

An Ottawa woman has successfully argued for a privacy right in public spaces.  After video of her jogging along the parkway was included in a commercial, she sued for breach of privacy and appropriation of personality.

"The filming of Mme. Vanderveen's likeness was a deliberate and significant invasion of her privacy given its use in a commercial video," the judge added. 

While pleased with the outcome, I’m a little uncomfortable with the presentation (and not sure whether that’s about the claimant or the media).  It appears that the privacy arguments here were grounded in “dignity”, and particularly in self-image.  That is, at the time the video was taken, the claimant was (or felt herself to be) overweight and had only recently taken up jogging after the birth of her children.  She testified that she thought the video made her look overweight and it caused her anxiety and discomfort. As her lawyer stated, “[s]he’s an incredibly fit person. And here’s this video — she looks fine in it — except that when she sees it, she doesn’t see herself. That’s the dignity aspect of privacy that’s protected in the law.”


In response, the company appears also to have focussed on self-esteem and injury.  “They made the argument that if they don’t use someone’s image in a way that is embarrassing or if they don’t portray someone in an unflattering light — here it is just her jogging and it’s not inherently objectionable — that they should be allowed to use the footage.  In contrast, the claimant argued that how someone sees themself is more important than how a third person sees them.”

Why does this bother me?  For the same reason that the damage threshold bothers me….because invasion of privacy is an injury in and of itself. 

By focussing on her self-image and dignity, we’re left to wonder whether, if another individual had been filmed without their consent, had tried to cover their face when they saw the camera (as did the claimant here) and yet was included in the video, would a court come to the same result?  Or is there some flavour of “intentional infliction of emotional suffering” creeping into this decision?  When the judge states that “I find that a reasonable person, this legally fictitious person who plays an important role in legal determinations, would regard the privacy invasion as highly offensive and the plaintiff testified as to the distress, humiliation or anguish that it caused her” what “injuries” are implicitly being normalized?  The source of the injury seems to be that of being (or believing oneself to look) overweight – is size (and should it be) conflated with humiliation?  The judge concludes that “Mme Vanderveen is concerned about the persona that she presents and about her personal privacy [but] I find that she is not unusually concerned or unduly sensitive about this” – still, I find myself wondering about the social context.  Would a man claiming the same distress/humiliation/anguish in this situation have been taken as seriously?

The judge found that "[t]he photographer was not just filming a moving river, he or she was waiting for a runner to jog along the adjacent jogging trail to advertise the possibility of the particular activity in Westboro."  Because of the desire to capture someone running, part of the damages included an estimate of what it would have cost to hire an actor to run along the river.  This is where the privacy breach takes place – the deliberate capture of an individual’s image, and its use without their knowledge or consent for commercial purposes.

The issue isn’t how she felt about herself, nor whether she like(d) the way she looks in the video – it is the act of making and using the video of her in the first place.  When we focus on the injury to her dignity, we risk misdirecting the focus, making it about the individual rather than about the act of privacy invasion. 

Individuals shouldn’t have to display their wounds in order to be considered worthy of the protection of law.  Rather, the law should penalize those who do not take care to protect and respect privacy.  That’s how we respect dignity – by recognizing privacy as an inherent right possessed by all persons, with a concurrent right not to have it invaded.

Police Bodycams: Crossing the Line from Accountability to Shaming

 

Police bodycams are an emerging high-profile tool in law enforcement upon which many hopes for improved oversight, accountability, even justice are pinned.

When it comes to police bodycams, there are many perspectives:

  • Some celebrate them as an accountability measure, almost an institutionalized sousveillance.
  • For others, they’re an important new contribution to the public record.
  • And where they are not included in the public record, they can at least serve as internal documents, subject to Access to Information legislation.

These are all variations on a theme – the idea that use of police bodycams and their resulting footage are about public trust and police accountability.

But what happens when they’re used in other ways?

In Spokane, Washington, a decision was recently made to use bodycam footage for the purpose of shaming/punishment.  In this obviously edited footage, Sgt. Eric Kannberg deals calmly with a belligerent drunk, using de-escalation techniques even after the confrontation gets physical.  Ultimately, rather than the typical trip to the drunk tank, the officer opts to proceed via a misdemeanor charge and the ignominy of having the footage posted to Spokane P.D.'s Facebook page.  The implications of this approach in terms of privacy, dignity, and basic humanity are far-reaching.

The Office of the Privacy Commissioner of Canada has issued Guidance for the Use of Body-Worn Cameras by Law Enforcement – guidance that strives to balance privacy and accountability.  The Guidelines include:

Use and disclosure of recordings

  • The circumstances under which recordings can be viewed.  Viewing should only occur on a need-to-know basis; if there is no suspicion of illegal activity having occurred and no allegations of misconduct, recordings should not be viewed.
  • The purposes for which recordings can be used and any limiting circumstances or criteria, for example, excluding sensitive content from recordings being used for training purposes.
  • Defined limits on the use of video and audio analytics.
  • The circumstances under which recordings can be disclosed to the public, if any, and parameters for any such disclosure. For example, faces and identifying marks of third parties should be blurred and voices distorted wherever possible.
  • The circumstances under which recordings can be disclosed outside the organization, for example, to other government agencies in an active investigation, or to legal representatives as part of the court discovery process.
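As an aside, the blurring recommendation is not a technically onerous ask these days.  Here is a minimal sketch of a first-pass face-blurring tool – an illustration only, assuming OpenCV and its bundled Haar cascade face detector, and nothing like the OPC's (or any police service's) actual redaction pipeline, which would also need to handle profile views, identifying marks and audio distortion:

    # Minimal sketch: blur detected faces in footage before public release.
    # Illustrative only; real redaction needs far more robust detection.
    import cv2

    def blur_faces(in_path: str, out_path: str) -> None:
        # Haar cascade frontal-face detector that ships with OpenCV
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        video = cv2.VideoCapture(in_path)
        fps = video.get(cv2.CAP_PROP_FPS)
        size = (int(video.get(cv2.CAP_PROP_FRAME_WIDTH)),
                int(video.get(cv2.CAP_PROP_FRAME_HEIGHT)))
        writer = cv2.VideoWriter(
            out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, size)
        while True:
            ok, frame = video.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
                # heavy Gaussian blur over each detected face region
                frame[y:y+h, x:x+w] = cv2.GaussianBlur(
                    frame[y:y+h, x:x+w], (51, 51), 0)
            writer.write(frame)
        video.release()
        writer.release()

The point: where disclosure is legitimately contemplated, the tools to protect third parties are cheap and readily available – failing to use them is a choice.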

Clearly, releasing footage in order to shame an individual would not fall within these parameters. 

After the posted video garnered hundreds of thousands of views, its subject is now threatening to sue.  He is supported by the ACLU, which expressed concerns about both the editing and the release of the footage. 

New technologies offer increasingly powerful tools for policing.  They may also intersect with old strategies of social control such as gossip and community shaming.  The challenge – or at least an important challenge – relates to whether those intersections should be encouraged or disrupted.

As always, a fresh examination of the privacy implications precipitated by the implementation of new technology is an important step as we navigate towards new technosocial norms.

Predictive? Or Reinforcing Discriminatory and Inequitable Policing Practices?

On 31 August 2016, UPTURN released its report on the use of predictive policing.

The report, entitled “Stuck in a Pattern:  Early Evidence on Predictive Policing and Civil Rights” reveals a number of issues both with the technology and its adoption:

  • Lack of transparency about how the systems work
  • Concerns about the reliance on historical crime data, which may perpetuate inequities in policing rather than provide an objective base for analysis (see the sketch after this list)
  • Over-confidence on the part of law enforcement and courts in the accuracy, objectivity and reliability of information produced by the system
  • Aggressive enforcement as a result of (over)confidence in the data produced by the system
  • Lack of audit or outcome-measure tracking with which to assess system performance and reliability
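The second concern – that historical crime data perpetuates inequities – is worth making concrete.  Below is a toy simulation (invented numbers, purely illustrative, and not any vendor's actual model) of the feedback loop the report describes: patrols are dispatched where past recorded crime is highest, crime is only recorded where officers are present to observe it, and the data soon "confirms" the initial skew even though the two districts behave identically.

    # Toy feedback-loop sketch: recorded crime drives patrols, and patrols
    # drive recorded crime. Invented numbers, purely illustrative.
    import random

    random.seed(1)
    true_crime_rate = {"district_a": 0.10, "district_b": 0.10}  # identical!
    recorded = {"district_a": 12, "district_b": 10}  # slight historical skew

    for day in range(200):
        # the "predictive" step: patrol wherever recorded crime is highest
        patrolled = max(recorded, key=recorded.get)
        # crime is only recorded where an officer is present to observe it
        if random.random() < true_crime_rate[patrolled]:
            recorded[patrolled] += 1

    print(recorded)  # district_a pulls far ahead despite identical behaviour

Nothing in a more sophisticated model automatically breaks this loop – which is exactly why the report's calls for audits and outcome tracking matter.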

The report’s authors surveyed the 50 largest police forces in the USA and ascertained that at least 20 of them were using a “predictive policing system”, with another 11 actively exploring options to do so.  In addition, they note that “some sources indicate that 150 or more departments may be moving toward these systems with pilots, tests, or new deployments.”

Concurrent with the release of the report, a number of privacy, technology and civil rights organizations released a statement setting forth (and expanding upon) the following arguments:

  1. A lack of transparency about predictive policing systems prevents a meaningful, well-informed public debate.
  2. Predictive policing systems ignore community needs.
  3. Predictive policing systems threaten to undermine the constitutional rights of individuals.
  4. Predictive policing systems are primarily used to intensify enforcement rather than to meet human needs.
  5. Police could use predictive tools to identify which officers might engage in misconduct, but most departments have not done so.
  6. Predictive policing systems are failing to monitor their racial impact.

Signatories of the statement included:

  • The Leadership Conference on Civil and Human Rights
  • 18 Million Rising
  • American Civil Liberties Union
  • Brennan Center for Justice
  • Center for Democracy & Technology
  • Center for Media Justice
  • Color of Change
  • Data & Society Research Institute
  • Demand Progress
  • Electronic Frontier Foundation
  • Free Press
  • Media Mobilizing Project
  • NAACP
  • National Hispanic Media Coalition
  • Open MIC (Open Media and Information Companies Initiative)
  • Open Technology Institute at New America
  • Public Knowledge

The Right(s) to One’s Own Body

In July, police approached a computer engineering professor in Michigan to assist them with unlocking a murder victim’s phone by 3D-printing the victim’s fingerprints. 

It is a well-established principle of law that ‘there is no property in a corpse.’ This means that the law does not regard a corpse as property protected by rights.  So hey, why not, right? 

There is even an easy argument to be made that this is in the public interest.  Certainly, that seems to be how Professor Anil Jain (to whom the police made the request) feels: “If we can assist law enforcement that’s certainly a good service we can do,” he says.   

Marc Rotenberg, President of the Electronic Privacy Information Center (EPIC), notes that if the phone belonged to a crime suspect, rather than a victim, police would be subject to a Supreme Court ruling requiring them to get a search warrant prior to unlocking the phone—with a 3D-printed finger or otherwise.

I’ve got issues with this outside the victim/suspect paradigm though. 

For instance, I find myself wondering about the application of this to live body parts. 

I’ve always been amused by the R v Bentham case, decided by the UK House of Lords in 2005.  Bentham broke into a house to commit robbery and, in the course of this, used his fingers in his pocket to make a gun shape.  He was arrested.  Though he was originally convicted of possessing a firearm or imitation thereof, that conviction was overturned on the basis that it wasn’t possible for him to “possess” part of his own body.  But…if you can’t “possess” your own body, why wait for death before the State makes a 3D copy of it for its own purposes?

And…we do have legislation about body parts, both live and dead – consider the regulation of organ donation and especially payment for organs.  Consider too the regulation of surrogacy, and of new reproductive technologies. 

Maybe this is a new area to ponder – it doesn’t fit neatly into existing jurisprudence and policy around the physical body.  The increasing use of biometric identifiers to protect personal information inevitably raises new issues that must be examined. 

UPDATE:  It turns out that the 3D-printed fingerprint replica wasn’t accurate enough to unlock the phone.  Undeterred, law enforcement finally used a 2D replica on conductive paper, with the details enhanced/filled in manually.  This doesn’t really change the underlying concern, does it?

Social Media at the Border: Call for Comments: Until 22 August 2016

The US Customs and Border Protection agency is proposing to add new fields to the form people fill out when entering/leaving the country—fields where travelers would voluntarily enter their social media contact information.  The forms would even list social media platforms of interest in order to make it easier to provide the information.

This raises serious concerns.  Some might ask, how can this be controversial if it’s voluntary?  If someone doesn’t want to share the info, the argument goes, they can simply decline.  Case closed.

Unfortunately, it isn’t that simple. This initiative raises some serious questions:

Is it really voluntary?

Are individuals likely to understand that provision of this information is, in fact, voluntary?  If this becomes part of the standard customs declaration form, how many people will just fill it out, assuming that, like the rest of the information on the form, it is mandatory?

Is the consent informed?

Fair information principles require that before people can meaningfully consent to the collection of personal data, they need to understand the answers to the following questions: Why is the information being collected?  To what uses will it be put?  With whom will it be shared?  We have to ask: will the answers to these questions be known?  Will they be shared and visible?  Will they be drawn to the attention of travelers?

Can such consent be freely given?

In a best-case scenario where the field is clearly marked as voluntary and the necessary information about purposes is provided—can such indicia really override our instinctive fear/understanding that failing to “volunteer” such information can be an invitation for increased scrutiny?

Is it relevant?

Even if the problem of mandatory “volunteering” of information is addressed, what exactly is the point?  Is this information in some way relevant?  It is suggested that this initiative is the result of

… increasing pressure to scrutinize social media profiles after the San Bernardino shooting in December of last year. One of the attackers had posted a public announcement on Facebook during the shooting, and had previously sent private Facebook messages to friends discussing violent attacks. Crucially, the private messages were sent before receiving her visa. That news provoked some criticism, although investigators would have needed significantly more than a screen name to see the messages.

If this is meant to be a security or surveillance tool, is it likely to be effective as such?  Will random trawling of social network participation—profiling based on profiles—truly yield actionable intelligence?

Here’s the problem: every individual’s social media presence is inherently performative.  In order to accurately interpret interactions within online social media spaces, it is imperative to recognize that these utterances, performances, and risks are undertaken within a particular community and with a view to acquiring social capital within that particular community.

Many will ask: if information is public, why worry about protections?  Because too often issues of the accuracy, reliability or truthfulness of information in these various “publics” are not considered when defaulting to presumptive publicness as justification.  All such information needs to be understood in context.

Context is even more crucial when such information is being consulted and used in various profiling enterprises, and especially so when it is part of law enforcement or border security. There is a serious risk of sarcasm, artistic expression, mere frustration or hyperbole resulting in the criminalization of individuals who are thoughtless (or indeed simply not thinking along the lines preferred by law enforcement agencies) rather than dangerous. 

The call for comments contains extensive background, but the summary they provide is simple:

U.S. Customs and Border Protection (CBP) of the Department of Homeland Security will be submitting the following information collection request to the Office of Management and Budget (OMB) for review and approval in accordance with the Paperwork Reduction Act: CBP Form I-94 (Arrival/Departure Record), CBP Form I-94W (Nonimmigrant Visa Waiver Arrival/Departure), and the Electronic System for Travel Authorization (ESTA). This is a proposed extension and revision of an information collection that was previously approved. CBP is proposing that this information collection be extended with a revision to the information collected. This document is published to obtain comments from the public and affected agencies.

They are calling for comments. You have until 22 August 2016 to let them hear yours.  https://federalregister.gov/a/2016-14848

When the “Child” in “Child Pornography” is the Child Pornographer

A US decision this month found that a 17-year-old who sent a picture of his own erect penis was guilty of the offence of second degree dealing in depictions of a minor engaged in sexually explicit conduct.

The person in question was already serving a Special Sex Offender Dispositional Alternative (SSODA) as the result of an earlier adjudication for communicating with a minor for immoral purposes when he began harassing one of his mother’s former employees, a 22-year-old single mother with an infant daughter.

That harassment began with telephone calls making sexual sounds or asking sexual questions. On the afternoon of June 2, 2013, she received two text messages: one with a picture of an erect penis, and the other with the message, "Do u like it babe? It's for you. And for Your daughter babe."

The appeal was focused on a couple of questions:

Was charging him with this offence a violation of his freedom of speech?

Key to the reasoning here is the recognition that minors have no greater right than adults to distribute sexually explicit materials involving minors.  To interpret the statute differently, in the opinion of the court, would render the statute meaningless.

The First Amendment does not consider child pornography a form of protected expression. There is no basis for creating a right for minors to express themselves in such a manner, and, therefore, no need to place a limiting construction on a statute that does not impinge on a constitutional right. Accordingly, we conclude that the dealing in depictions of minors statute does not violate the First Amendment when applied to minors producing or distributing sexually explicit photographs of themselves.

Was the offence too vaguely worded?   

The argument here is simple – would a reasonable person really think that sending a photo of his own genitals would constitute the crime of child pornography?  Again, the court deals with this handily, finding the statute wording to be clear.  Whether many teens engage in sexting is immaterial – the test isn’t whether many people aren’t following the law, but rather whether they are unable to understand it.  

Nothing in the text of the statute suggests that there are any exceptions to its anti-dissemination or anti-production language. The statute is aimed at eliminating the creation and distribution of images of children engaged in explicit sexual behavior. It could hardly be any plainer and does not remotely suggest there is an exception for self-produced images.

Finally, the ACLU made arguments on policy grounds.

Was it irrational or counterintuitive that the subject of the photo could also be guilty of its distribution?  The court thinks not – there is no requirement for a specific identified victim, because the focus is on the traffic in images. 

Another policy issue raised was the concern about the application of such a precedent to a case of teens “sexting” each other.  This could potentially have been a stronger policy position had this *been* a case of sexting – but it was not.  This wasn’t communication between equals – it was harassment. 

Is This a Problem?

Clearly, dick pics are ubiquitous these days.  Is this decision an overreaction?  No. It’s not.  Know what else is ubiquitous these days?  Harassment and hate directed at women in online spaces (and offline). 

Reasonable people have raised concerns about the inclusion of registering as a sex offender as part of the sentence. 

To be clear, the sentence was time served, and registration as a sex offender. 

  • Registration of an individual who was already in treatment (thus far ineffective) for communicating with a minor for immoral purposes. 
  • Who had unrelated pending charges (dismissed by agreement of the parties) for indecent exposure.  
  • And for behaviour that was part of a campaign of sexual harassment. 

Frankly, the inclusion of registration as a sex offender is neither unusual nor uncalled for.

 

Data Schadenfreude and the Right to be Forgotten

Oh the gleeful headlines. In the news recently:

Researchers Uncover a Flaw in Europe’s Tough Privacy Rules

NYU Researchers Find Weak Spots in Europe’s “Right to be Forgotten” Data Privacy Law

We are hearing the triumphant cries of “Aha! See? We told you it was a bad idea!”

But what “flaw” did these researchers actually uncover?

The Right to be Forgotten (RTBF), as set out by the Court of Justice of the European Union, recognized that search engines are “data controllers” for the purposes of data protection rules, and that under certain conditions (i.e., where specific information is inaccurate, inadequate, irrelevant or excessive), individuals have the right to ask search engines to remove links to personal information about them.

Researchers were able to identify 30-40% of delisted mass media URLs and in so doing extrapolate the names of the persons who requested the delisting—in other words, identify precisely who was seeking to be “forgotten”. 

This was possible because while the RTBF requires search engines to delist links, it does NOT require newspaper articles or other source material to be removed from the Internet.  RTBF doesn’t require erasure – it is, as I’ve pointed out in the past, merely a return to obscurity.  So actually, the process worked exactly as expected. 
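Indeed, the researchers' core technique is simple enough to sketch.  Delistings were (at the time of the study) applied on Google's European extensions but not on google.com, so a URL that surfaces for a name query on .com while missing from .fr is a delisting candidate – and the still-published article at that URL typically names its subject.  A toy illustration of the inference, with made-up data standing in for live queries against the regional extensions:

    # Toy sketch of the cross-domain comparison described above.
    # Made-up data; the actual study issued real queries per extension.
    RESULTS = {
        ("google.com", "jane doe"): {"news.example/2009/fraud-story",
                                     "jane.example/cv"},
        ("google.fr", "jane doe"): {"jane.example/cv"},  # story delisted in EU
    }

    def probably_delisted(query: str, url: str) -> bool:
        # present on .com but absent from a European extension => candidate
        return (url in RESULTS[("google.com", query)]
                and url not in RESULTS[("google.fr", query)])

    print(probably_delisted("jane doe", "news.example/2009/fraud-story"))  # True

In other words, the "attack" amounts to a set difference plus a source article that remains online – which is precisely what a delisting-only remedy leaves in place by design.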

Of course, the researchers claim that the law is flawed – but let’s examine the RTBF provision in the General Data Protection Regulation.  Article 17’s Right to Erasure sets out a framework under which an individual may request from a data controller the erasure of personal data relating to them and the abstention from further dissemination of such data, and may obtain from third parties the erasure of any links to, or copies or replications of, that data in listed circumstances.  There are also situations set out that would override such a request and justify keeping the data online – legal requirements, freedom of expression, interests of public health, and the necessity of processing the data for historical, statistical and scientific purposes.

This is the context of the so-called “flaw” being trumpeted. 

Again, just because a search engine removes links to materials does NOT mean it has removed the actual materials—it simply makes them harder to find.  There’s no denying that this is helpful—a court decision or news article from a decade ago is difficult to find unless you know what you’re looking for, and without a helpful central search overview such things are more likely to remain buried in the past.  One could consider this a partial return to the days of privacy through obscurity, but “obscurity” does not mean “impenetrable.”  Yes, a team of researchers from New York University Tandon School of Engineering, NYU Shanghai, and the Federal University of Minas Gerais in Brazil was able to find some information.  So too (in the dark ages before search engine indexing) could a determined searcher or team of searchers uncover information through hard work.

So is privacy-through-obscurity a flaw?  A loophole?  A weak spot?  Or is it a practical tool that balances the benefits of online information availability with the privacy rights of individuals? 

It strikes me that the RTBF is working precisely as it should.

The paper, entitled The Right to be Forgotten in the Media: A Data-Driven Study, is available at http://engineering.nyu.edu/files/RTBF_Data_Study.pdf.  It will be presented at the 16th Annual Privacy Enhancing Technologies Symposium in Darmstadt, Germany, in July, and will be published in the proceedings.

 

Should corporations *really* be the arbiters of free speech?

Facebook, Twitter, YouTube and Microsoft – in partnership with the European Commission – have unveiled a new code of conduct regarding hate speech.  This commitment is part of the response to the Brussels terrorist attacks, and is explicitly targeted at countering what can best be described as “terrorist propaganda”.

Hate speech, for these purposes, is set out in Framework Decision 2008/913/JHA of 28 November 2008, and is focussed on “racism and xenophobia”, which are recognized as “direct violations of the principles of liberty, democracy, respect for human rights and fundamental freedoms and the rule of law, principles upon which the European Union is founded and which are common to the Member States.”

Article 1 of the Framework sets out the offences:

(a)  publicly inciting to violence or hatred directed against a group of persons or a member of such a group defined by reference to race, colour, religion, descent or national or ethnic origin;

(b)  the commission of an act referred to in point (a) by public dissemination or distribution of tracts, pictures or other material;

(c)  publicly condoning, denying or grossly trivialising crimes of genocide, crimes against humanity and war crimes as defined in Articles 6, 7, and 8 of the Statute of the International Criminal Court, directed against a group of persons or a member of such a group defined by reference to race, colour, religion, descent or national or ethnic origin when the conduct is carried out in a manner likely to incite to violence or hatred against such a group or a member of such a group;

(d)  publicly condoning, denying or grossly trivialising the crimes defined in Article 6 of the Charter of the International Military Tribunal appended to the London Agreement of 8 August 1945, directed against a group of persons or a member of such a group defined by reference to race, colour, religion, descent or national or ethnic origin when the conduct is carried out in a manner likely to incite to violence or hatred against such a group or a member of such a group;

Under the Code of Conduct, these technology and social media companies commit to reviewing and acting upon notifications for removal of hate speech—removing or disabling access to such content within 24 hours. 

They also commit to educating and raising awareness with their users about the types of content not permitted under these rules and community guidelines.

Call me a cynic, but while I applaud the idea, I don’t have a lot of faith in its implementation.  We’ve witnessed years of vitriol and hatred based on sex, gender, gender identity and expression, and sexual orientation play out online, without much progress in disrupting or addressing it. Certainly the various platforms and companies haven’t been particularly effective in either educating or protecting users. Even when reporting tools and standards are in place, their application has tended to be fairly arbitrary and unreliable.

Apparently I’m not the only one with concerns about this – European Digital Rights (EDRi) and Access Now released a contemporaneous statement announcing the “…decision not to take part in future discussions and confirming that we do not have confidence in the ill-considered ‘code of conduct’ that was agreed.”

EDRi and Access Now’s concerns are not with effectiveness – they cut deeper, questioning both the process by which the code was developed, and the effect of the code:

  • creation of the code of conduct took place in a non-democratic, non-representative way; 
  • the “code of conduct” downgrades the law to a second-class status, behind the “leading role” of private companies that are being asked to arbitrarily implement their terms of service; 
  • the project as set out seems to exploit unclear liability rules for companies; and 
  • there are serious risks for freedom of expression since legal but controversial content could be deleted as a result of this “voluntary” and unaccountable take-down mechanism.

The two organizations emphasize that their separation from the project (and process) should not be construed as indicating a lack of commitment to the underlying aims – as they state:

[c]ountering hate speech online is an important issue that requires open and transparent discussions to ensure compliance with human rights obligations. This issue remains a priority for our organisations and we will continue working for the development of transparent, democratic frameworks

How do we do this going forward?

Must we keep technology companies at the margins of these protection battles?  Or keep their efforts separate from those of the law?

Could tech companies work with (and within) the law?  Can they do so without effectively becoming agents of the state? 

And no matter how we organize ourselves, how can we make such codes and commitments more than lip service – how can we create and encourage efficacy in their application?  Perhaps more to the point, how do we inculcate a real understanding of the price and prevalence of hate speech (of all sorts) in these spaces such that strategies and solutions are developed with an accurate understanding of the issue? 

It IS time to address issues of hate speech and risk/danger online.  It is also time, however, to do so appropriately – to go back to the drawing board and, via broad consultation, develop and implement an authoritative, transparent and enforceable process.  One that, at a minimum:

  • Recognizes the key role(s) of privacy;
  • Identifies civil rights and civil liberties as key issues;
  • Builds transparency and accountability into both the development process and whatever strategy is ultimately arrived at; and
  • Ensures a balanced process in which public sector, private sector, and civil society voices are all heard.

So…let’s start by establishing a multi-stakeholder engagement process tasked with defining the parameters and needs of hate speech protection and developing attendant best practices for privacy, accountability and transparency within that process.  Once that design framework is agreed to, it will be much clearer how best to implement the process, and how to ensure that it is appropriately balanced against concerns around freedom of expression.

Oh, and while we're at it?  If we're going to finally come up with an effective way to address these issues, then let's include sex, gender, gender identity and expression, and sexual orientation in our definition of hate speech.  We know we need to!

The Missing (and Missed) Context in R v Elliott

The decision in the Internet harassment case R v Elliott is a great example of the importance of context in shaping and understanding online interactions – and of what can happen when technology is conflated with context.

Background

The circumstances here are complex, originating in discussions under a hashtagged local political topic.  Ms. Guthrie and Mr. Elliott had previously met in person to discuss the possibility of his designing a poster for an event, though ultimately he was not asked to perform the task.  The conflict between Elliott and Guthrie appears to have begun after a web-based game was released—the object of which was to punch an image of “Feminist Frequency” creator Anita Sarkeesian until bruised and bloody.  They differed on the controversial decision to expose the identity of the anonymous game designer, subjecting him to the opprobrium of both his local community and the online feminist community at large.  From there, relations between Elliott and many members of the feminist community became increasingly fractured and angry.

To show criminal harassment in this case requires establishing:

  • That Elliott engaged in conduct – repeated communication via Twitter;
  • That the communication caused Ms. Guthrie and Ms. Reilly to feel harassed;
  • That he was aware of their harassment;
  • That the harassment caused them to fear for their safety; and
  • That their fear was reasonable in the circumstances.

Breaking Down the Decision

In his long and carefully reasoned decision, Justice Knazan begins with an examination of the technological context in which the case is rooted – the Twitter platform.  The background on Twitter does not include technical specs – rather, it is derived from the evidence of those involved in the case.  A glossary of Twitter terms is provided.  Next, the different means of communication on the platform are elucidated – a direct message, a public message, replying to a message, mentioning someone in a message, retweeting, etc. – along with the (potential) audience for each.
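For readers who don't live on the platform, these distinctions matter because each mode of tweeting reaches a different audience and may or may not notify its target.  A rough model follows – my own simplification of circa-2012 Twitter behaviour, not the judgment's glossary:

    # Rough model of who sees each kind of Twitter communication discussed
    # in the decision. A simplification for illustration; the judgment's
    # glossary (and Twitter's behaviour since) is more nuanced.
    from dataclasses import dataclass

    @dataclass
    class TweetKind:
        name: str
        audience: str
        target_notified: bool

    KINDS = [
        TweetKind("direct message", "sender and recipient only", True),
        TweetKind("public tweet", "the sender's followers; anyone via search", False),
        TweetKind("reply", "public, surfaced mainly to those following both parties", True),
        TweetKind("mention", "public; the named user is notified regardless", True),
        TweetKind("retweet", "the retweeter's followers", True),
        TweetKind("hashtag tweet", "anyone following or searching the hashtag", False),
    ]

    for k in KINDS:
        print(f"{k.name}: {k.audience} (target notified: {k.target_notified})")

The last row is the one that matters most in this case: a tweet to a hashtag reaches everyone watching that hashtag without being addressed to anyone in particular – the grey zone in which much of the conduct at issue took place.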

The court then grappled with the difficulty of examining the communication in its entirety.  The communications took place over an extended period of time, and the complainants did not recall the exact text of each individual message.  Using software and the Twitter platform, the police detective was able to assemble a collection of tweets, but there were issues.

The Court did contemplate the importance of context, with the Judge writing that

I cannot fully understand all the circumstances within the meaning of s. 264 with respect to the proven tweets without seeing the tweets that precede them on the printed-out page. When other tweets appear on the page of the printed tweets, which are in the end the exact product of the Sysomos search, they are then as much a part of the evidence as the original tweets. Their provenance and date are proven just as much as the main tweet that led to the result that Det. Bangild obtained. There is no difference between them and the searched-for tweets, even if no witness has confirmed that they were sent.

Using this expansive view, the court determines that conduct on Twitter can constitute communication for the purposes of this case.

Ms. Guthrie

Turning to the charges, the court first examines Mr. Elliott’s interactions with Ms. Guthrie in order to determine whether the requirements of the charge are met.

  • Yes, there was repeated communication;
  • Yes, the repeated communication made her feel harassed;
  • No, Mr. Elliott did not know that she was harassed, BUT he was reckless in that he was aware of a risk of harassment yet continued his behaviour despite that; and
  • Yes, Ms. Guthrie was fearful.

It is on the issue of whether that fear was reasonable in all the circumstances that the charge relating to Ms. Guthrie fails.  Despite prefacing his analysis with the recognition that:

That Ms. Guthrie is a woman is relevant. Crown counsel submits that “a reasonable person, especially, a woman, would find Mr. Elliott’s tweets and behaviour concerning and scary.” Women are vulnerable to violence and harassment by men, and Ms. Guthrie advocates for understanding and change. I must judge the reasonableness of Ms. Guthrie’s fear in all the circumstances and on the evidence

nevertheless, the circumstances are misapplied.  Looking at the range of tweets and conversation trails, Ms. Guthrie’s active decision to “block” Mr. Elliott on Twitter, her request that he stop contacting her, and his continued participation in group conversations in which she was involved, the judge suggests that while under normal circumstances this would be sufficient to establish reasonableness, in this case it is not.  Instead, focussing on the technological context and reviewing the history between Elliott and Guthrie, he (mis)interprets her anger and frustration about Elliott’s participation on certain hashtags, and from this concludes that Ms. Guthrie’s belief that Elliott’s activities indicated an obsession with her was unreasonable in the circumstances.

With no tweets that explicitly contain sexual or violent threats, and no tweets that the judge interprets as demonstrating the irrationality perceived by Ms. Guthrie, the charge of criminal harassment of Ms. Guthrie fails, because the fear she experienced was not considered reasonable in all the circumstances.

Ms. Reilly

Mr. Elliott communicated directly with Ms. Reilly. He identified her explicitly when publicly complaining about her request that he stop following her feed, called her “fucking nuts”, and told her that “This is Twitter” and that her request that he stop replying to her posts offended his “sensibilities”. He replied to her retweets by asking how she would feel if he was so delusional as to ask her not to retweet him. He also communicated with Ms. Reilly indirectly during the period, by mentioning her in other tweets.

In this situation, the court found that:

  • Yes, there was repeated communication;
  • Yes, the repeated communication made her feel harassed; and
  • Yes, Mr. Elliott did know that she was harassed.

Here, it was the requirement that Ms. Reilly be fearful as a result of the harassment that derailed the case. 

A key feature of Ms. Reilly’s interactions with Mr. Elliott was fear for her physical safety – at one point he made reference to the public location where she and others were meeting.  This led her to fear that he was at that location, and she subsequently checked the room to ensure he was not.  It also created an ongoing concern about his ability to monitor the group’s meetings, and about physical safety in those instances.  Ms. Reilly had even filed a complaint with Twitter in September 2012, stating among other things that “…I am part of a ladies group that meets Mondays, and he is ‘tweet eavesdropping/stalking’ this group, which also leads many of us to be concerned for our safety in real life, as this has now begun to feel like a real life threat.”

The judge appears to have been put off by her behaviour on Twitter – Justice Knazan notes that “Ms. Reilly’s retweeting of forceful, insulting, unconfirmed and ultimately inaccurate attacks [against Elliott] suggesting pedophilia – combined with her tentative, hypothetical concerns that he could possibly move from online to offline harassment, and her knowledge that he never came to the Cadillac Lounge and never again referred to her whereabouts – raises doubt in my mind to whether she was afraid of Mr. Elliott.”

It was this doubt, ultimately, that compromised this charge – the judge was not satisfied beyond a reasonable doubt that the communications resulted in Ms. Reilly’s fearing for her safety. 

The Overlooked Context

Now let’s look at the larger issues of context. 

Early in the decision, Justice Knazan quotes from the McLachlin/L’Heureux-Dubé minority concurring judgement in R. v. R.D.S.:

41     It is axiomatic that all cases litigated before judges are, to a greater or lesser degree, complex. There is more to a case than who did what to whom, and the questions of fact and law to be determined in any given case do not arise in a vacuum. Rather, they are the consequence of numerous factors, influenced by the innumerable forces which impact on them in a particular context. Judges, acting as finders of fact, must inquire into those forces. In short, they must be aware of the context in which the alleged crime occurred.

42     Judicial inquiry into the factual, social and psychological context within which litigation arises is not unusual. Rather, a conscious, contextual inquiry has become an accepted step towards judicial impartiality. In that regard, Professor Jennifer Nedelsky's "Embodied Diversity and the Challenges to Law" (1997), 42 McGill L.J. 91, at p. 107, offers the following comment:

What makes it possible for us to genuinely judge, to move beyond our private idiosyncracies and preferences, is our capacity to achieve an "enlargement of mind". We do this by taking different perspectives into account. This is the path out of the blindness of our subjective private conditions. The more views we are able to take into account, the less likely we are to be locked into one perspective .... It is the capacity for "enlargement of mind" that makes autonomous, impartial judgment possible.

 So what is this larger context?

Both Ms. Guthrie and Ms. Reilly were active in online and offline feminist communities.  Mr. Elliott too was politically active – remember that the original connection here was his volunteering to assist in designing something for an event in which the women were involved.

These communities and conflicts are the context in which they live and work.  And they are the context within which the experiences and fears of Ms. Guthrie and Ms. Reilly must be understood.

There is no indication in the decision that these issues were put forward by the Crown.  Perhaps they were.  Either way, they certainly do not seem to have been factored into “all the circumstances” within which these charges were analyzed and assessed.

The focus on the Twitter platform and the desire to properly parse and understand the technological context within which these charges arose is admirable.  But the technological is not, and is never, the only context within which interactions take place.  By ignoring the broader social and political context, this decision misses an important opportunity to address a growing problem—how to balance freedom of speech with women’s right to participate fully, both online and offline, without fear.


solitary, poor, nasty, brutish and short

Reviewing data from the Family Online Safety Institute led Larry Magid to contemplate cyberbullying, among other things:

But what's even more disturbing is that young adults -- mostly digital natives -- "are more likely than any other demographic group to experience online harassment." Nearly two-thirds of that 18-29 age-group "have been the target of at least one of the six elements of harassment that were queried in the survey. Among those 18-24, the proportion is 70 percent." It's even worse for young women.

You just *know* that it’s considered disturbing because it happens online….whereas I’d bet that these numbers aren’t any different from the numbers of kids who’ve been bullied and ostracized and harassed for years – it’s just that when it happens online, suddenly someone wants to know about it.

I’ll admit, the online aspect does make it more pervasive – if I hadn’t been able to go home at night, to get away at least in part….well, I doubt I’d have made it to adulthood.  With social media and other tools, the harassment can reach out and smack you, no matter where you are.  There are no safe spaces anymore. 

But let’s not pretend this is new or that it has its genesis in technology – up to 70% of those aged 18-24 have experienced harassment?  Bet the numbers are the same or worse in high school.

Hobbes described the life of man as “solitary, poor, nasty, brutish, and short.”  For far too many of us that describes adolescence and young adulthood spent with our “peers”….

 

Peeple: the Commodification of Social Control?

From www.forthepeeple.com

Meet Peeple

We are a concept that has never been done before in a digital space that will allow you to really see how you show up in this world as seen through the eyes of your network.

Peeple is an app that allows you to rate and comment about the people you interact with in your daily lives on the following three categories: personal, professional, and dating.

Peeple will enhance your online reputation for access to better quality networks, top job opportunities, and promote more informed decision making about people.

My first interest in reputation in online spaces came from a particular kind of knowledge – knowledge that any girl who went to high school has – that “reputation” and “dating” are never a good combination.  Such evaluations are never as objective or truthful as they purport to be, and never without a cost to those who are being assessed/rated.  Maybe everyone knows this, but I’m inclined to think that some of us—those who by virtue of our Otherness are inevitably the object of critical review—internalize that knowledge at a much deeper level.

Given this, I confess that I smiled ruefully when I saw a photo of the two founders of Peeple—the self-described “positivity app launching in November 2015” that purports to enable users to rank people the way other apps (think Yelp) rank restaurants and, say, public restrooms. Peeple’s founders are blondish, youngish, and conventionally attractive.

[Photo: Peeple founders Nicole McCullough and Julia Cordray]

I’m not noting their appearance to be dismissive…but I am suggesting (fairly or not) that those who are least likely to have been socially marginalized and ostracized are also perhaps most likely to believe that an app designed to rate and comment on other people could “spread love and positivity.”

Frenzied media coverage has raised many of the most obvious problems with this business idea, including:

  • Users can set up profiles for others without the consent of the person being rated
  • Ratings are inherently subjective
  • There aren’t credible safeguards for accuracy or protections from bias
  • It will be up to a combination of automated software and human site administrators to determine if feedback is “positive” or “negative”, whether to publish it or remove it, etc.
  • It presumes, without evidence, that crowd-sourced opinions are reliable
  • The fundamental concept is an invasion of privacy and threat to reputation
  • The approach objectifies human beings and commoditizes interpersonal relationships.

These are all important concerns, but I’d like to take a step back and look at the larger overarching potential impact of Peeple in terms of creating a state of perpetual surveillance that itself enforces and reinforces particular (mainstream) expectations of behaviour.

This project brings to mind the Panopticon—an architectural concept for institutional buildings, designed so that inmates/inhabitants can be observed from a central point without knowing whether they are being watched at any given moment.  It’s based on philosopher Jeremy Bentham’s assertion that “the more constantly the persons to be inspected are under the eyes of the persons who should inspect them, the more perfectly will the purpose of the establishment have been attained.  Ideal perfection, if that were the object, would require that each person should actually be in that predicament, during every instant of time.”  (Jeremy Bentham, The Panopticon Writings, ed. Miran Bozovic, Letter I.)

Philosopher Michel Foucault later elaborated upon Bentham’s notion of the Panopticon, seeing in it a metaphor for the exercise of power in modern societies.  He explains that “…it arranges things in such a way that the exercise of power is not added on from the outside, like a rigid, heavy constraint, to the functions it invests, but is so subtly present in them as to increase their efficiency by itself increasing its own points of contact.”  (See Michel Foucault, Discipline and Punish: The Birth of the Prison.)

What does this have to do with Peeple?  What overarching control could there be when the app itself clearly states that it is simply sharing feedback?  These reports aren’t happening in a vacuum – inevitably, ratings are made with reference to a shared community standard – setting and reinforcing community norms, and reviewing whether or not individuals have appropriately met or performed those standards.

Traditionally, surveillance within the panopticon was intended to impose and enforce chosen norms/rules/behaviours.  Its goal was the production of “docile bodies” – to remove the need for policing of behaviour via force, replacing it instead with the creation of a state of vulnerability induced by the perception of perpetual visibility that resulted in individuals self-policing their own behaviours towards the desired outcome. 

With Peeple, we run that same risk of creating docile bodies and enforcing desired behaviours – knowing that information is collected and shared will (perhaps inevitably) influence the behaviour of an individual who is subject to those reviews. Anyone who wants to continue active and productive participation in a community must be aware of this information repository and the standards that it maintains and enforces.  The collection and sharing of reputation becomes in essence a form of social control. 

Worryingly, in the case of Peeple, it’s a form of social control that is both privately administered and inherently commodified.

Speaking of the Right to be Forgotten, Could We Please Forget This Fearmongering?

In the wake of the original Right to be Forgotten (RTBF) decision, citizens had the opportunity to apply to Google for removal from its search index of information that was inadequate, irrelevant, excessive and/or not in the public interest.  Google says that since the decision it has received more than 250,000 requests, and that it has concurred with the request and acted upon it in 41.6% of cases.

In France, even where Google accepted/approved a request for delisting, it implemented the delisting only on specific geographical extensions of the search engine – primarily .fr (France), although in some cases other European extensions were included.  This strategy resulted in a duality where information that had been excluded from some search engine results was still available via Google.com and other geographic extensions.  Becoming aware of this, the President of CNIL (France’s data protection authority) formally gave notice to Google that it must delist information on all of its search engine domains.  In July 2015 Google filed an appeal of the order, citing the critiques that have become all-too-familiar – claiming that to do so would amount to censorship, as well as damaging the public’s right to information.

This week, on 21 September 2015, the President of CNIL rejected Google’s appeal for a number of reasons:

  • In order to be meaningful and consistent with the right as recognized, a delisting must be implemented on all extensions.  It is too easy to circumvent a RTBF that applies only on some extensions, which is inconsistent with the RTBF and creates a troubling situation where informational self-determination is a variable right;

  • Rejecting the conflation of RTBF with information deletion, the President emphasized that delisting does NOT delete information from the internet.  Even while removed from search listings, the information remains directly accessible on the source website.

  • The presumption that the public interest is inherently damaged fails to acknowledge that the public interest is considered in the determination of whether to grant a particular request.  RTBF is not an absolute right – it requires a balancing of the interest of the individual against the public’s right to information; and

  • This is not a case where France is attempting to impose French law universally – rather, CNIL “simply requests full observance of European legislation by non European players offering their services in Europe.”

With the refusal of its (informal) appeal, Google is now required to comply with the original CNIL order.  Failure to do so will result in fines that begin in the $300,000 range but could rise as high as 2-5% of Google’s global operating costs.


At what point is it no longer reasonable to expect any privacy at all?

Legal tests around privacy speak of a “reasonable expectation” of privacy.  As we become more and more aware of the many forms that tracking takes, and of the sheer number of trackers we engage with in a given day, how do we retain a reasonable expectation of privacy?  On top of that, government surveillance and private sector data mining are increasing and intersecting, and this too raises the question of the reasonableness of any expectation of privacy.  Is there some kind of knowledge saturation point after which holding any expectation of privacy is presumptively unreasonable?

The Case:  R. v. Pelucco, 2015 BCCA 370 (CanLII), http://canlii.ca/t/gkrd1

The accused was arranging, through text messages, to sell one kilogram of cocaine to Mr. Guray, when the police arrested Mr. Guray and seized his cellphone. Posing as Mr. Guray, they used his cellphone to arrange via text message to meet the accused and then arrested him. Police found drugs in his vehicle, and obtained a search warrant to search his residence, where more drugs were found. He was charged with three drug offences. At trial, he successfully applied to have all evidence excluded, contending that Mr. Guray had been unlawfully arrested, and that the search of the text messages on Mr. Guray’s cellphone violated his own right to be secure against unreasonable search and seizure. The Crown appealed, arguing that because the accused had no control over Mr. Guray’s cellphone, he had no reasonable expectation of privacy in the text messages from him that were recorded on it.

Justice Goepel, in his dissenting opinion in R v. Pelucco, takes on Mr. Pelucco’s expectation of privacy in text messages that he had sent, as well as the larger question of whether any expectation of privacy could be reasonable in any circumstances.  Shutting down the suggestion that increased public knowledge and experience should somehow negate the reasonable expectation of privacy, he states that “the expectation of privacy is not meant to be a factual description of whether Canadians expect to be free from interference from the state, such that the state could reduce subjective expectations of privacy solely through adopting sufficiently invasive techniques.”  Building on this, and citing Tessling and Patrick, he puts forward

…the self-evident principle that the government cannot create an intrusive spying regime, diminishing Canadians’ expectation of privacy as a result, and claim that its conduct does not offend the constitution because Canadians, due to the serious invasion of privacy, no longer expect their affairs to remain private from government agents.

Admittedly these points are made in dissent, but this analysis is not key to his dissent, nor does it stand in opposition to the majority decision.  Although the dissent and majority do ultimately differ in their interpretations of the reasonableness of Mr. Pelucco’s expectation of privacy, that difference is not grounded in a limiting of reasonableness due to common expectations. 

Both majority and dissent, then, agree that Mr. Pelucco himself had a (subjective) expectation of privacy, although they differ when it comes to the question of whether his individual belief was in fact objectively reasonable – the majority working from the assumption that a sender will ordinarily have a reasonable expectation that a text message will remain private in the hands of its recipient, while the dissent concluded instead that once the text message was received by Mr. Guray, Mr. Pelucco lost the ability to control what was done with or to it, and therefore could not reasonably have had an expectation of privacy.

In the end, the case is resolved as follows: (1) it is objectively reasonable that a person would expect that a text message they sent will remain private in the hands of its recipient; (2) on the actual facts of this situation there is nothing unusual or remarkable about the content or circumstances of the text conversation that might contradict or contraindicate this expectation of privacy; and thus (3) the expectation remains.

I am pleased that the criminal aspect of this case has not swayed the analysis – that the recognition in Wong, that whether or not persons have a reasonable expectation of privacy does not depend on whether those persons were engaged in illegal activities, continues to hold.  Pleased, too, at this recent statement and the recognition of another principle of interpretation – that State surveillance and intrusion cannot cloak itself in constitutionality merely by being so prevalent that it is held to diminish or erase any individual’s reasonable expectation of privacy.

LinkedIn, Spam and Reputation

 

A 12 June decision in California regarding LinkedIn illustrates an increasingly nuanced understanding of reputation in the context of online interactions.

When a user is setting up a LinkedIn account, they are led through a variety of screens that solicit personal information. Although most of the information is not mandatory, LinkedIn’s use of a “meter” to indicate the “completeness” of a profile actively encourages the sharing of information.  Among other things, these steps enable LinkedIn to gain access to the address book contact information of the new user, and prompt that user to provide permission to use that contact information to invite those contacts to establish a relationship on LinkedIn. 

The lawsuit alleges that LinkedIn is inappropriately collecting and using this contact information. LinkedIn, pointing to the consent for this use provided by customers, had sought to have the case dismissed.  Judge Koh looked at the whole process and found that while consent was given for the initial email to contacts, LinkedIn also sent two follow up emails to those contacts who did not respond to the original – and that there was no user consent provided for these follow-ups. 

What is interesting about the decision to allow this part of the claim to go forward is Judge Koh’s analysis of harm.  That analysis doesn’t stop with whether LinkedIn has consent for the follow-up emails; rather, she examines what the effect of this practice might be and concludes that it "could injure users' reputations by allowing contacts to think that the users are the types of people who spam their contacts or are unable to take the hint that their contacts do not want to join their LinkedIn network."  Given this, she suggests that users could pursue claims that LinkedIn violated their right of publicity, which protects them from unauthorized use of their names and likenesses for commercial purposes, and violated a California unfair competition law.

 

#YesAllWomen: a digital chorus with the power to transform

In response to the recent Isla Vista shootings, and the misogynistic manifesto of the shooter, Twitter was inundated by #YesAllWomen – a hashtag appended to tweets detailing the many ways in which women’s experiences are shaped by misogyny, sexism and fear.  More than 150,000 tweets had already used the hashtag by 3am Sunday.

Social scientist Steph Herold, in a 22 May 2014 article, reflected on her experience of online activism, creating a viral hashtag, and what she learned from it.  Ms. Herold used Twitter to appeal to women who had chosen abortion to speak out as an act of challenge to anti-choice organizations and agendas, asking them to tell their stories using the hashtag #ihadanabortion.  The thread exploded, with over 10,000 uses of the hashtag in the first day.  This included people sharing their stories, as well as anti-choice activists using the hashtag to shame, while various media and advocacy groups weighed in.  The experience was a transformative one for Ms. Herold, causing her to think extensively about the power of social media and the question of how to measure its effectiveness.  She outlines four ways to create real cultural change around abortion, all of which she insists must be grounded in hard work and activism, not just social media.

I wonder, however, if the very strategies she identifies can’t actually be served by social media rather than being distinct from it. 

I remember reading about the role of consciousness-raising groups in second wave feminism—how the realization that what had seemed like individual inadequacies or inabilities were in fact common to many was instrumental in politicizing and empowering women. Frank exchanges of stories provided important context, perspective, and solidarity. 

As I witnessed the growth of #YesAllWomen and the responses to its messages I felt as though there was the potential for something beyond a viral twitter moment to result from this. 

The Atlantic says simply that “the vast majority of men who explore it with an open mind will come away having gained insights and empathy without much time wasted on declarations that are thoughtless”, an insight eloquently articulated in Neil Gaiman’s tweet: “the #yesallwomen hashtag is filled with hard, true, sad and angry things. I can empathise & try to understand & know I never entirely will.”

The feminist website Jezebel elevates the conversation swirling around the hashtag to an even loftier level, stating “…now with trends like the #YesAllWomen hashtag, we are uprooting everyday sexism, the ideas that perpetuate systematic marginalization, outright violence towards women, rape culture, and demonization of women who deign to stand up for themselves, forcing it out and showing just how pervasive and destructive it is.”

Herold’s four strategies for creating meaningful cultural/policy change are:

  • Address silence, shame and fear;
  • Increase visibility;
  • Transform negative attitudes, beliefs and stereotypes; and
  • Deconstruct myths and misperceptions.

When I look at the powerful acts of speech, visibility and witness that populate #YesAllWomen, it seems to me that we’re seeing precisely these strategies in action.  I don’t want to oversimplify, and I’m certainly not claiming that a weekend’s worth of Twitter posts will in and of themselves lead to social and cultural transformation.  That said, there is a big segment of the population who rarely (because they don’t have to) imagine what it’s actually like to live as another gender (and/or race, ethnicity, sexual orientation…).

It is my hope that the raising of individual voices in the digital chorus of #YesAllWomen, and the larger recognitions they inspire can help remedy that failure of imagination and facilitate the development of empathy.