Reputation & Precarity in the On-Demand Economy

 

A recent Globe & Mail survey of millennials documented the increasing precarity of their employment experiences and their prospects for future employment:

Almost one-quarter of the generation of young adults born between 1981 and 2000 are working temporary or contract jobs, nearly double the rate for the entire job market. Almost one-third are not working in their field of education, 21 per cent are working more than one job, and close to half are looking for a new job.

“Contract jobs” here encompasses what we’ve come to know as the “gig economy” – also known as the “sharing economy” – which covers many different activities.  An April 2017 Canadian Centre for Policy Alternatives (CCPA) survey found that within the Greater Toronto Area the two most popular were ride/car transportation (e.g., Uber and similar services) and cleaning services.  Also represented were home-cooked meals, food delivery, carpooling, home rentals, home repairs, and the sharing of office and parking spaces.

These kinds of platforms are inextricably linked with reputation and ratings systems.  Ratings systems, whether visible or not, are what the gig economy relies upon to regulate the behaviour of service providers and to give consumers of the services risk-management tools – and thus the ability to trust that a service is “reliable”.
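To make those mechanics concrete, here is a minimal sketch (in Python) of the kind of logic such systems are often described as using.  The 5-star scale, the 4.6 deactivation threshold, and the 25-job minimum are my own illustrative assumptions – reported thresholds vary by platform and city, and this is not any platform’s actual code.

```python
# A minimal sketch of gig-platform rating mechanics. All numbers are
# assumptions for illustration, not any real platform's policy.
from dataclasses import dataclass, field

DEACTIVATION_THRESHOLD = 4.6   # assumed cutoff on a 5-star scale
MIN_RATED_JOBS = 25            # assumed minimum before the rule applies

@dataclass
class Provider:
    name: str
    ratings: list[float] = field(default_factory=list)

    def rate(self, stars: float) -> None:
        self.ratings.append(stars)

    @property
    def average(self) -> float:
        return sum(self.ratings) / len(self.ratings) if self.ratings else 0.0

    def is_active(self) -> bool:
        # Below the threshold with enough rated jobs -> removed from the
        # platform. For the worker this is, in effect, being fired.
        if len(self.ratings) < MIN_RATED_JOBS:
            return True
        return self.average >= DEACTIVATION_THRESHOLD

p = Provider("driver-001")
for stars in [5, 5, 4, 5, 3] * 5:        # 25 rated jobs
    p.rate(stars)
print(p.average, p.is_active())          # 4.4 -> deactivated
```

Note what even this toy version makes visible: consumer ratings feed directly into an automated deactivation decision, with no hearing, no appeal, and no employment-law protection for the worker.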

Entitled “Sharing Economy or On-Demand Service Economy”, the CCPA report examines both workers and consumers, reviewing who is providing services within this sector, the conditions of their work, and public perceptions of the need to change regulations to keep up with this new sector.

The report confirms that the sharing (or gig) economy is extremely precarious – because its participants are forced to accept the status of “independent contractor”, they are not employees, and as such they are neither protected by employment law nor contributing towards employment insurance and other social safety net programs.

The report paints a picture of who is doing this kind of work:

  • 48% women, 1% transgender
  • 54% racialized persons
  • 51% have children under the age of 18
  • 90% have some post-secondary education
  • 70% under the age of 45
    • 32% between 18 and 29
    • 39% between 30 and 44

The report indicates that these are not “casual” workers – many of those currently participating expect to still be doing so a year from now, and most (63%) are working at it full time.  Despite that full-time commitment, the work is not particularly remunerative: only 16% make more than 90% of their income from their sharing economy work.  Even so, almost 60% count on it for more than 50% of their income, with racialized and immigrant workers relying on it in even greater numbers.  Most (58%) of these workers have a household income of less than $80,000 per year.

On top of that, as nominally self-employed persons, workers are responsible for the costs (and the risks) of the services they provide, with no backup or support from the platforms that enable and contract for their work.

The report concludes by arguing for strengthened safety regulations to protect workers and consumers.  I don’t disagree that regulation is important and necessary, but I wish the report had focused more on how the gig economy currently purports to regulate itself – via reputation – and on the impacts and risks of that.

Workers cite various reasons for their participation in this economy – 64% do it to make extra money, 63% because they like it, 55% because it’s the only way to make a living right now, and 53% say it’s something to do until they can find something better – but the overall picture that emerges is one of a difficult labour market, shrinking secure full-time employment opportunities, and a rising cost of living.  That means its participants are already under duress – we’re not talking about the free-spirited millennial making some extra cash to travel.  We’re talking about people who are doing what they have to do in order to keep their heads (barely) above water.

That makes it even more problematic when workers providing these services are harassed.  These workers are especially vulnerable to racist and sexual harassment – and they are not covered by the protections in place for traditional employees.  The fact that they are subject to ratings for services provided makes it difficult for workers to protect themselves effectively from harassment, or even to address it after the fact.  And with no social safety net backing them up, they are effectively trapped – they cannot remove themselves from such situations without risking losing everything.

Recent research on the service industry focused on the lower-than-minimum hourly wage allowed in British Columbia and its impact.  The research showed that this wage creates a dependence on tips, which in turn creates the necessity of enduring rudeness and harassment.  These behaviours become a normalized part of employment, with the risks exacerbated for women when employers encourage (or even require) sexualized clothing and demeanor to boost tipping.

Kaitlyn Matulewicz, the researcher, says that:

“The women I interviewed either just like take it, laugh it off, endure it to get through the night or quit when they’ve had enough of what they’re experiencing.”

In the gig economy, there is no hourly wage, and no insurable earnings, meaning no safety net.  This isn’t about supplementing income with tips – it is about continuing to work at all.  A necessity.  And this heightens the power imbalance between workers and consumers in the gig economy, and the precarity experienced by those workers.

 

Get out your pitchforks!

Two fascinating and complementary stories hit the news this week. 

The Breitbart/Shopify business relationship has come under fire on one hand, while at the same time there has been an outcry over PayPal’s refusal to process a payment that appeared to be related to “Syria”.

Here’s the thing – in a democracy you can’t have it both ways.  It’s a bit absurd to complain when PayPal refuses to process donations made to Wikileaks, or won’t accept a payment “because Syria”, and then to vilify Shopify for setting itself up as a neutral, apolitical platform.

In an open letter, Shopify CEO Tobias Lütke clarified the company’s position:

We offer a software service in the form of an ecommerce platform which hundreds of thousands of businesses and entrepreneurs use to sell millions of products online. We are a service provider. We do not, and will not, refuse the Shopify service to anyone based on their political views, sexual orientation, ethnicity, etc. Doing so could set a dangerous precedent of exclusion.

Much has been made of the fact that Shopify’s ToS contains clauses giving it the right to “refuse service to anyone for any reason at any time,” and that it also can “modify or terminate the Service for any reason, without notice at any time.”  Lütke’s response is that the platform will provide service to any organization as long as its activities are within the law.

As someone who makes a hobby of studying online ToS verbiage, I can tell you that the intention of these clauses is not to lay the groundwork for policing the politics of customers.

The open letter also underscores the clear distinction between Shopify’s provision of service to Breitbart and any endorsement or tacit approval of Breitbart.  Lütke writes:

Shopify is an unlikely defender of Breitbart’s right to sell products. I’m a liberally minded immigrant, leading a predominantly liberal workforce, hailing from predominantly liberal cities and countries. I’m against exclusion of any kind — whether that’s restricting people from Muslim-majority nations from entering the US, or kicking merchants off our platform if they’re operating within the law.

…we do not advertise on Breitbart. Breitbart uses Google Adsense to earn income through advertising, and while we do use Google to buy such ads, we specifically instructed Google to not allow any Shopify ads on their site. This has been in place for months.

To be honest, when I first heard about this my knee-jerk reaction was revulsion. I too was put off on a social/cultural/political level that Shopify hosts Breitbart’s e-commerce. But I quickly reminded myself that I’ve also been critical of PayPal’s ethics when cutting off accounts on the basis of political and/or “moral” criteria.

People can, of course, choose to do business with Shopify or not.  But it’s worth considering: what if the next business Shopify opts not to serve is one that you *do* support?  Would you still advocate the exercise of those ToS clauses?

Neutrality as an ideal is important—in business, in law, and especially in business law.  This is not to say neutrality should trump everything – child pornography, hate speech, and other illegal activities are all regulated by the law, not the free market.

Shopify is simply providing an ecommerce platform, and the companies using that platform will thrive or fail based on the free market. 

Neutrality as a concept does not mean only things that are popular should exist. The notion of net neutrality is grounded in an understanding that the Internet is built upon openness. It is precisely this openness that enables people to connect and exchange information freely (as long as the information or service is not illegal). That openness is why the Internet is so powerful and why advocates of net neutrality guard it so fiercely.

If one makes the effort to consider this case objectively, it’s clear that Shopify is articulating and championing that very openness of access. This is to be commended, not condemned.

 

Removing Unlawful Content Isn’t a Right to be Forgotten – It’s Justice

A federal court decision released 30 January 2017 has (re)ignited discussion of the “right to be forgotten” (RTBF) in Canada. 

The case revolved around the behaviour of Globe24h.com (the URL does not appear to be currently available, but it is noteworthy that their Facebook page is still online), a website that republishes Canadian court and tribunal decisions. 

The publication of these decisions is not, itself, inherently problematic.  Indeed, the Office of the Privacy Commissioner (OPC) has previously found that an organization (unnamed in the finding, but presumably CanLII or a similar site) had collected, used and disclosed court decisions for appropriate purposes pursuant to subsection 5(3) of PIPEDA.  The Commissioner determined that the company’s purpose in republishing was to support the open courts principle, by making court and tribunal decisions more readily available to Canadian legal professionals and academics. Further, the company’s subscription-based research tools and services did not undermine the balance between privacy and the open courts principle that had been struck by Canadian courts, nor was the operation of those tools inconsistent with the OPC’s guidance on the issue.  It is important to note that this finding relied heavily on the organization’s decision NOT to allow search engines to index decisions within its database or otherwise make them available to non-subscribers.

In its finding, the OPC references another website – Globe24h.com – about which it had received multiple complaints.  Regarding Globe24h.com, which did allow search engines to index decisions as well as hosting commercial advertising and charging a fee for removal of personal information, the Commissioner found that:

  1. he did have jurisdiction over the (Romanian-based) site, given its real and substantial connection to Canada;
  2. the site was not collecting, using and disclosing the information for exclusively journalistic purposes, and thus was not exempt from PIPEDA’s requirements;
  3. Globe24h’s purpose of making Canadian court and tribunal decisions available through search engines – allowing the sensitive personal information of individuals to be found by happenstance, by anyone, at any time, for any purpose – was NOT one that a reasonable person would consider appropriate in the circumstances; and
  4. although the information was publicly available, the site’s use was not consistent with the open courts principle for which it was originally made available, and thus PIPEDA’s requirement for knowledge and consent did apply to Globe24h.com.

Accordingly, he found the complaints well-founded.

From there, the complaint proceeded to Federal Court, with the Privacy Commissioner appearing as a party to the application.

The Federal Court concurred with the Privacy Commissioner that PIPEDA did apply to Globe24h.com; that the site was engaged in commercial activity; and that its purposes were not exclusively journalistic.  On reviewing its collection, use and disclosure of the information, the Court determined that the exclusion for publicly available information did not apply, and that Globe24h had contravened PIPEDA.

Where it gets interesting is in the remedies granted by the Court.  Strongly influenced by the Privacy Commissioner’s submission, the Court:

  1. issued an order requiring Globe24h.com to correct its practices to comply with sections 5 to 10 of PIPEDA;
  2. relied upon s.16 of PIPEDA, which authorizes the Court to grant remedies addressing systemic non-compliance, to issue a declaration that Globe24h.com had contravened PIPEDA; and
  3. awarded damages in the amount of $5000 and costs in the amount of $300.

The reason this is interesting is the explicit recognition by the Court that:

A declaration that the respondent has contravened PIPEDA, combined with a corrective order, would allow the applicant and other complainants to submit a request to Google or other search engines to remove links to decisions on Globe24h.com from their search results. Google is the principal search engine involved and its policy allows users to submit this request where a court has declared the content of the website to be unlawful. Notably, Google’s policy on legal notices states that completing and submitting the Google form online does not guarantee that any action will be taken on the request. Nonetheless, it remains an avenue open to the applicant and others similarly affected. The OPCC contends that this may be the most practical and effective way of mitigating the harm caused to individuals since the respondent is located in Romania with no known assets. [para 88]

It is this line of argument that has fed response to the decision.  The argument is that, by explicitly linking its declaration and corrective order with the ability of claimants to request that search engines remove the content at issue from their results, the decision has created a de facto RTBF in Canada.

With all due respect, I disagree.  A policy on removing content that a court has declared to be unlawful is not equivalent to a “right to be forgotten.”  RTBF, as originally set out, recognized that under certain conditions (i.e., where specific information is inaccurate, inadequate, irrelevant or excessive), individuals have the right to ask search engines to remove links to personal information about them.  In contrast, the issue here is not that the information is “inaccurate, inadequate, irrelevant or excessive” – rather, it is that the information has been declared UNLAWFUL. 

The RTBF provision of the General Data Protection Regulation – Article 17 – sets out circumstances in which a request for erasure would not be honoured because there are principles at issue that transcend RTBF and justify keeping the data online – legal requirements, freedom of expression, interests of public health, and the necessity of processing the data for historical, statistical and scientific purposes.

We are not talking here about an overarching right to control dissemination of these publicly available court records.  The importance of the open court principle was explicitly addressed by both the OPC and the Federal Court, and weighed in making their determinations.  In so doing, the appropriate principled flexibility has been exercised – the very principled flexibility that is implicit in Article 17.

I do not dispute that a policy conversation about RTBF needs to take place, nor that explicitly setting out parameters and principles would be of assistance going forward.  Perhaps the pending Supreme Court of Canada decision in Google v Equustek Solutions will provide that guidance. 

Regardless, the decision in Globe24h.com does not create a RTBF – rather, the Court exercises its power under PIPEDA to craft appropriate remedies that facilitate justice.

 

Building Trust in (and with) Big Data

 

For Data Privacy Day, the Ontario Information and Privacy Commissioner’s office held an event focused on Government and Big Data (recording available here).

Commissioner Brian Beamish’s opening remarks neatly encapsulate the struggle that is implicit in big data analysis (and indeed, in the notion of implied consent itself).

“I think the public wants the government – expects the government – to deliver services as effectively as possible,” he says. “That said, I think if the privacy risks aren’t recognized and addressed – if the public gets a sense that their privacy is not being respected – there is a definite possibility, or likelihood that public support for these activities will suffer.”

As an example, he spoke of the provincial provision of child and family services, in which multiple agencies and organizations may be involved in one file.  While sharing information between parties may facilitate quick and effective service as well as allowing outcome measure analysis and overall system improvement, it is simultaneously dependent on secondary uses and implied consent.  This may result in clients of the system feeling surveilled and intruded upon – triggering the “creepy factor”.

Later in the event, during the panel discussion, panelists spoke about the importance of trust between citizen and government, and the risk that big data analysis can corrupt or diminish that trust.

The key question that emerged was this: how can we capitalize on the possibilities offered by big data while ensuring both that we respect individual privacy and that individuals know themselves to be protected, so that their trust relationship with government remains intact (or is even improved)?

I’m not sure this can reasonably be accomplished within current legislative parameters.  Big data will require law and policy developed with big data in mind. 

There needs to be a way to ensure that the information used is accurate and trustworthy – information that is collected from secondary sources or publicly available data banks, automatically generated, and/or created through data mining runs the risk of being inaccurate or incomplete.  Analytics may be hampered by a lack of information and a lack of context.  In addition, both the source data and the resulting conclusions may reflect a range of problems. For example, the data may disproportionately represent specific populations while excluding others, may carry the implicit societal biases of its time, or may simply be poorly collected. Results may be misinterpreted through pseudo-scientific reasoning, such as confusing correlation with causation.
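That last pitfall is easy to demonstrate.  The small, self-contained Python sketch below uses entirely synthetic data (an invented assumption for illustration, not any real government dataset): a hidden confounder drives two otherwise unrelated variables, producing a strong correlation that a naive analysis could mistake for causation.

```python
# Toy illustration of correlation mistaken for causation: a hidden
# confounder drives two otherwise unrelated variables, which then
# correlate strongly with each other. All data here is synthetic.
import random

random.seed(42)

n = 1000
confounder = [random.gauss(0, 1) for _ in range(n)]        # e.g. neighbourhood income
x = [c + random.gauss(0, 0.3) for c in confounder]         # service-usage metric A
y = [c + random.gauss(0, 0.3) for c in confounder]         # service-usage metric B

def pearson(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((i - ma) * (j - mb) for i, j in zip(a, b))
    dev_a = sum((i - ma) ** 2 for i in a) ** 0.5
    dev_b = sum((j - mb) ** 2 for j in b) ** 0.5
    return cov / (dev_a * dev_b)

# x and y correlate at roughly 0.9, yet neither causes the other:
# both merely reflect the confounder. An analyst who reads this as a
# causal link would draw exactly the wrong policy conclusion.
print(f"corr(x, y) = {pearson(x, y):.2f}")
```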

Beamish suggested that an effective approach would necessarily include principle-based legislation governing both data linking and big data analytics.  He posited a combination of factors, including the creation of a central data institute with expertise in privacy, human rights, and data ethics; data minimization requirements; privacy impact and threat risk assessments; mandatory breach notification; and appropriate governance and audit oversight. 

As big data becomes more than just a buzzword, and analytics are increasingly integrated into service delivery as well as system design and assessment, addressing this challenge becomes ever more urgent.

The Commissioner’s remarks and the subsequent panel discussions were part of an increasingly important conversation, one in which we all must engage.  But engagement alone is not enough – it is time to explore and develop concrete policies and procedures. It’s time to set parameters and controls on big data—with the emphasis on enhancing the trust relationship between citizens and government as a guiding principle every step of the way.

 

 

not being bullied is a baseline, not something you have to earn

So this post has popped up a few times on my feed:  https://www.indy100.com/article/i-am-the-woman-in-this-picture-and-this-is-what-it-was-like-7444991

And I feel really ambivalent about it.  I mean, obviously she gets to be the authority on her own experience, and I’m in no way trying to usurp that or diminish her voice.  That said…

My thoughts

She writes:

The reason I am sharing this is because people think it is funny to laugh at people with disabilities. You can not see my disabilities but they are there and they are REAL. So next time you see photos making fun of people just remember you know nothing about these people or the struggles they face everyday. It is never just harmless fun to laugh at someone.

In a later addendum, she gives more detail about the spine disease, her mental illnesses, and her obesity, concluding with:

I did not choose to be photographed at a low point in my life. The fact that people assume I am fat because I am lazy is false. Or they assume I am fat because I want to be on disability. Obese people are treated as less than human and as something to ridicule. I just want people to be aware that fat people are people too.

I have tons of sympathy and empathy for this situation.  I *felt* it when she said she hadn’t paid any attention to the flash and the giggles because she was used to people laughing – I get that.  Been there and didn't buy the t-shirt because it didn't come in my size.  And her desire to pre-empt the criticisms she knows (we all know) are coming is also all too familiar – she names her obesity, she links it to mental illness, she distinguishes it from her spinal disease, and she assures the reader that she is fighting her body.  It's understandable self-protection… but while I *understand* that mechanism, I feel as though it also dilutes the post.

Here’s the thing – bullying is wrong.  Laughing at fat people is bullying.  Mocking persons with disabilities (visible or not) is bullying.  Ridiculing the mentally ill is bullying. 

We KNOW it’s wrong.  We shouldn’t (and don’t) have to justify ourselves.  Not being bullied – the right not to be bullied – isn’t something we “earn”, it’s something we have. 

You don’t have to be a good fatty – “I fight my weight daily and I have recently joined a gym” – in order to deserve to be treated like a human being.

You don’t have to be a good crazy –ongoing therapy and trying to get better – in order to deserve to be treated like a human being.

You don’t have to be a good cripple – being a trooper in the face of adversity, doing your best to be “normal” – in order to deserve to be treated like a human being.

You don’t have to be trying to “mitigate” what is “wrong” with you to justify not being bullied. 

Nothing is wrong with you.  Something is wrong with the bullies.  And something is wrong with the social narrative that still requires us to justify our entitlement to being accorded basic dignity and respect. 

AUSTRALIA: Federal Court says Metadata not Personal Information

Australia’s Federal Court has handed down a judgement on the question of what constitutes personal information, and specifically whether the metadata in a telecommunications account was personal information about that individual or whether it was “merely” about the services provided to that individual. 

The information at issue included phone network information such as the IP address, URLs visited on the account, cell tower locations during web use, and data pertaining to inbound calls. 

This is of interest here in Canada, relating as it does to C-51 and lawful access initiatives.

Canadian privacy advocates have long argued that

In terms of Canadian privacy law…metadata must be recognized as constituting personally identifiable information, or Canadians will forever be in the dark about the full range of data that companies are collecting and how it might be being used.

This decision goes against that position, and in so doing strengthens the risk of Canadian telecommunications providers not only withholding this data from their account holders but also sharing it freely with law enforcement agencies, without consideration of PIPEDA’s privacy requirements.

Refusing the Presumptive Inevitability of Rape (in virtual spaces and elsewhere)

To participate, therefore, in this disembodied enactment of life’s most body-centered activity is to risk the realization that when it comes to sex, perhaps the body in question is not the physical one at all, but its psychic double, the bodylike self-representation we carry around in our heads — and that whether we present that body to another as a meat puppet or a word puppet is not nearly as significant a distinction as one might have thought.  (Julian Dibbell, A Rape in Cyberspace, 1993)

 

My First Virtual Reality Groping.  23 years after Dibbell published A Rape in Cyberspace, this article was published this week.  In it, the author details her first experience with VR, playing QuiVr.  After playing solo, she joined a multiplayer session.  While all the characters were physically similar, the players could hear each other speak, and accordingly she could be identified as female via her voice.  Soon into the multiplayer session, another player began to “virtually” assault her (character).

“The virtual groping feels just as real. Of course, you're not physically being touched, just like you're not actually one hundred feet off the ground, but it's still scary as hell,” she says.

It’s hard to say what bothers me most about this article.

The title – the acceptance that this is only the first time, that on some level such assaults are inevitable, built into the fabric of VR, or perhaps of gaming?

The similarity to Dibbell’s report – the fact that 23 years later this is still happening, which suggests the inevitability hinted at in the title is not misplaced?

The call – not just for rules, but for standards to distinguish between annoyance and assault – again reinforcing the idea that this is inevitable, will always be inevitable?

The author writes that “[n]ow that the shock has mostly worn off, I'm faced instead with the residual questions about the unbridled misogyny that spawns from gaming anonymity. It's easy to dismiss the most egregious offenses as the base actions of a few teenage boys, but I don't think it's as rare as a few bad apples.”

I have to agree.  I don’t think this is about a few bad apples.  I’m not sure it’s about apples at all.  I do think it’s about a culture of entitlement – to the space, to the tech, to the bodies within it and to the right to determine who “belongs”. 

Back in 1993, neither the participants nor the “wizards” who oversaw the space were sure what should be done, or how.  Regrettably, we still seem to be trapped in the same uncertainty.  Should “community standards” rule the day?  Should the standards of outside legal systems be applied?  How do we govern these spaces?  WHO governs these spaces?

I don’t have an answer. 

I do, however, have an innate certainty that the inevitability of these rapes must be addressed.

 

Police Bodycams: Crossing the Line from Accountability to Shaming

 

Police bodycams are an emerging high-profile tool in law enforcement upon which many hopes for improved oversight, accountability, even justice are pinned.

When it comes to police bodycams, there are many perspectives:

  • Some celebrate them as an accountability measure, almost an institutionalized sousveillance.
  • For others, they are an important new contribution to the public record.
  • And where they are not included in the public record, they can at least serve as internal documents, subject to Access to Information legislation.

These are all variations on a theme – the idea that use of police bodycams and their resulting footage are about public trust and police accountability.

But what happens when they’re used in other ways?

In Spokane, Washington, a decision was recently made to use bodycam footage for the purpose of shaming and punishment.  In this obviously edited footage, Sgt. Eric Kannberg deals calmly with a belligerent drunk, using de-escalation techniques even after the confrontation turns physical.  Ultimately, rather than meting out the typical visit to the drunk tank, the officer opts to proceed via a misdemeanor charge and the ignominy of having the footage posted to Spokane P.D.'s Facebook page. The implications of this approach for privacy, dignity, and basic humanity are far-reaching.

The Office of the Privacy Commissioner of Canada has issued Guidance for the Use of Body-Worn Cameras by Law Enforcement;  guidance that strives to balance privacy and accountability. The Guidelines include:

Use and disclosure of recordings

The circumstances under which recordings can be viewed, used and disclosed:

  • Viewing should only occur on a need-to-know basis. If there is no suspicion of illegal activity having occurred and no allegations of misconduct, recordings should not be viewed.
  • The purposes for which recordings can be used, and any limiting circumstances or criteria – for example, excluding sensitive content from recordings being used for training purposes.
  • Defined limits on the use of video and audio analytics.
  • The circumstances under which recordings can be disclosed to the public, if any, and parameters for any such disclosure. For example, faces and identifying marks of third parties should be blurred and voices distorted wherever possible.
  • The circumstances under which recordings can be disclosed outside the organization – for example, to other government agencies in an active investigation, or to legal representatives as part of the court discovery process.

Clearly, releasing footage in order to shame an individual would not fall within these parameters. 

After the posted video garnered hundreds of thousands of views, its subject is now threatening to sue.  He is supported by the ACLU, which expressed concerns about both the editing and the release of the footage. 

New technologies offer increasingly powerful new tools for policing.  They may also intersect with old strategies of social control such as gossip and community shaming.  The challenge – or at least an important challenge – relates to whether those intersections should be encouraged or disrupted.

As always, a fresh examination of the privacy implications precipitated by the implementation of new technology is an important step as we navigate towards new technosocial norms.

Predictive? Or Reinforcing Discriminatory and Inequitable Policing Practices?

UPTURN released its report on the use of predictive policing on 31 August 2016.  

The report, entitled “Stuck in a Pattern:  Early Evidence on Predictive Policing and Civil Rights” reveals a number of issues both with the technology and its adoption:

  • Lack of transparency about how the systems work
  • Concerns about the reliance on historical crime data, which may perpetuate inequities in policing rather than provide an objective base for analysis (see the sketch after this list)
  • Over-confidence on the part of law enforcement and courts in the accuracy, objectivity and reliability of information produced by the system
  • Aggressive enforcement as a result of (over)confidence in the data produced by the system
  • Lack of audit or outcome-measures tracking to assess system performance and reliability
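The second concern – historical data perpetuating inequity – can be made concrete with a toy simulation.  Everything below is an invented assumption for illustration: two districts with identical true crime rates, where one starts with more recorded incidents simply because it was historically patrolled more heavily.

```python
# Toy simulation of a predictive-policing feedback loop. Two districts
# have *identical* true crime rates, but district 0 starts with more
# recorded incidents because it was historically patrolled more.
# The "predictive" rule sends patrols where past records are highest;
# patrols then generate new records. All numbers are assumptions.
import random

random.seed(1)

TRUE_CRIME_RATE = 0.5          # identical in both districts
recorded = [30, 10]            # historical records: district 0 over-policed

for day in range(200):
    target = 0 if recorded[0] >= recorded[1] else 1   # patrol the "hotter" district
    if random.random() < TRUE_CRIME_RATE:             # crime is recorded only where we patrol
        recorded[target] += 1

# Despite equal underlying crime, the record gap only widens:
print(recorded)   # district 0 accumulates all the new records
```

Because new records are generated only where patrols are sent, the initial disparity feeds on itself – the system “confirms” its own predictions without the underlying crime rates ever differing.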

The report notes that the authors surveyed the 50 largest police forces in the USA and ascertained that at least 20 of them were using a “predictive policing system”, with another 11 actively exploring options to do so.  In addition, they note that “some sources indicate that 150 or more departments may be moving toward these systems with pilots, tests, or new deployments.”

Concurrent with the release of the report, a number of privacy, technology and civil rights organizations released a statement setting forth the following arguments (and expanding upon them).

  1. A lack of transparency about predictive policing systems prevents a meaningful, well-informed public debate.
  2. Predictive policing systems ignore community needs.
  3. Predictive policing systems threaten to undermine the constitutional rights of individuals.
  4. Predictive policing systems are primarily used to intensify enforcement rather than to meet human needs.
  5. Police could use predictive tools to identify which officers might engage in misconduct, but most departments have not done so.
  6. Predictive policing systems are failing to monitor their racial impact.

Signatories of the statement included:

  • The Leadership Conference on Civil and Human Rights
  • 18 Million Rising
  • American Civil Liberties Union
  • Brennan Center for Justice
  • Center for Democracy & Technology
  • Center for Media Justice
  • Color of Change
  • Data & Society Research Institute
  • Demand Progress
  • Electronic Frontier Foundation
  • Free Press
  • Media Mobilizing Project
  • NAACP
  • National Hispanic Media Coalition
  • Open MIC (Open Media and Information Companies Initiative)
  • Open Technology Institute at New America
  • Public Knowledge

 

The Right(s) to One’s Own Body

In July, police approached a computer engineering professor in Michigan to assist them with unlocking a murder victim’s phone by 3D-printing the victim’s fingerprints. 

It is a well-established principle of law that ‘there is no property in a corpse.’ This means that the law does not regard a corpse as property protected by rights.  So hey, why not, right? 

There is even an easy argument to be made that this is in the public interest.  Certainly, that seems to be how Professor Anil Jain (to whom the police made the request) feels: “If we can assist law enforcement that’s certainly a good service we can do,” he says.   

Marc Rotenberg, President of the Electronic Privacy Information Centre (EPIC) notes that if the phone belonged to a crime suspect, rather than a victim, police would be subject to a Supreme Court ruling requiring them to get a search warrant prior to unlocking the phone—with a 3D-printed finger or otherwise.

I’ve got issues with this outside the victim/suspect paradigm though. 

For instance, I find myself wondering about the application of this to live body parts. 

I’ve always been amused by the R v Bentham case, from the UK House of Lords in 2005. Bentham broke into a house to commit robbery and, in the course of this, used his fingers in his pocket to make a gun shape.  He was arrested.  Though he was originally convicted of possessing a firearm or imitation thereof, that conviction was overturned on the basis that it wasn’t possible for him to “possess” part of his own body.  But… if you can’t “possess” your own body, why wait for death before the State makes a 3D copy of it for its own purposes?

And…we do have legislation about body parts, both live and dead – consider the regulation of organ donation and especially payment for organs.  Consider too the regulation of surrogacy, and of new reproductive technologies. 

Maybe this is a new area to ponder – it doesn’t fit neatly into existing jurisprudence and policy around the physical body.  The increasing use of biometric identifiers to protect personal information inevitably raises new issues that must be examined. 

UPDATE:  It turns out that the 3D printed fingerprint replica wasn’t accurate enough to unlock the phone.  Undeterred, law enforcement finally used a 2D replica on conductive paper, with the details enhanced/filled in manually.  This doesn’t really change the underlying concern, does it? 

How About We Stop Worrying About the Avenue and Instead Focus on Ensuring Relevant Records are Linked?

Openness of information, especially when it comes to court records, is an increasingly difficult policy issue.  We have always struggled to balance the protection of personal information against the need for public information and for justice (and the courts that dispense it) to be transparent.  Increasingly dispersed and networked information makes this all the more difficult. 

In 1991’s Vickery v. Nova Scotia Supreme Court (Prothonotary), Justice Cory (writing in dissent, but in agreement with the Court on these statements) positioned the issue as being inherently about the tension between the privacy rights of an acquitted individual and the importance of court information and records being open.

…two principles of fundamental importance to our democratic society which must be weighed in the balance in this case.  The first is the right to privacy which inheres in the basic dignity of the individual.  This right is of intrinsic importance to the fulfilment of each person, both individually and as a member of society.  Without privacy it is difficult for an individual to possess and retain a sense of self-worth or to maintain an independence of spirit and thought.
The second principle is that courts must, in every phase and facet of their processes, be open to all to ensure that so far as is humanly possible, justice is done and seen by all to be done.  If court proceedings, and particularly the criminal process, are to be accepted, they must be completely open so as to enable members of the public to assess both the procedure followed and the final result obtained.  Without public acceptance, the criminal law is itself at risk.

Historically, the necessary balance has been arrived at less by policy negotiation than by physical and geographical limitations.  When one must physically attend the courthouse to search for and collect information from various sources, the time, expense and effort required function as their own form of protection.  As Elizabeth Judge has noted, however, “with the internet, the time and resource obstacles for accessing information were dramatically lowered. Information in electronic court records made available over the Internet could be easily searched and there could be 24-hour access online, but with those gains in efficiency comes a loss of privacy.”

At least arguably, part of what we have been watching play out with the Right to be Forgotten is a new variation on these tensions.  Access to these forms of information is increasingly easy and general – all it requires is a search engine and a name.  In return, news stories, blog posts, social media discussions and references to legal cases spill across the screen.  With RTBF and similar suggestions, we seek to limit this information cascade to that which is relevant and recent.

This week saw a different strategy employed.  As part of the sentences for David and Collet Stephan – whose infant son died of meningitis due to their failure to access medical care for him when he fell ill – the Alberta court required that notice of the sentence be posted on Prayers for Ezekiel and any other social media sites maintained by and dealing with the subject of their family.  (NOTE:  As of 6 July 2016, this order has not been complied with).

Contrary to some, I do not believe that the requirement to post is akin to a sandwich board, nor that this is about shaming.  Rather, it seems to me that, in an increasingly complex information spectrum, it insists that the sentence be clearly and verifiably linked to information about the issue.  Indeed, I agree that

… it is a clear sign that the courts are starting to respond to the increasing power of social media, and to the ways that criminals can attract supporters and publicity that undermines faith in the legal system. It also points to the difficulties in upholding respect for the courts in an era when audiences are so fragmented that the facts of a case can be ignored because they were reported in a newspaper rather than on a Facebook post.

There has been (and continues to be) a chorus of complaints about RTBF and its supposed potential to frustrate (even censor) the right to KNOW.  Strangely, that same chorus does not seem to be raising their voices in celebration of this decision.  And yet…. doesn’t requiring that conviction and sentence be attached to “news” of the original issue address many of the concerns raised by anti-RTBF forces? 

 

Social Media at the Border: Call for Comments: Until 22 August 2016

The US Customs and Border Protection Agency is proposing to add new fields to the form people fill out when entering/leaving the country – fields where travelers would voluntarily enter their social media contact information.  The forms would even list social media platforms of interest in order to make it easier to provide the information.

This raises serious concerns. Some might ask: how can this be controversial if it’s voluntary?  If someone doesn’t want to share the info, the reasoning goes, then they simply won’t. Case closed.

Unfortunately, it isn’t that simple. This initiative raises some serious questions:

Is it really voluntary?

Are individuals likely to understand that provision of this information is, in fact, voluntary?  If this becomes part of the standard customs declaration form, how many people will just fill it out, assuming that like the rest of the information on the form, it is mandatory? 

Is the consent informed?

Fair information principles require that before people can meaningfully consent to the collection of personal data, they need to understand the answers to the following questions: Why is the information being collected?  To what uses will it be put?  With whom will it be shared?  We have to ask: will the answers to these questions be known?  Will they be shared and visible?  Will they be drawn to the attention of travelers?

Can such consent be freely given?

In a best-case scenario, where the field is clearly marked as voluntary and the necessary information about purposes is provided – can such indicia really overrule our instinctive fear/understanding that failing to “volunteer” such information can be an invitation for increased scrutiny?

Is it relevant?

Even if the problem of mandatory “volunteering” of information is addressed, what exactly is the point?  Is this information in some way relevant?  It is suggested that this initiative is the result of

… increasing pressure to scrutinize social media profiles after the San Bernardino shooting in December of last year. One of the attackers had posted a public announcement on Facebook during the shooting, and had previously sent private Facebook messages to friends discussing violent attacks. Crucially, the private messages were sent before receiving her visa. That news provoked some criticism, although investigators would have needed significantly more than a screen name to see the messages.

If this is meant to be a security or surveillance tool, is it likely to be effective as such? Will random trawling of social network participation – profiling based on profiles – truly yield actionable intelligence?

Here’s the problem: every individual’s social media presence is inherently performative.  In order to accurately interpret interactions within online social media spaces, it is imperative to recognize that these utterances, performances, and risks are undertaken within a particular community, and with a view to acquiring social capital within that particular community.

Many will ask: if information is public, why worry about protections?  Because too often issues of the accuracy, reliability or truthfulness of information in these various “publics” are not considered when defaulting to presumptive publicness as justification.  All such information needs to be understood in context.

Context is even more crucial when such information is being consulted and used in various profiling enterprises, and especially so when it is part of law enforcement or border security. There is a serious risk of sarcasm, artistic expression, mere frustration or hyperbole resulting in the criminalization of individuals who are thoughtless (or indeed simply not thinking along the lines preferred by law enforcement agencies) rather than dangerous. 

The call for comments contains extensive background, but the summary they provide is simple:

U.S. Customs and Border Protection (CBP) of the Department of Homeland Security will be submitting the following information collection request to the Office of Management and Budget (OMB) for review and approval in accordance with the Paperwork Reduction Act: CBP Form I-94 (Arrival/Departure Record), CBP Form I-94W (Nonimmigrant Visa Waiver Arrival/Departure), and the Electronic System for Travel Authorization (ESTA). This is a proposed extension and revision of an information collection that was previously approved. CBP is proposing that this information collection be extended with a revision to the information collected. This document is published to obtain comments from the public and affected agencies.

They are calling for comments. You have until 22 August 2016 to let them hear yours.  https://federalregister.gov/a/2016-14848

When the “Child” in “Child Pornography” is the Child Pornographer

A US decision this month found that a 17-year-old who sent a picture of his own erect penis was guilty of the offence of second degree dealing in depictions of a minor engaged in sexually explicit conduct.

The person in question was already serving a Special Sex Offender Dispositional Alternative (SSODA) as the result of an earlier adjudication for communicating with a minor for immoral purposes when he began harassing one of his mother’s former employees, a 22-year-old single mother with an infant daughter.

That harassment began with telephone calls making sexual sounds or asking sexual questions. On the afternoon of June 2, 2013, she received two text messages: one with a picture of an erect penis, and the other with the message, "Do u like it babe? It's for you. And for Your daughter babe."

The appeal was focused on a couple of questions:

Was charging him with this offence a violation of his freedom of speech?

Key to the reasoning here is the recognition that minors have no greater right than adults to distribute sexually explicit materials involving minors.  To interpret the statute differently, in the opinion of the court, would render the statute meaningless.

The First Amendment does not consider child pornography a form of protected expression. There is no basis for creating a right for minors to express themselves in such a manner, and, therefore, no need to place a limiting construction on a statute that does not impinge on a constitutional right. Accordingly, we conclude that the dealing in depictions of minors statute does not violate the First Amendment when applied to minors producing or distributing sexually explicit photographs of themselves.

Was the offence too vaguely worded?   

The argument here is simple – would a reasonable person really think that sending a photo of his own genitals would constitute the crime of child pornography?  Again, the court deals with this handily, finding the statute wording to be clear.  Whether many teens engage in sexting is immaterial – the test isn’t whether many people aren’t following the law, but rather whether they are unable to understand it.  

Nothing in the text of the statute suggests that there are any exceptions to its anti-dissemination or anti-production language. The statute is aimed at eliminating the creation and distribution of images of children engaged in explicit sexual behavior. It could hardly be any plainer and does not remotely suggest there is an exception for self-produced images.

Finally, the ACLU made arguments on policy grounds.

Was it irrational or counterintuitive that the subject of the photo could also be guilty of its distribution?  The court thinks not – there is no requirement for a specific identified victim, because the focus is on the traffic in images. 

Another policy issue raised was the concern about the application of such a precedent to a case of teens “sexting” each other.  This could potentially have been a stronger policy position had this *been* a case of sexting – but it was not.  This wasn’t communication between equals – it was harassment. 

Is This a Problem?

Clearly, dick pics are ubiquitous these days.  Is this decision an overreaction?  No, it’s not.  Know what else is ubiquitous these days?  Harassment and hate directed at women in online spaces (and offline).

Reasonable people have raised concerns about the inclusion of registering as a sex offender as part of the sentence. 

To be clear, the sentence was time served, and registration as a sex offender. 

  • Registration of an individual who was already in treatment (thus far ineffective) for communicating with a minor for immoral purposes. 
  • Who had unrelated pending charges (dismissed by agreement of the parties) for indecent exposure.  
  • And for behaviour that was part of a campaign of sexual harassment. 

Frankly, the inclusion of registration as a sex offender is neither unusual nor uncalled for.

 

Data Schadenfreude and the Right to be Forgotten

Oh the gleeful headlines. In the news recently:

Researchers Uncover a Flaw in Europe’s Tough Privacy Rules

NYU Researchers Find Weak Spots in Europe’s “Right to be Forgotten” Data Privacy Law

We are hearing the triumphant cries of “Aha! See? We told you it was a bad idea!”

But what “flaw” did these researchers actually uncover?

The Right to be Forgotten (RTBF), as set out by the court, recognized that search engines are “data controllers” for the purposes of data protection rules, and that under certain conditions (i.e., where specific information is inaccurate, inadequate, irrelevant or excessive), individuals have the right to ask search engines to remove links to personal information about them. 

Researchers were able to identify 30-40% of delisted mass media URLs and in so doing extrapolate the names of the persons who requested the delisting—in other words, identify precisely who was seeking to be “forgotten”. 

This was possible because while the RTBF requires search engines to delist links, it does NOT require newspaper articles or other source material to be removed from the Internet.  RTBF doesn’t require erasure – it is, as I’ve pointed out in the past, merely a return to obscurity.  So actually, the process worked exactly as expected. 
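In rough outline, the inference works like the Python sketch below.  Everything in it is simulated – the tiny “search index”, the article URL, and the names are stand-ins I invented, and the actual study worked at scale against real search results – but the logic is the same: the still-online article supplies the candidate names, and a name whose search results omit the delisted URL betrays the likely requester.

```python
# Simplified sketch of the re-identification inference. All data and
# the "search engine" here are simulated stand-ins, not real APIs.

# Simulated state: the article names two people, but the search engine
# has delisted the URL from results for one of them.
ARTICLE_URL = "https://example-news.example/story-123"
ARTICLE_NAMES = ["Alice Example", "Bob Example"]   # names extracted from the still-online article
SIMULATED_INDEX = {
    "Alice Example": ["https://other.example/a"],                # URL delisted for this name
    "Bob Example": ["https://other.example/b", ARTICLE_URL],     # URL still listed for this name
}

def search_results(query: str) -> list[str]:
    """Stand-in for querying an EU search engine for a name."""
    return SIMULATED_INDEX.get(query, [])

def likely_requesters(article_url: str, names: list[str]) -> list[str]:
    # The source article is still online, so every name in it remains
    # readable. A name whose search results omit the delisted URL is a
    # candidate for having filed the delisting request.
    return [n for n in names if article_url not in search_results(n)]

print(likely_requesters(ARTICLE_URL, ARTICLE_NAMES))   # -> ['Alice Example']
```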

Of course, the researchers claim that the law is flawed – but let’s examine the RTBF provision in the General Data Protection Regulation.  Article 17’s Right to Erasure sets out a framework under which an individual may request from a data controller the erasure of personal data relating to them and the abstention from further dissemination of such data, and may obtain from third parties the erasure of any links to, or copies or replications of, that data in listed circumstances.  There are also situations set out that would override such a request and justify keeping the data online – legal requirements, freedom of expression, interests of public health, and the necessity of processing the data for historical, statistical and scientific purposes.

This is the context of the so-called “flaw” being trumpeted. 

Again, just because a search engine removes links to materials does NOT mean it has removed the actual materials—it simply makes them harder to find.  There’s no denying that this is helpful—a court decision or news article from a decade ago is difficult to find unless you know what you’re looking for, and without a helpful central search overview such things will be more likely to remain buried in the past.  One could consider this a partial return to the days of privacy through obscurity, but “obscurity” does not mean “impenetrable.”  Yes, a team of researchers from New York University Tandon School of Engineering, NYU Shanghai, and the Federal University of Minas Gerais in Brazil was able to find some information. So too (in the dark ages before search engine indexing) could a determined searcher or team of searchers uncover information through hard work.

So is privacy-through-obscurity a flaw?  A loophole?  A weak spot?  Or is it a practical tool that balances the benefits of online information availability with the privacy rights of individuals? 

It strikes me that the RTBF is working precisely as it should.

The paper, entitled The Right to be Forgotten in the Media: A Data-Driven Study, is available at http://engineering.nyu.edu/files/RTBF_Data_Study.pdf.  It will be presented at the 16th Annual Privacy Enhancing Technologies Symposium in Darmstadt, Germany, in July, and will be published in the proceedings.

 

ScoreAssured’s unsettling assurances

Hearing a lot of talk about Tenant Assured – an offering from new UK company Score Assured.  Pitched as an assessment tool for “basic information” as well as “tenant worthiness,” Tenant Assured scrapes content from social media sites (the named sites so far are Facebook, Twitter, LinkedIn, and Instagram) – including conversations and private messages – and then runs the data through natural language processing and other analytic software to produce a report.

The report rates the selected individual on five “traits” – extraversion, neuroticism, openness, agreeableness, and conscientiousness.   The landlord never directly views posts of the potential tenant, but the report will include detailed information such as activity times, particular phrases, pet ownership etc.

Is this really anything new?  We know that employers, college admissions offices, and even prospective landlords have long been using social media reviews as part of their background check processes.

Tenant Assured would say that at least with their service the individual is asked for, and provides, consent.  And that is, at least nominally, true.  But let’s face it – consent that is requested as part of a tenancy application is comparable to consent for a background check on an employment application: “voluntary” only if you’re willing to go no further in the process.  Saying “no” is perceived as a warning flag that will likely result in not being hired or not getting housing. Declining jobs and/or accommodations is not a luxury everyone can afford.

Asked about the possibility of a backlash from users, co-founder Steve Thornhill confidently asserted that “people will give up their privacy to get something they want.”  That may be the case…but personally I’m concerned that people may be forced to give up their privacy to get something they urgently need (or quite reasonably want).

But let’s presume for a second that the consent is “freely” given. Problems with this model remain: 

  • Reports may include information such as pregnancy, political opinions, age, etc. – information that is protected by human rights codes.  (Thornhill says, “all we can do is give them the information, it’s up to landlords to do the right thing”)
  • Performed identity – our self-presentation on social media sites is constructed for particular (imagined) audiences.  To remove it from that context does not render it presumptively true or reliable – quite the opposite.
  • Invisibility of standards – how are these traits being assessed?  What values are being associated with particular behaviours, phrases and activities, and are they justified?  An individual who is currently working in a bar or nightclub might show activity and language causing them to receive negative ratings as an excessive partier or as unstable, for instance.  In fact, the Telegraph demonstrated this by running reports on their financial journalists (people who, for obvious reasons, tend to use words like “fraud” and “loan” rather frequently) and, sure enough, the algorithm rated them negatively on “financial stability”.  (A sketch of how such naive scoring can misfire follows this list.)
  • Unlike credit bureaus, which are covered under consumer protection laws, there is no regulation of this sector.  What that means, among other things, is that there is not necessarily any way for an individual to know what is included in their report, let alone challenge the accuracy or completeness of such a report.
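To see how opaque, naive scoring can misfire in exactly the way the Telegraph’s experiment showed, consider this deliberately crude Python sketch.  The word list, weighting, and scoring formula are entirely my own invented assumptions – Score Assured has not published how its models actually work.

```python
# Deliberately naive keyword-based "trait" scoring, to make the
# invisibility-of-standards problem concrete. The word list and the
# weighting below are invented assumptions, not Score Assured's model.

NEGATIVE_FINANCE_WORDS = {"fraud", "loan", "debt", "bankrupt"}

def financial_stability_score(posts: list[str]) -> float:
    """Score from 0.0 (worst) to 1.0 (best) based on crude keyword counts."""
    words = [w.strip(".,!?").lower() for p in posts for w in p.split()]
    if not words:
        return 1.0
    hits = sum(1 for w in words if w in NEGATIVE_FINANCE_WORDS)
    return max(0.0, 1.0 - 10 * hits / len(words))

# A financial journalist's feed: professional vocabulary, personal solvency unknown.
journalist = ["My latest piece on mortgage fraud and payday loan regulation is up."]
print(financial_stability_score(journalist))   # 0.0 -- flagged purely on vocabulary
```

The journalist scores terribly on “financial stability” purely because of professional vocabulary – and with no visibility into the standards and no appeal process, the person being scored may never know why.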

The Washington Post quite correctly identifies this as an exponential change in social media monitoring, writing that “…Score Assured, with its reliance on algorithmic models and its demand that users share complete account access, is something decidedly different from the sort of social media audits we’re used to seeing.  Those are like a cursory quality-control check; this is more analogous to data strip-mining.”

Would it fly here?

Again, we know that background checks and credit checks for prospective tenants aren’t new.  We also know that, in Canada at least, our Information and Privacy Commissioners have had occasion to weigh in on these issues.

In 2004, tenant screening in Ontario suffered a setback when the Privacy Commissioner of Ontario instructed the (then) Ontario Rental Housing Tribunal to stop releasing so much personal information in its final orders.  As a result, names are now routinely removed from the orders, making it significantly more difficult to scrape the records wholesale.  As for individual queries, unless you already know the names of the parties, the particular rental address, and the file number, you will probably not be able to find anything about a person’s history in such matters.

Now, with the release of PIPEDA Report of Findings #2016-002, Feb 19, 2016 (posted 20 May 2016), that line of business is even more firmly shuttered.  There, the OPC investigated the existence of a “bad tenant” list maintained by a landlord association.  The investigation raised numerous concerns about the list:

  • Lack of consent by individuals for their information to be collected and used for such a purpose
  • Lack of accountability – there was no way for individuals to ascertain whether any information about them was on the bad tenant list, who had placed it there, or what the information was. 
  • The landlord association was not assessing the accuracy or credibility of any of the personal information it collected, placed on the list, and regularly disclosed to other landlords, who then made decisions based upon it.
  • Further, there was no mechanism to ensure the accuracy of the information on the list, and no way for individuals to challenge its accuracy or completeness.

It was the finding of the Privacy Commissioner of Canada that by maintaining and sharing this information, the association was acting as a credit reporting agency, albeit without the requisite license from the province.  Accordingly, the Commissioner found that the purpose for which the tenant personal information was collected, used or disclosed was not appropriate under s. 5(3) of PIPEDA.  The association, despite disagreeing with its characterization as a credit bureau, implemented the recommendations: it destroyed the “bad tenant” list, ceased collecting information for such a list, and no longer shares personal information about prospective tenants without explicit consent.

This is good news, but the temptation to monetize violations of privacy continues.  Score Assured has expansive plans: it anticipates launching (by the end of July 2016) similar “report” products targeted at human resources officers and employers, as well as at parents seeking nannies. 

“If you’re living a normal life,” Thornhill asserts, “then, frankly, you have nothing to worry about.”  We all need to ask – who defines “normal”?  And since when is a corporation’s definition of “normal” the standard for access to basic human needs like employment and housing? 

It can happen to anyone....

Maclean’s magazine’s cover story in November 2005 revealed that the magazine had obtained then-Privacy Commissioner of Canada Jennifer Stoddart’s cellphone records.

Now the FTC’s Chief Technologist, Lorrie Cranor, has had a similar experience – someone impersonated her and was able to hijack her cellphone number and acquire two top-of-the-line iPhones. 

Cranor writes:

I was interested in learning where the theft had occurred and how much of my personal information was in the hands of the thief. Section 609(e) of the Fair Credit Reporting Act requires that companies provide business records related to identity theft to victims within 30 days of receiving a written request. So, following the template provided by Identitytheft.gov, I wrote a letter to my carrier requesting all records related to the fraudulent upgrades on my account. After about two months my carrier sent me the records. I learned that the thief had used a fake ID with my name and her photo. She had acquired the iPhones at a retail store in Ohio, hundreds of miles from where I live, and charged them to my account on an installment plan. It appears she did not actually make use of either phone, suggesting her intention was to sell them for a quick profit. As far as I’m aware the thief has not been caught and could be targeting others with this crime.

I’ve said it before – blaming the user for failing to protect themselves adequately doesn’t work, and in fact perpetuates the problem.  Writing about her experience, Cranor is clear that “mobile carriers and third-party retailers need to be vigilant in their authentication practices to avoid putting their customers at risk of major financial loss and having email, social network, and other accounts compromised.”

 

Should corporations *really* be the arbiters of free speech?

Facebook, Twitter, YouTube and Microsoft – in partnership with the European Commission – have unveiled a new code of conduct regarding hate speech.  This commitment is part of the response to the Brussels terrorist attacks, and is explicitly targeted at countering what can best be described as “terrorist propaganda”.

Hate speech, for these purposes, is set out in Framework Decision 2008/913/JHA of 28 November 2008, and is focussed on “racism and xenophobia”, which are recognized as “direct violations of the principles of liberty, democracy, respect for human rights and fundamental freedoms and the rule of law, principles upon which the European Union is founded and which are common to the Member States.”

Article 1 of the Framework Decision sets out the offences:

(a)  publicly inciting to violence or hatred directed against a group of persons or a member of such a group defined by reference to race, colour, religion, descent or national or ethnic origin;

(b)  the commission of an act referred to in point (a) by public dissemination or distribution of tracts, pictures or other material;

(c)  publicly condoning, denying or grossly trivialising crimes of genocide, crimes against humanity and war crimes as defined in Articles 6, 7, and 8 of the Statute of the International Criminal Court, directed against a group of persons or a member of such a group defined by reference to race, colour, religion, descent or national or ethnic origin when the conduct is carried out in a manner likely to incite to violence or hatred against such a group or a member of such a group;

(d)  publicly condoning, denying or grossly trivialising the crimes defined in Article 6 of the Charter of the International Military Tribunal appended to the London Agreement of 8 August 1945, directed against a group of persons or a member of such a group defined by reference to race, colour, religion, descent or national or ethnic origin when the conduct is carried out in a manner likely to incite to violence or hatred against such a group or a member of such a group;

Under the Code of Conduct, these technology and social media companies commit to reviewing and acting upon notifications for removal of hate speech – removing or disabling access to such content within 24 hours. 

They also commit to educating and raising awareness with their users about the types of content not permitted under these rules and community guidelines.

Call me a cynic, but while I applaud the idea, I don’t have a lot of faith in its implementation.  We’ve witnessed years of vitriol and hatred based on sex, gender, gender identity and expression, and sexual orientation play out online, without much progress in disrupting or addressing it. Certainly the various platforms and companies haven’t been particularly effective in either educating or protecting users. Even when reporting tools and standards are in place, their application has tended to be fairly arbitrary and unreliable.

Apparently I’m not the only one with concerns about this – European Digital Rights (EDRi) and Access Now released a contemporaneous statement announcing the “…decision not to take part in future discussions and confirming that we do not have confidence in the ill-considered ‘code of conduct’ that was agreed.”

EDRi and Access Now’s concerns are not with effectiveness – they cut deeper, questioning both the process by which the code was developed, and the effect of the code:

  • creation of the code of conduct took place in a non-democratic, non-representative way; 
  • the “code of conduct” downgrades the law to a second-class status, behind the “leading role” of private companies that are being asked to arbitrarily implement their terms of service; 
  • the project as set out seems to exploit unclear liability rules for companies; and 
  • there are serious risks for freedom of expression since legal but controversial content could be deleted as a result of this “voluntary” and unaccountable take-down mechanism.

The two organizations emphasize that their separation from the project (and process) should not be construed as indicating a lack of commitment to the underlying aims – as they state:

[c]ountering hate speech online is an important issue that requires open and transparent discussions to ensure compliance with human rights obligations. This issue remains a priority for our organisations and we will continue working for the development of transparent, democratic frameworks

How do we do this going forward?

Must we keep technology companies on the boundaries of the protection battles?  Or keep their efforts separate from those of the law?

Could tech companies work with (and within) the law?  Can they do so without effectively becoming agents of the state? 

And no matter how we organize ourselves, how can we make such codes and commitments more than lip service – how can we create and encourage efficacy in their application?  Perhaps more to the point, how do we inculcate a real understanding of the price and prevalence of hate speech (of all sorts) in these spaces, so that strategies and solutions are built on an accurate picture of the issue? 

It IS time to address issues of hate speech and risk/danger online.  It is also time, however, to do so appropriately – to go back to the drawing board and, via broad consultation, develop and implement an authoritative, transparent and enforceable process.  One that, at a minimum:

  • Recognizes the key role(s) of privacy 
  • Identifies civil rights and civil liberties as key issues 
  • Builds transparency and accountability into both the development process and into whatever strategy is ultimately arrived at 
  • Ensures a balanced process in which public sector, private sector, and civil society voices are heard. 

So…let’s start by establishing a multi-stakeholder engagement process tasked with defining the parameters and needs of hate speech protection and developing attendant best practices for privacy, accountability and transparency within that process.   Once that design framework is agreed to, it will be much clearer how best to implement the process; and how to ensure that it is appropriately balanced against concerns around freedom of expression.

Oh, and while we're at it?  If we're going to finally come up with an effective way to address these issues, then let's include sex, gender, gender identity and expression, and sexual orientation in our definition of hate speech.  We know we need to!

I Was Just Venting: Liability for Comments on One's Facebook Page

We’ve all used social media to vent about *something* – a bad day, a jerk on the bus, an ex – whatever is enraging us at the moment.   It’s arguable whether we intend those posts to be taken seriously or whether they’re just hyperbole.  The nature of venting is, after all, about release.  It’s cathartic.

But….what if you could be held liable for your venting?

Worse yet, what if you were held liable for what your friends said or did in response?

Sound crazy?  Turns out it’s possible…

Pritchard v Van Nes – picture it – British Columbia… <dissolve scene>

Mr. Pritchard and his family moved in next door to Ms. Van Nes and her family in 2008.  The trouble started in 2011, when the Van Nes family installed a two-level, 25-foot long, and 2-waterfall “fish pond” along their rear property line.  The (constant) noise of the water disturbed and distressed the Pritchards, who started out (as one would) by speaking to Ms. Van Nes about their concerns.

Alas, rather than getting better, the situation kept getting worse:

  • the noise of the fish pond was sometimes drowned out by late-night parties thrown by the Van Nes family;

  • when the Pritchards complained about the noise, the next party included a loud explosion that Ms. Van Nes claimed was dynamite;

  • the lack of fence between the yards meant that the Van Nes children entered the Pritchard yard;

  • the lack of fence also allowed the Van Nes’ dog to roam (and soil) the Pritchard yard, as evidenced by more than 20 complaints to the municipality; and

  • parking (or allowing their guests to park) so as to block the Pritchards’ access to their own driveway.  When the Pritchards reported these obstructions to police, it only exacerbated tensions between the parties.

On June 9, 2014 tensions came to a head.  Ms. Van Nes published a Facebook post that included photographs of the Pritchard backyard:

Some of you who know me well know I’ve had a neighbour videotaping me and my family in the backyard over the summers.... Under the guise of keeping record of our dog...
Now that we have friends living with us with their 4 kids including young daughters we think it’s borderline obsessive and not normal adult behavior...
Not to mention a red flag because Doug works for the Abbotsford school district on top of it all!!!!
The mirrors are a minor thing... It was the videotaping as well as his request to the city of Abbotsford to force us to move our play centre out of the covenanted forest area and closer to his property line that really, really made me feel as though this man may have a more serious problem.

The post prompted 57 follow-ups – 48 of them from Facebook friends, and 9 by Ms. Van Nes herself. 

The narrative (and its attendant allegations) developed from hints to insinuations to flat out statements that Mr. Pritchard was variously a “pedophile”, “creeper”, “nutter”, “freak”, “scumbag”, “peeper” and/or “douchebag”.

Not content to keep this speculation on the Facebook page, a friend of Ms. Van Nes actually shared the post on his own Facebook page and encouraged others to do the same, and further suggested that Ms. Van Nes contact the principal of the school where Mr. Pritchard taught and “use his position as a teacher against him.  I would also send it to the newspaper.  Shame is a powerful tool.”

The following day, that same friend emailed the school principal, attaching the images from Ms. Van Nes’ page, some (one-sided) details of the situation and the warning that “I think you have a very small window of opportunity before someone begins to publicly declare that your school has a potential pedophile as a staff member. They are not going to care about his reasons – they care that kids may be in danger.”

That same day, another community member (Ms. Regnier, whose children had been taught by Mr. Pritchard and who believed him to be an excellent teacher and a valuable resource for the school and community) became aware of Ms. Van Nes’ accusations and went to the school to inform Mr. Pritchard that accusations that he was a pedophile had surfaced on Facebook.  After talking with Mr. Pritchard, she accompanied him to the office to speak with the principal (Mr. Horton), who had already received the email warning about Mr. Pritchard.  Mr. Horton contacted his superior, who, Mr. Horton testified, seemed shocked, asking Mr. Horton whether he believed the allegations; Mr. Horton said he did not, although he testified that he was concerned, as the allegations reflected poorly on him and the school.  He testified that had the allegations been substantiated, Mr. Pritchard would have had his teaching license revoked.

Tracking the allegations back to Ms. Van Nes’ Facebook page, Mr. Pritchard’s wife printed out the posts and Ms. Van Nes’ friends list.  They took this material with them to the police station to file a complaint.  Later that evening a police officer arrived at the Pritchard home to collect more details – when the Pritchards attempted to show him the content on Facebook they found that it was no longer accessible. 

Altogether, the post was visible on Ms. Van Nes’ Facebook page for approximately 27 ½ hours.  Its deletion, however, did not remove copies that had been placed on other Facebook pages or shared with others, nor could it prevent the spread of information. 

The effects have been many:

There was at least one child of one of Ms. Van Nes’ “friends” who commented on the posts, who was removed from his music programs. The next time he organized a band trip out of town and sought parent volunteers to be chaperones, he was overwhelmed with offers; that had never previously been the case. He feels that he has lost the trust of parents and students. He dreads public performances with the school music groups. Mr. Pritchard finds he is now constantly guarded in his interactions with students; for example, whereas before he would adjust a student’s fingers on an instrument, he now avoids any physical contact to shield himself from allegations of impropriety. He has cut back on his participation in extra-curricular activities. He has lost his love of teaching; he no longer finds it fun, and he wishes he had the means to get out of the profession. He considered responding to a private school’s advertisement for a summer employment position but did not because of a concern that the posts were still “out there”. Knowing that at least one prominent member of the community saw the posts and commented on them, he feels awkward, humiliated and stressed when out in public, wondering who might know about the Facebook posts and whether they believe the lies that were told about him.
Mr. Pritchard also testified as to how frightened he was that some of the posts suggested he should be confronted or threatened. Mr. Pritchard and his wife both testified that a short time after the posts, their doorbell was rung late at night, and their car was “keyed” in their driveway, an 80 cm scratch that cost approximately $2,000 to repair. His wife also testified to finding large rocks on their driveway and their front lawn.
They also both testified that their two sons, both of whom attended the school where their father teaches, are aware of the Facebook posts, and have appeared to be upset and worried as to the consequences.
Mr. Pritchard testified that he thinks it is unlikely that he could now get a job in another school district. He acknowledged that in fact he has no idea how far and wide the posts actually spread, but he spoke with conviction as to this belief, and I find the fact that he holds this belief to be an illustration of the terrible psychological impact this incident has had.

Who Is Liable and For What?

It’s a horrible tale, and nobody wins.  But what does the court have to say about it?

The claim for nuisance – that is, interference with Mr. Pritchard’s use and enjoyment of his land – is pretty clear.  Both the noise from the waterfall and the two years of the Van Nes’ dog defecating on the yard were clear interferences.  A permanent injunction was issued prohibiting operation of the waterfall between 10pm and 7am.  The judge also awarded $2,000 for the waterfall noise, and a further $500 for the dog feces. 

The real issue here is, of course, the claim for defamation. 

Is Ms. Van Nes responsible for her own defamatory remarks?  Yes she is.  The remarks and their innuendo were defamatory, and were published to at least the persons who responded, likely to all 2059 of her friends, and (given Ms. Van Nes’ failure to use any privacy settings) viewable to any and all Facebook users.

Is Ms. Van Nes liable for the republication of her defamatory remarks by others? Republication, in this case, happened both on Facebook and via the letter to the school principal.  Yes she is, because she authorized those republications.  Looking at all the circumstances here, especially her frequent and ongoing engagement with the comment thread, the judge found that Ms. Van Nes had constructive knowledge of the comments of Mr. Parks (the friend who shared the post and emailed the principal) soon after they were made.

Her silence, in the face of Mr. Parks’ statement, “why don’t we let the world know”, therefore effectively served as authorization for any and all republication by him, not limited to republication through Facebook. Any person in the position of Mr. Parks would have reasonably assumed such authorization to have been given. I find that the defendant’s failure to take positive steps to warn Mr. Parks not to take measures on his own, following his admonition to “let the world know”, leads to her being deemed to have been a publisher of Mr. Parks’ email to Mr. Pritchard’s principal, Mr. Horton.

Is Ms. Van Nes liable for defamatory third-party Facebook comments?  Again, the answer is yes.  The judge sets out the test for such liability as:  (1) actual knowledge of the defamatory material posted by the third party; (2) a deliberate act or deliberate inaction; and (3) power and control over the defamatory content.  If these three factors can be established, it can be said that the defendant has adopted the third party defamatory material as their own.

In the circumstances of the present case, the foregoing analysis leads to the conclusion that Ms. Van Nes was responsible for the defamatory comments of her “friends”. When the posts were printed off, on the afternoon of June 10th, her various replies were indicated as having been made 21 hours, 16 hours, 15 hours, 4 hours, and 3 hours previously. As I stated above, it is apparent, given the nine reply posts she made to her “friends”’ comments over that time period, that Ms. Van Nes had her Facebook page under, if not continuous, then at least constant viewing. I did not have evidence on the ability of a Facebook user to delete individual posts made on a user’s page; if the version of Facebook then in use did not provide users with that ability, then Ms. Van Nes had an obligation to delete her initial posts, and the comments, in their entirety, as soon as those “friends” began posting defamatory comments of their own. I find as a matter of fact that Ms. Van Nes acquired knowledge of the defamatory comments of her “friends”, if not as they were being made, then at least very shortly thereafter. She had control of her Facebook page. She failed to act by way of deleting those comments, or deleting the posts as a whole, within a reasonable time – a “reasonable time”, given the gravity of the defamatory remarks and the ease with which deletion could be accomplished, being immediately. She is liable to the plaintiff on that basis.

With all three potential forms of liability for defamation established, the judge awarded Mr. Pritchard $50,000 in general damages and an additional $15,000 in punitive damages.

But I Was Just Venting…

A final thought from the judgement – one that takes into account the medium and the dynamic of Facebook. 

I would find that the nature of the medium, and the content of Ms. Van Nes’ initial posts, created a reasonable expectation of further defamatory statements being made. Even if it were the case that all she had meant to do was “vent”, I would find that she had a positive obligation to actively monitor and control posted comments. Her failure to do so allowed what may have only started off as thoughtless “venting” to snowball, and to become perceived as a call to action – offers of participation in confrontations and interventions, and recommendations of active steps being taken to shame the plaintiff publically – with devastating consequences. This fact pattern, in my view, is distinguishable from situations involving purely passive providers. The defendant ought to share in responsibility for the defamatory comments posted by third parties, from the time those comments were made, regardless of whether or when she actually became aware of them.

So go ahead. Vent all you want. But your responsibility may extend further than you think…proceed with caution.

Revenge Porn: In Ontario, You’ll Pay With More Than Karma

Doe 464533 v N.D. is a January 2016 decision from the Ontario Superior Court of Justice that makes a strong statement that those who engage in revenge porn will pay with more than just karma points!

The case involved an 18-year-old girl, away at university but still texting, phoning, emailing and otherwise connecting with her ex-boyfriend. Though the formal relationship had ended in spring, they continued to see each other “romantically” through the summer and into that autumn.  These exchanges included him sending multiple intimate photos and videos of himself, and requesting the same of her. 

After months of pressure, she made an intimate video but was still uncomfortable sharing it.  She texted him making her misgivings clear; he reassured her that no one else would ever see the video.  Eventually, despite those misgivings, she relented and sent it to him.

Shortly thereafter, she learned that her ex had, on the same day he received it, posted the video to an online website.  He was also sharing it with some of their high school classmates.  She was devastated and humiliated by the discovery, leading to emotional and physical distress that required ongoing counselling, as well as suffering academically and socially. 

The video was online for approximately three weeks before his mother (hearing of the incident from the victim) forced him to remove it.  As the Judge points out, “[t]here is no way to know how many times it was viewed or downloaded during that time, or if and how many times it may have been copied onto other media storage devices…or recirculated.”

The damage is not, of course, limited to that three-week period – it is persistent and ongoing.  She continues to struggle with depression and anxiety.  She lives with the knowledge that former classmates and community members are aware of the video (and in some cases have viewed it), something that has caused harm to her reputation. In addition, she is concerned about the possibility that the video may someday resurface and have an adverse impact on her employment, her career, or her future relationships.

 

The police declined to become involved due to the age(s) of those involved, but she did bring a civil action against him. 

She was successful on her claim of breach of confidence.

She was successful on her claim of intentional infliction of mental distress.

But where it gets really interesting is in Justice Stinson’s assessment of the invasion of privacy claim.

Building upon the recognition of a tort of intrusion upon seclusion in Ontario, he returns to that analysis to locate the injury here as one not of intrusion but of public disclosure of embarrassing facts.  

Normally, the three factors necessary to show such a tort would be:

  1. The disclosure must be a public one; 
  2. The facts disclosed must be private; and
  3. The matter made public must be one which would be offensive and objectionable to a reasonable man of ordinary sensibilities.

It is incontrovertible that the video was publicly disclosed. The subject matter of the video – apparently her masturbating – is certainly private.  The first two elements are made out. 

Here is where the judge wins my heart – he refuses to layer sexual shame on an already victimized plaintiff.  Instead of focussing on the subject of the video (her masturbating), he modifies the final element: it is enough that either the matter publicized or the act of publication itself would be highly offensive to a reasonable person.

In this case, it is the behaviour of the ex that is offensive:

…the defendant posted on the Internet a privately-shared and highly personal intimate video recording of the plaintiff. I find that in doing so he made public an aspect of the plaintiff’s private life. I further find that a reasonable person would find such activity, involving unauthorized public disclosure of such a video, to be highly offensive. It is readily apparent that there was no legitimate public concern in him doing so.

Justice Stinson issues an injunction directing the ex to immediately destroy any and all intimate images or recordings of the plaintiff in whatever form they may exist that he has in his possession, power or control.  A further order permanently prohibits him from publishing, posting, sharing or otherwise disclosing in any fashion any intimate images or recordings of her.  Finally, he is permanently prohibited from communicating with her or members of her immediate family, directly or indirectly.

As for damages, the judge notes that her claim is procedurally capped at $100,000.  He then considers the following:

  • The circumstances of the victim at the time of the events, including factors such as age and vulnerability. The plaintiff was 18 years old at the time of the incident, a young adult who was a university student. Judging by the impact of the defendant’s actions, she was a vulnerable individual;
  • The circumstances of the assaults, including their number, frequency, and how violent, invasive and degrading they were. The wrongful act consisted of uploading to a pornographic website a video recording that displayed intimate images of the plaintiff. The defendant’s actions were thus very invasive and degrading. The recording was available for viewing on the Internet for some three weeks. It is impossible to know how many times it was viewed, copied or downloaded, or how many copies still exist elsewhere, out of the defendant’s (and the plaintiff’s – and the Court’s) control. As well, the defendant showed the video to his friends, who were also acquaintances of the plaintiff. Although there was no physical violence, in these circumstances, especially in light of the multiple times the video was viewed by others and, more importantly, the potential for the video still to be in circulation, it is appropriate to regard this as tantamount to multiple assaults on the plaintiff’s dignity;
  • The circumstances of the defendant, including age and whether he or she was in a position of trust. The defendant was also 18 years of age. He and the plaintiff had been in an intimate – and thus trusting – relationship over a lengthy period. It was on this basis, and on the basis of his assurances that he alone would view it, that he persuaded her to provide the video. His conduct was tantamount to a breach of trust; and
  • The consequences for the victim of the wrongful behaviour, including ongoing psychological injuries. As described above, the consequences were emotionally and psychologically devastating for the plaintiff and are ongoing.

He awards:

General damages:  $50,000

Aggravated damages (where injury was aggravated by the manner in which it was done):  $25,000

Punitive damages:  $25,000         

With pre-judgement interest and her costs for the action, the full award is $141,708.03.
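
For the arithmetic-minded, here is my own breakdown of that figure (the decision itself gives only the total):

  General + aggravated + punitive damages: $50,000 + $25,000 + $25,000 = $100,000
  Pre-judgement interest and costs: $141,708.03 − $100,000 = $41,708.03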

Is it enough to make up for the violation?  No, but I can’t imagine any amount would be.  I hope it’s enough to make the next malicious ex think twice before engaging in this type of behaviour.

On top of that, she gets validation.

She gets recognition that NOTHING she did was inappropriate or offensive.

The judge commends her for earning her undergraduate degree despite these events, as well as for her courage and resolve in pursuing the remedies to which she is entitled. Further, he lets her know that through that courage, she has set a precedent that will allow others who are similarly victimized to seek recourse.

Where and When is it Reasonable to Expect Your Messages to be Private (and what protection does it offer anyway)?

When you text message someone, do you have a reasonable expectation of privacy in that message?

R. v. Pelucco was a 2015 BC Court of Appeal decision involving a warrantless search of text messages found in a cell phone.  The question was whether the sender had a reasonable expectation of privacy in those messages.  The majority concluded that when legal and social norms were applied, a sender would ordinarily have a reasonable expectation that the messages would remain private.  Justice Groberman, writing for the majority, concluded that the lack of control once the message had been sent was a relevant factor in assessing objective reasonableness, but not a determinative one.

I’ve written about this decision previously here.

What about when you message someone privately using an online platform? 

In R v Craig, released 11 April 2016, police obtained private online messages between Mr. Craig, E.V., and several of E.V.’s friends from Nexopia, a Canada-based social network site targeted at teens. 

Mr. Craig (22) and E.V. (13) originally met via (private) messaging each other on Nexopia.  Messaging continued, as did offline meetings that ultimately resulted in him (illegally) providing her with alcohol and having sexual relations with her (to which she could not legally consent, being 13).  When two girls from E.V.’s school overheard a conversation with E.V. regarding her sexual encounter with Mr. Craig, they reported it to a school counsellor. The counsellor subsequently called the police, and the police investigation commenced. Mr. Craig was charged and convicted of sexual touching of a person under the age of 16, sexual assault, and internet luring (communicating with a person under the age of 16 years for the purpose of facilitating the commission of an offence under s. 151 with that person). 

When the police interviewed E.V., she provided Mr. Craig’s name and logged on to her Nexopia account to print out messages between them, including a photo of Mr. Craig.    A friend of E.V. also provided pages from her own account containing messages with Mr. Craig in which he admitted to having sex with E.V. 

Police obtained a search warrant for messages on the Nexopia servers under the usernames of E.V., several of her friends, and Mr. Craig.  A number of the documents seized from Nexopia were not disclosed to the defence, pursuant to a Criminal Code provision presumptively forbidding production of complainant or witness records when the charge is sexual assault or sexual interference.  A “record” is one that contains “personal information for which there is a reasonable expectation of privacy.” 

Craig argued that there was no reasonable expectation of privacy in those messages -- that the messages were sent, received and stored on Nexopia’s servers, and thus had never been private.  Accordingly, the defence should be able to access them. 

The threshold for reasonable expectation was articulated as the expectations of the sender at the time the message was sent.  In this case, the messages were “personal communications between friends and confidantes, and were not intended for wider circulation beyond the small circle of friends.”  Accordingly, there was a reasonable expectation of privacy in the messages and they were protected from having to be disclosed to Mr. Craig.

Mr. Craig then sought to assert his own reasonable expectation of privacy over (some of) the Nexopia messages.  The trial judge disagreed, finding that Mr. Craig had no reasonable expectation of privacy in the messages, even those he had authored and sent himself, because he had no control over them after sending. 

On appeal, the “control” test was rejected:

While recognizing that electronic surveillance is a particularly serious invasion of privacy, the reasoning is of assistance in this case. Millions, if not billions, of emails and “messages” are sent and received each day all over the world. Email has become the primary method of communication. When an email is sent, one knows it can be forwarded with ease, printed and circulated, or given to the authorities by the recipient. But it does not follow, in my view, that the sender is deprived of all reasonable expectation of privacy. I will discuss this further below. To find that is the case would permit the authorities to seize emails, without prior judicial authorization, from recipients to investigate crime or simply satisfy their curiosity. In my view, the analogy between seizing emails and surreptitious recordings is valid to this extent. [para 63]

Instead, the Court of Appeal found that Mr. Craig DID have an objectively reasonable expectation of privacy in the messages seized by the police, on the basis of both:

  • An emerging Canadian norm of recognizing an expectation of privacy in information given to third parties;

  • The nature of the information itself, since it exposed intimate details of his lifestyle, personal choices, and identifying information.

(The appeal went on to find that not only did Mr. Craig have an expectation of privacy in the messages, but that his s. 8 Charter rights against unreasonable search and seizure had been violated.  HOWEVER, the violation was not egregious or intentional, it had no or negligible impact on Mr. Craig’s interests, and accordingly admission of the messages into evidence would not bring the administration of justice into disrepute.  In fact, the Court noted, the case dealt with serious charges involving offences against a young teenager, and this too weighed in favour of admitting the evidence.  The appeal was dismissed, with the Court of Appeal finding that there had been no substantial wrong or miscarriage of justice at trial.) 

So there you have it:

Yes, you may well have a reasonable expectation of privacy in messages you’ve sent to others, either via text or online platforms. 

Remember, though, that doesn’t mean they stay private – it only means that they (and, by extension, you and your informational dignity and autonomy) must be treated in accordance with Charter protections.