The Missing (and Missed) Context in R v. Elliott

The decision in the Internet harassment case R v. Elliott is a great example of the importance of context in shaping and understanding online interactions – and of what can happen when technology is conflated with context.

Background

The circumstances here are complex, originating in discussions in a hashtagged local political group.  Ms. Guthrie and Mr. Elliott had previously met in person to discuss the possibility of his designing a poster for an event, though ultimately he was not asked to perform the task.  The conflict between Elliott and Guthrie appears to have begun after a web-based game was released—the object of which was to punch an image of “Feminist Frequency” media critic Anita Sarkeesian until bruised and bloody. They differed on the controversial decision to expose the identity of the anonymous game designer, subjecting him to the opprobrium of both his local community and the online feminist community at large.  From there, relations between Elliott and many members of the feminist community became increasingly fractured and angry.

To show criminal harassment in this case requires establishing:

  • That Elliott engaged in conduct – repeated communication via Twitter;
  • That the conduct caused Ms. Guthrie and Ms. Reilly to feel harassed;
  • That he was aware of their harassment;
  • That the harassment caused them to fear for their safety; and
  • That their fear was reasonable in all the circumstances.

Breaking Down the Decision

In his long and carefully reasoned decision, Justice Knazan begins with an examination of the technological context in which the case is rooted – the Twitter platform.  The background on Twitter does not come from technical specs – rather, it is derived from the evidence of those involved in the case.  A glossary of Twitter terms is provided.  Next, the different means of communication on the platform are elucidated – a direct message, a public message, replying to a message, mentioning someone in a message, retweeting, etc. – along with the (potential) audience for each.
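
For readers who don’t live on Twitter, the distinctions the decision draws can be boiled down to a mapping from communication mode to default audience. Here is a minimal illustrative sketch of that mapping – a simplification of my own, not the judgment’s findings, and Twitter’s actual visibility rules have changed over the years:

```python
from enum import Enum

class Audience(Enum):
    """Rough default audiences for Twitter communication modes."""
    RECIPIENT_ONLY = "only the recipient"
    FOLLOWERS = "the sender's followers (and anyone viewing the profile)"
    SHARED_FOLLOWERS = "timelines of users who follow both parties"
    ANYONE_SEARCHING = "anyone searching for the handle or hashtag"

# Illustrative mapping only -- a simplification, not the court's findings.
MODES = {
    "direct message": Audience.RECIPIENT_ONLY,
    "public tweet": Audience.FOLLOWERS,
    "reply": Audience.SHARED_FOLLOWERS,
    "mention": Audience.ANYONE_SEARCHING,
    "retweet": Audience.FOLLOWERS,
}

for mode, audience in MODES.items():
    print(f"{mode}: reaches {audience.value}")
```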

The court then discussed the difficulty of examining the communication in its entirety.  The communications took place over an extended period of time, and the complainants did not recall the exact text of each individual tweet.  Using software and the Twitter platform itself, the police detective was able to assemble a collection of tweets, but there were issues.

The court did contemplate the importance of context, with the judge writing that

I cannot fully understand all the circumstances within the meaning of s. 264 with respect to the proven tweets without seeing the tweets that precede them on the printed-out page. When other tweets appear on the page of the printed tweets, which are in the end the exact product of the Sysomos search, they are then as much a part of the evidence as the original tweets. Their provenance and date are proven just as much as the main tweet that led to the result that Det. Bangild obtained. There is no difference between them and the searched-for tweets, even if no witness has confirmed that they were sent.

Using this expansive view, the court determines that conduct on Twitter can constitute communication for the purposes of this case.

Ms. Guthrie

Turning to the charges, the court first examines Mr. Elliott’s interactions with Ms. Guthrie in order to determine whether the requirements of the charge are met.

  • Yes, there was repeated communication
  • Yes, the repeated communication made her feel harassed
  • No, Mr. Elliott did not know that she was harassed BUT he was reckless, in that he was aware of a risk of harassment yet continued his behaviour despite it; and
  • Ms. Guthrie was fearful

It is on the issue of whether that fear was reasonable in all the circumstances that Ms. Guthrie’s claim fails.  This despite the judge prefacing his analysis with the recognition that:

That Ms. Guthrie is a woman is relevant. Crown counsel submits that “a reasonable person, especially, a woman, would find Mr. Elliott’s tweets and behaviour concerning and scary.” Women are vulnerable to violence and harassment by men, and Ms. Guthrie advocates for understanding and change. I must judge the reasonableness of Ms. Guthrie’s fear in all the circumstances and on the evidence

nevertheless, the circumstances are misapplied.  Looking at the range of tweets and conversation trails, Ms. Guthrie’s active decision to “block” Mr. Elliott on Twitter, her request that he stop contacting her, and his continued participation in group conversations in which she was involved, the judge suggests that while under normal circumstances this would be sufficient to establish reasonableness, in this case it is not.  Instead, focussing on the technological context and reviewing the history between Elliott and Guthrie, he (mis)interprets her anger and frustration about Elliott’s participation on certain hashtags, and from this concludes that Ms. Guthrie’s belief that Elliott’s activities indicated an obsession with her was unreasonable in the circumstances.

With no tweets that explicitly contain sexual or violent threats, and no tweets that the judge interprets as demonstrating the irrationality perceived by Ms. Guthrie, the charge of criminal harassment of Ms. Guthrie fails, because the fear she experienced was not considered reasonable in all the circumstances.

Ms. Reilly

Mr. Elliott communicated directly with Ms. Reilly. He identified her explicitly when publicly complaining about her request that he stop following her feed, called her “fucking nuts”, and told her that “This is Twitter” and that her request that he stop replying to her posts offended his “sensibilities”. He replied to her retweets by asking how she would feel if he was so delusional as to ask her not to retweet him. He also communicated with Ms. Reilly indirectly during the period, by mentioning her in other tweets.

In this situation, the court found that:

  • Yes, there was repeated communication
  • Yes, the repeated communication made her feel harassed; and
  • Yes, Mr. Elliott did know that she was harassed

Here, it was the requirement that Ms. Reilly be fearful as a result of the harassment that derailed the case. 

A key feature in Ms. Reilly’s interactions with Mr. Elliott was fear for her physical safety – at one point he made reference to the public location where she and others were meeting.  This led her to fear that he was at that location, and she subsequently checked the room to ensure he was not.  The incident created an ongoing concern about his ability to monitor the group’s meetings, and about physical safety at them.  Ms. Reilly had even filed a complaint with Twitter in September 2012, stating among other things that “…I am part of a ladies group that meets Mondays, and he is ‘tweet eavesdropping/stalking’ this group, which also leads many of us to be concerned for our safety in real life, as this has now begun to feel like a real life threat.”

The judge appears to have been put off by her behaviour on Twitter – Justice Knazan notes that “Ms. Reilly’s retweeting of forceful, insulting, unconfirmed and ultimately inaccurate attacks [against Elliott] suggesting pedophilia – combined with her tentative, hypothetical concerns that he could possibly move from online to offline harassment, and her knowledge that he never came to the Cadillac Lounge and never again referred to her whereabouts – raises doubt in my mind to whether she was afraid of Mr. Elliott.”

It was this doubt, ultimately, that compromised this charge – the judge was not satisfied beyond a reasonable doubt that the communications resulted in Ms. Reilly’s fearing for her safety. 

The Overlooked Context

Now let’s look at the larger issues of context. 

Early in the decision, Justice Knazan quotes from the McLachlin/L’Heureux-Dubé minority concurring judgement in R. v. R.D.S.:

41     It is axiomatic that all cases litigated before judges are, to a greater or lesser degree, complex. There is more to a case than who did what to whom, and the questions of fact and law to be determined in any given case do not arise in a vacuum. Rather, they are the consequence of numerous factors, influenced by the innumerable forces which impact on them in a particular context. Judges, acting as finders of fact, must inquire into those forces. In short, they must be aware of the context in which the alleged crime occurred.

42     Judicial inquiry into the factual, social and psychological context within which litigation arises is not unusual. Rather, a conscious, contextual inquiry has become an accepted step towards judicial impartiality. In that regard, Professor Jennifer Nedelsky's "Embodied Diversity and the Challenges to Law" (1997), 42 McGill L.J. 91, at p. 107, offers the following comment:

What makes it possible for us to genuinely judge, to move beyond our private idiosyncracies and preferences, is our capacity to achieve an "enlargement of mind". We do this by taking different perspectives into account. This is the path out of the blindness of our subjective private conditions. The more views we are able to take into account, the less likely we are to be locked into one perspective .... It is the capacity for "enlargement of mind" that makes autonomous, impartial judgment possible.

So what is this larger context?

Both Ms. Guthrie and Ms. Reilly were active in online and offline feminist communities.  Mr. Elliott too was politically active – remember that the original connection here was his volunteering to assist in designing something for an event in which the women were involved.

This activism and these communities are the context in which they live and work.  And they are the context within which the experiences and fears of Ms. Guthrie and Ms. Reilly must be understood.

There is no indication in the decision that these issues were put forward by the Crown.  Perhaps they were.  Either way, they certainly do not seem to have been factored into “all the circumstances” within which these charges were analyzed and assessed.

The focus on the Twitter platform and the desire to properly parse and understand the technological context within which these charges arose is admirable.  But the technological is not, and is never, the only context within which interactions take place.  By ignoring the broader social and political context, this decision misses an important opportunity to address a growing problem—how to balance freedom of speech with women’s right to participate fully, both online and offline, without fear.

solitary, poor, nasty, brutish and short

Reviewing data from the Family Online Safety Institute led Larry Magid to contemplate cyberbullying, among other things:

But what's even more disturbing is that young adults -- mostly digital natives -- "are more likely than any other demographic group to experience online harassment." Nearly two-thirds of that 18-29 age-group "have been the target of at least one of the six elements of harassment that were queried in the survey. Among those 18-24, the proportion is 70 percent." It's even worse for young women.

You just *know* that the reason it’s disturbing is that it happens online…whereas I’d bet that these numbers aren’t any different from the numbers of kids who’ve been bullied and ostracized and harassed for years – it’s just that when it happens online, suddenly someone wants to know about it.

I’ll admit, the online aspect does make it more pervasive – if I hadn’t been able to go home at night, to get away at least in part….well, I doubt I’d have made it to adulthood.  With social media and other tools, the harassment can reach out and smack you, no matter where you are.  There are no safe spaces anymore. 

But let’s not pretend this is new or that it has its genesis in technology – up to 70% of 18-24-year-olds have experienced harassment?  Bet the numbers are the same or worse in high school.

Hobbes described the life of man as “solitary, poor, nasty, brutish, and short.”  For far too many of us that describes adolescence and young adulthood spent with our “peers”….

 

Peeple: the Commodification of Social Control?

From www.forthepeeple.com

Meet Peeple

We are a concept that has never been done before in a digital space that will allow you to really see how you show up in this world as seen through the eyes of your network.

Peeple is an app that allows you to rate and comment about the people you interact with in your daily lives on the following three categories: personal, professional, and dating.

Peeple will enhance your online reputation for access to better quality networks, top job opportunities, and promote more informed decision making about people.

My first interest in reputation in online spaces came from a particular kind of knowledge – knowledge that any girl who went to high school has – that “reputation” and “dating” are never a good combination. Such evaluations are never as objective or truthful as they purport to be, and never without a cost to those who are being assessed/rated.  Maybe everyone knows this, but I’m inclined to think that some of us—those who by virtue of our Otherness are inevitably the object of critical review—internalize that knowledge at a much deeper level.

Given this, I confess that I smiled ruefully when I saw a photo of the two founders of Peeple—the self-described “positivity app launching in November 2015” that purports to enable users to rank people the way other apps (think Yelp) rank restaurants and, say, public restrooms. Peeple’s founders are blondish, youngish, and conventionally attractive.

Nicole McCullough and Julia Cordray

I’m not noting their appearance to be dismissive…but I am suggesting (fairly or not) that those who are least likely to have been socially marginalized and ostracized are also perhaps most likely to believe that an app designed to rate and comment on other people could “spread love and positivity.”

Frenzied media coverage has raised many of the most obvious problems with this business idea, including:

  • Users can set up profiles for others without the consent of the person being rated
  • Ratings are inherently subjective
  • There are no credible safeguards for accuracy or protections from bias
  • It will be up to a combination of automated software and human site administrators to determine whether feedback is “positive” or “negative”, whether to publish it or remove it, etc.
  • It presumes, without evidence, that crowd-sourced opinions are reliable
  • The fundamental concept is an invasion of privacy and a threat to reputation
  • The approach objectifies human beings and commoditizes interpersonal relationships.

These are all important concerns, but I’d like to take a step back and look at the larger potential impact of Peeple: creating a state of perpetual surveillance that itself enforces and reinforces particular (mainstream) expectations of behaviour.

This project brings to mind the Panopticon—an architectural concept for institutional buildings, designed so that inmates/inhabitants can be observed from a central point without knowing whether they are being watched at any given moment. It’s based on philosopher Jeremy Bentham’s assertion that “the more constantly the persons to be inspected are under the eyes of the persons who should inspect them, the more perfectly will the purpose of the establishment have been attained. Ideal perfection, if that were the object, would require that each person should actually be in that predicament, during every instant of time.” (Jeremy Bentham, The Panopticon Writings, ed. Miran Božovič, Letter I).

Philosopher Michel Foucault later elaborated upon Bentham’s notion of the Panopticon, seeing in it a metaphor for the exercise of power in modern societies.  He explains that “…it arranges things in such a way that the exercise of power is not added on from the outside, like a rigid, heavy constraint, to the functions it invests, but is so subtly present in them as to increase their efficiency by itself increasing its own points of contact.” (See Michel Foucault, Discipline and Punish: The Birth of the Prison.)

What does this have to do with Peeple?  What overarching control could there be when the app itself clearly states that it is simply sharing feedback?  But these reports aren’t happening in a vacuum – inevitably, ratings are made with reference to a shared community standard – setting and reinforcing community norms, and reviewing whether or not individuals have appropriately met or performed those standards.

Traditionally, surveillance within the panopticon was intended to impose and enforce chosen norms/rules/behaviours.  Its goal was the production of “docile bodies” – to remove the need for policing of behaviour via force, replacing it instead with the creation of a state of vulnerability induced by the perception of perpetual visibility that resulted in individuals self-policing their own behaviours towards the desired outcome. 

With Peeple, we run that same risk of creating docile bodies and enforcing desired behaviours – knowing that information is collected and shared will (perhaps inevitably) influence the behaviour of an individual who is subject to those reviews. Anyone who wants to continue active and productive participation in a community must be aware of this information repository and the standards that it maintains and enforces.  The collection and sharing of reputation becomes in essence a form of social control. 

Worryingly, in the case of Peeple, it’s a form of social control that is both privately administered and inherently commodified.

Speaking of the Right to be Forgotten, Could We Please Forget This Fearmongering?

In the wake of the original Right to be Forgotten (RTBF) decision, citizens had the opportunity to apply to Google for removal from its search index of information that was inadequate, irrelevant, excessive and/or not in the public interest.  Google says that since the decision it has received more than 250,000 requests, and that it has concurred with the request and acted upon it in 41.6% of cases.

In France, even where Google accepted/approved a request for delisting, it implemented the delisting only on specific geographical extensions of the search engine – primarily .fr (France), although in some cases other European extensions were included.  This strategy resulted in a duality where information that had been excluded from some search engine results was still available via Google.com and other geographic extensions.  Becoming aware of this, the President of the CNIL (France’s data protection authority) formally gave notice to Google that it must delist information on all of its search engine domains.  In July 2015 Google filed an appeal of the order, citing the critiques that have become all-too-familiar – claiming that to do so would amount to censorship, as well as damaging the public’s right to information.

This week, on 21 September 2015, the President of the CNIL rejected Google’s appeal, for a number of reasons:

  • In order to be meaningful and consistent with the right as recognized, a delisting must be implemented on all extensions.  It is too easy to circumvent an RTBF that applies only to some extensions, which is inconsistent with the RTBF and creates a troubling situation where informational self-determination is a variable right;

  • Rejecting the conflation of RTBF with information deletion, the President emphasized that delisting does NOT delete information from the internet.  Even while removed from search listings, the information remains directly accessible on the source website.

  • The presumption that the public interest is inherently damaged fails to acknowledge that the public interest is considered in the determination of whether to grant a particular request.  RTBF is not an absolute right – it requires a balancing of the interest of the individual against the public’s right to information; and

  • This is not a case where France is attempting to impose French law universally – rather, CNIL “simply requests full observance of European legislation by non European players offering their services in Europe.”

With the refusal of its (informal) appeal, Google is now required to comply with the original CNIL order.  Failure to do so will result in fines that begin in the $300,000 range but could rise as high as 2-5% of Google’s global operating costs.


At what point is it no longer reasonable to expect any privacy at all?

Legal tests around privacy speak of a “reasonable expectation” of privacy.  As we become more and more aware of the multiple forms that tracking takes and of how many of them we engage with in a given day, how do we retain a reasonable expectation of privacy?  On top of that, government surveillance and private sector data mining are increasing and intersecting, and this too raises the question of the reasonableness of any expectation of privacy.  Is there some kind of knowledge saturation point after which holding any expectation of privacy is presumptively unreasonable?

The Case:  R. v. Pelucco, 2015 BCCA 370 (CanLII), http://canlii.ca/t/gkrd1

The accused was arranging, through text messages, to sell one kilogram of cocaine to Mr. Guray, when the police arrested Mr. Guray and seized his cellphone. Posing as Mr. Guray, they used his cellphone to arrange via text message to meet the accused and then arrested him. Police found drugs in his vehicle, and obtained a search warrant to search his residence, where more drugs were found. He was charged with three drug offences. At trial, he successfully applied to have all evidence excluded, contending that Mr. Guray had been unlawfully arrested, and that the search of the text messages on Mr. Guray’s cellphone violated his own right to be secure against unreasonable search and seizure. The Crown appealed, arguing that because the accused had no control over Mr. Guray’s cellphone, he had no reasonable expectation of privacy in the text messages from him that were recorded on it.

Justice Goepel, in his dissenting opinion in R. v. Pelucco, takes on Mr. Pelucco’s expectation of privacy in the text messages that he had sent, as well as the larger question of whether any expectation of privacy can remain reasonable in such circumstances.  Shutting down the suggestion that increased public knowledge and experience should somehow negate the reasonable expectation of privacy, he states that “the expectation of privacy is not meant to be a factual description of whether Canadians expect to be free from interference from the state, such that the state could reduce subjective expectations of privacy solely through adopting sufficiently invasive techniques.”  Building on this, and citing Tessling and Patrick, he puts forward

…the self-evident principle that the government cannot create an intrusive spying regime, diminishing Canadians’ expectation of privacy as a result, and claim that its conduct does not offend the constitution because Canadians, due to the serious invasion of privacy, no longer expect their affairs to remain private from government agents.

Admittedly these points are made in dissent, but this analysis is not key to his dissent, nor does it stand in opposition to the majority decision.  Although the dissent and majority do ultimately differ in their interpretations of the reasonableness of Mr. Pelucco’s expectation of privacy, that difference is not grounded in a limiting of reasonableness due to common expectations. 

Both majority and dissent, then, agree that Mr. Pelucco himself had a (subjective) expectation of privacy, although they differ on whether his individual belief was in fact objectively reasonable – the majority working from the assumption that a sender will ordinarily have a reasonable expectation that a text message will remain private in the hands of its recipient, while the dissent concluded instead that once the text message was received by Mr. Guray, Mr. Pelucco lost the ability to control what was done with or to it, and therefore could not reasonably have had an expectation of privacy.

In the end, the case is resolved as follows: (1) it is objectively reasonable that a person would expect that a text message that they sent will remain private in the hands of its recipient; (2) in the actual facts of this situation there is nothing unusual or remarkable about the content or circumstances of the text conversation that might contradict or contraindicate this expectation of privacy; and thus (3) the expectation remains.

I am pleased that the criminal aspect of this case has not swayed the analysis – that the recognition in Wong that whether or not persons have a reasonable expectation of privacy does not depend on whether they were engaged in illegal activities continues to hold.  Pleased, too, at this recent statement and the recognition of another principle of interpretation – that State surveillance and intrusion cannot cloak itself in constitutionality merely by being so prevalent that it is held to diminish or erase any individual’s reasonable expectation of privacy.

The Balance Inherent in the Right to be Forgotten

Trust me, I’d love to stop writing about the RTBF.  I’m not even sure how it came to take up so much real estate on this blog and in media generally. Especially since, as I’ve said before, it isn’t really anything new!  Nevertheless, the RTBF continues to rankle as the original decision reverberates through search engine companies and various countries around the globe.

A New York Times article of 5 August 2015 sets out the original decision and examines the changes it has wrought, positing that the RTBF will ultimately spread beyond the EU’s boundaries and become normalized in multiple jurisdictions, including the US.

Emma Llansó, a free expression scholar at the Center for Democracy and Technology, is quoted criticizing the RTBF within the context of the US saying:

“When we’re talking about a broadly scoped right to be forgotten that’s about altering the historical record or making information that was lawfully public no longer accessible to people, I don’t see a way to square that with a fundamental right to access to information”

The article provides strong arguments on the other side as well. Marc Rotenberg of the Electronic Privacy Information Center says that “global implementation of the fundamental right to privacy on the Internet would be a spectacular achievement” and a positive development for users.  Asked about the allegation that freedom of speech is compromised by the removal of information, he notes that there are ways to limit access to private information that do not conflict with free speech, and that in fact Google already has a process for global removal of some identifiable private information – bank account numbers, social security numbers, and sexually explicit images uploaded without the subject’s consent (“revenge porn”) – that hasn’t attracted the same concerns.

As for concerns about international implementation, Jonathan Zittrain of the Berkman Center for Internet and Society at Harvard reminds us that this too is already in practice—when Google receives a takedown notice for linking to copyright-infringing content, it removes those links from all of its sites across the world.

PERSPECTIVE

In any discussion of this issue it’s important to understand that the RTBF was not intended to be an absolute right – rather, it is inherently a process of balancing competing interests.  Indeed, after the original RTBF decision, Google instituted a process by which individuals could make RTBF requests.  Google’s own data shows that since the process was instituted in May 2014, roughly 41 percent of the one million requests it has received have been successful.  It is also worth noting that the information “removed” in successful requests doesn’t disappear – rather, the original source is no longer indexed by Google or shown in search engine results.
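
Because the delisting/deletion distinction gets blurred so often, a minimal sketch may help (all names and data here are invented): delisting removes the pointer to a page from the results for a given query, while the page itself is untouched.

```python
# Toy search index: delisting removes the pointer from results for a
# query; the source page itself is untouched. Names and data invented.

web_pages = {
    "example.org/old-story": "An outdated story about J. Doe...",
}

search_index = {
    "j. doe": ["example.org/old-story"],
}

def delist(query: str, url: str) -> None:
    """Honour an RTBF request: drop one URL from the results for one query."""
    search_index[query] = [u for u in search_index.get(query, []) if u != url]

delist("j. doe", "example.org/old-story")

print(search_index["j. doe"])              # [] -- no longer findable via search
print(web_pages["example.org/old-story"])  # the source content still exists
```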

A recent decision in British Columbia shows that, although the RTBF hasn’t been formally implemented in Canada, the balancing of rights is active and seems to be working out just fine.

Niemela v. Malamas arises from a situation where former clients allegedly made defamatory comments about Mr. Niemela on a variety of sites.  Although the comments ceased, Mr. Niemela found that his law practice was affected by these comments.  Accordingly, he began an action against those who he believed had made the comments and the sites upon which the comments were published.  He was successful in this claim, and thus obtained injunctions requiring the removal of 146 posts from various sites. 

Where it becomes particularly interesting, however, is that he also filed suit against Google for publishing defamatory statements about him because individuals gained access to the information through being able to view snippets of the comments in the Google search results. Google asked that this action be dismissed, and ultimately it was—because the court determined that on all the facts the “snippets” were the product of an algorithm and that there was nothing to indicate that Google was actively involved in publishing the statements.

It doesn’t end there, however.  When the first suit concluded, Google voluntarily removed links to the 146 sites from its Canadian search engine results.  Mr. Niemela, however, was not satisfied with this – he wanted Google to remove the links from its search engine results worldwide. 

In considering this request, the court set out a three-part test that Mr. Niemela must meet:

  1. that there is strong evidence that the words are defamatory;
  2. that a failure to grant the injunction will result in irreparable harm; and
  3. that the balance of convenience favours granting the injunction.

On the facts, it was the opinion of the court that while there was strong evidence of defamation, the case failed at the second step: (i) the majority of searches on Mr. Niemela were made on Google Canada; (ii) it was not obvious that the damage Mr. Niemela alleged was caused by the defamatory comments at all; and (iii) such an order would not be internationally enforceable, since it runs against US policy on defamation and freedom of speech.

This is a great example of how to approach the RTBF – all the facts are considered in context to assess the role of the search engine results in perpetuating the injury, and to ascertain whether ON BALANCE an order to remove the information (and the extent of such removal) is warranted.

In the end, what the RTBF requires is a balancing of the benefits of online information availability against the privacy of individuals. 

Really – does this strike you as compromising an important public record?  As undercutting freedom of expression?  Even as facilitating user vanity at the expense of public information?  Or is it the kind of approach that should be built into the process of digitizing and making public information about individuals, in order to ensure that in our excitement about technological capacity we don’t compromise individual autonomy?

 

not gospel -- just information

The Internet has, in recent years, become a vast repository for booking photographs, in part because of large websites like mugshots.com, which post them and then charge hundreds of dollars for their removal.

Our problem is not (and has never been, IMO) that information is available – even perpetually available – online.  Our problem instead is that we (generically) cannot seem to wrap our heads around the idea that just because something is on the internet doesn’t make it real, or true, or even – should the other two criteria be met – relevant.  It’s just information.

Arguments about free speech, about the right to know, about the importance of generating discussion get juxtaposed with those about the presumption of innocence, about the right to due process and concerns about proportionality and community response. 

In other arenas, we’ve had discussions about whether there is in fact a ‘Right to be Forgotten’.  Whether individuals should have the right to request that information about themselves that is outdated, irrelevant and (potentially) damaging be removed from search engine results.  In doing so, are we altering the public record to our detriment?  

The problem, it seems to me, is not solved by deciding whether or not information should be originally posted, nor whether it should be removed in some kind of timely fashion.  Rather, the problem is with the presumption that any and every piece of information that can be compiled is equally important or, in fact, important at all.  In non-virtual spaces we have no trouble weighting the value to be placed on a given piece of information – we look at context, at the when and how and where, and we understand that something said as a frustrated 14-year-old may be less indicative of that person’s views, personality or competence than a published position paper or even an adult statement in similar circumstances.

Why, then, are we unable to apply that same judicious weighting to information we find online?  Why are we advising students and young adults that they may need to revisit blog posts from years ago and “amend” them or add some kind of statement qualifying or even denying the statements therein contained?  Why the fear mongering, the concern that any and every thing about you that has ever been put online (by you or anyone else) must be mercilessly monitored lest it disqualify you from a job or other opportunity?  For that matter, why is it ok for companies to have business models predicated on encouraging and then exploiting personal statements, yet we condemn individuals for making those personal statements?  Not to mention a business model that generates revenue by posting information online and then charging for its removal.

Let’s absolutely have the conversations about this information – about whether and when to put information up or to remove it, about who should have access to information and how they should be able to access it.   Let us also, however, stop treating any and all information as being of equal (and heavy) weight.  Let us return to looking at information critically, assessing its worth before we use it to make decisions.  Let’s actively engage with information rather than passively consuming it.  And instead of lecturing people and insisting that they have some kind of quasi-moral informational self-determination obligation, let us instead re-view what and how that information is being used, not to mention commodified.

 

My Home Is My Castle – Unless You’re Making Art

Thinking over the recent finding concerning a photographer who took a year’s worth of pictures, through their window, of the family who lived in the building across from him.  This was done surreptitiously, with no consent to or knowledge of the photography taking place – in fact, the Fosters only found out about the series when Arne Svenson exhibited “The Neighbours” in a local gallery and they were recognized.

Seems simple – your home is your castle, this guy was taking pictures of them in their private home without their knowledge or consent – but the Appellate Court found that this did not constitute either stalking or an invasion of privacy.  Why?  Because it is “art”.

But “the invasion of privacy of one’s home that took place here is not actionable ... because the defendant’s use of the images in question constituted art work” and the images were not used for advertising or in trade.

Is an invasion of privacy determined by the uses to which the products of that invasion are put?  I’m willing to concede that the use to which the products of an invasion of privacy are put could/should certainly be factored into a determination of damages or remedy.  But surely the invasion, and the uses to which its products are put, should be addressed separately?

This is a US case, so it’s hard to know how a Canadian court would deal with the same situation.

home is castle.jpg

I would hope that the issues would be dealt with separately – first a consideration of whether there has been an invasion of privacy in collecting the information, and second an examination of the use/disclosure of the information. 

In examining the collection of information – the year of taking candid photos of their life inside the apartment – I would hope that the focus of the inquiry would be on the expectation of privacy of the Foster family.  There can be no question that they believed themselves to be in the privacy of their own home – the surreptitious photography of their actions is unquestionably outside their expectations.  We might even look to the Supreme Court of Canada’s approach in R. v. Clark for clarity.  That case, which dealt with a man masturbating at the window of his illuminated living room, explored whether acts committed in one’s own home could constitute an act “in a public place” by reason of visibility.  The Supreme Court of Canada concluded that a “public place” was to be defined as “any place to which the public have access as of right or by invitation, express or implied”.  “Access” means “the right or opportunity to reach or use or visit”, and not the ability of those who are neither entitled nor invited to enter a place to see or hear from the outside, through uncovered windows or open doors, what is transpiring within.  Regardless of whether the photographer was able to see inside the Fosters’ apartment or not, it is clearly a private space upon which he intruded.


Cui Bono? Can Profile Amalgamation Benefit Users? Or Merely Profit Companies?

A recent piece in Medium asks the question Why Don’t Recommendations Look At the Bigger Picture?  Why aren’t larger sites using cross-category marketing?  When a user links a Spotify account to a Facebook account, why doesn’t this lead to Spotify making recommendations based on comments, links, and other information shared on Facebook?  As the author explains “What I find galling is not that I have so much information about myself on the web — I put it all out there — but that no one can use it in any meaningful, smart way to serve me good content.”

This point of view is very much in line with the presumptions of targeted marketing – the notion that getting more and better recommendations will benefit users as well as platforms.  She brings up the incident where Target inadvertently “outed” a pregnant teen to her father and dismisses its relevance just as quickly.  She considers this incident to represent a marketing “failure” by the company rather than looking at the issue of privacy and/or protecting personal data. 

Maybe before we complain about companies not commodifying our preferences and profiles *competently*, we should take a minute to consider whether they ought to do it *at all*.

Should all our profiles be amalgamated and mined? 

If one links a Spotify account to a Facebook account, surely that provides implicit consent to linking those profiles for mining purposes?  Perhaps….but can either organization be sure whether the user fully understood to what they were consenting? 

This assumes of course that what is being collected and mined is, in fact, the user’s own information.  If others share an Amazon or iTunes account, how might that skew a profile? With whom might that profile be shared?  What (erroneous) inferences could be drawn from such a profile, and how might that impact the user?  Would we even know about the impacts—let alone their origins—such that errors could be corrected?
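
To make the shared-account worry concrete, here is a toy sketch (data invented for illustration) of how a naive profiler that assumes one account means one person ends up inferring tastes for a user who doesn’t exist:

```python
# Toy profiler: two people share one account, but the profiler assumes
# a single user. All data invented for illustration.

listening_history = [
    ("kids-songs", "the child"),
    ("death-metal", "the parent"),
    ("kids-songs", "the child"),
]

profile = {}
for genre, _actual_listener in listening_history:  # the listener is ignored
    profile[genre] = profile.get(genre, 0) + 1

# The profiler happily infers one person who loves both genres.
inferred = sorted(profile, key=profile.get, reverse=True)
print(f"Inferred tastes for 'the user': {inferred}")
# Any decision made from this profile (ads, pricing, risk scoring) now
# rests on an inference about a person who doesn't exist.
```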

Profiles can also be generated using information gleaned outside of “authorized” sharing.  On 25 March the FTC ruled on the Jerk.com case, a website that presented users with personal profiles of themselves labeled “Jerk” or “Not a Jerk,” purportedly posted by other users. The FTC found that, in fact, the information in the profiles was harvested from Facebook and the “jerk” labels were added by site personnel.  When profiles are being amalgamated for the purposes of mining, how can we be assured that this kind of falsely “user-generated” information will not be included?

We all love getting a gift or recommendation from someone who just “gets” us (as demonstrated by an astute choice). But…what my mother should “get” about me is not the same as what my lover “gets” about me or what friends “get”.  Even within the category of friends, different aspects of myself are emphasized in certain contexts, and “getting” me will vary accordingly.  I am NOT homogenous and neither are my preferences or my profiles. 

Would it be convenient to have astute recommendations offered to me?  Again, perhaps…but I can’t help but feel that it would profit the company making the recommendations far more than it would benefit me.  And given that, I’d rather avoid the amalgamation and mining of profiles.  This is definitely a situation where the costs and risks seem to outweigh the (negligible) benefits.


Customer Service, Extortion, and Reputation: KlearGear.com

good reputation.jpg

 

UPDATE:  Protecting Brand Image or Gaming the System? Consumer 'Gag' Contracts in an Age of Crowdsourced Ratings and Reviews by Lucille M. Ponte

The growing power of reputation is indisputable.  As society becomes more dispersed – geographically and spatially (expanding into online spaces) – we increasingly deal with others we do not know well, or sometimes do not know at all. In order to facilitate these dealings and build the trust necessary for e-commerce and other social, economic and political interactions, that uncertainty between strangers must be addressed. It is more than authentication of identity; it’s “I don’t want to know who you are so much as I want to know how I should treat you, and whether I should trust you.”

In these relationships information about individuals, organizations and institutions that is available online becomes extremely important. 

The various roles and resonances of reputation are being played out very publicly in the case of KlearGear.com—a dismal tale of poor customer service, faux consumer advocacy, and the power of erroneous credit reports making life miserable for consumers.

Recently, news sites and bloggers have been buzzing about KlearGear’s treatment of customer Jen Palmer. Ms. Palmer’s partner ordered Christmas presents for her from KlearGear.com in 2008.  Though paid for, the gifts never arrived, and despite multiple efforts she was unable to contact anyone at KlearGear to resolve the problem.  Eventually PayPal cancelled the transaction, and Ms. Palmer expressed her frustration on the for-profit consumer complaint site RipoffReport.com.

Three years later, the Palmers were contacted by KlearGear.com demanding that the negative comments on RipoffReport.com be removed within 72 hours, failing which they would face a $3,500 “fine”. Failure to pay the “fine” would be reported to credit bureaus and have a negative impact on their credit rating.

KlearGear.com cited a non-disparagement clause in its Terms of Sale providing that

In an effort to ensure fair and honest public feedback, and to prevent the publishing of libelous content in any form, your acceptance of this sales contract prohibits you from taking any action that negatively impacts kleargear.com, its reputation, products, services, management or employees.

When Ms. Palmer attempted to take down her comment, she says she was notified that RipoffReport would only allow her to do so if she paid a $2,000 fee (!).  KlearGear.com subsequently attempted to collect the $3,500 fine and followed through on the threat to report it as a delinquent account to credit bureaus. Ms. Palmer’s attempts to challenge the report have been unsuccessful and, as of this writing, it remains on her record.

Reputation fallout

As someone who has taught contract law, I can’t help but observe that from a purely contractual standpoint, it is certainly arguable that KlearGear has no claim here.  Since the order was never fulfilled and the payment was refunded, there would seem to be no contract, and thus no term that could be enforced.

Even more egregious: as TechDirt found, it appears that the clause KlearGear cited wasn’t even part of the Terms at the time the Palmers’ order was placed!

Reputation is at the heart of this case:

KlearGear: KlearGear’s recognition of the importance of branding and reputation is evident in its inclusion of the non-disparagement clause.  Whether such a clause is or should be enforceable is debatable, but at the very least its presence signals an awareness of the importance of reputation management.

RipoffReport.com: Another acknowledgement of the importance of reputation – as well as an exploitative attempt to profit from it – is evident in RipoffReport.com’s refusal to remove the post in question without payment of a $2,000 fee.  RipoffReport’s own Terms of Use claim that, in order to create a complete record, posts on RipoffReport will not be removed. This statement is clearly at odds with the site’s attempt to profit from the Palmers’ alarm over the threats from KlearGear.  Legitimate consumer protection sites are predicated on mobilizing the power of reputation in order to protect consumers, and to empower consumers to protect themselves by providing access to information with which to make educated decisions – not on extortion.

Credit Reporting Agencies: Credit bureaus and reports are, of course, a long-standing use of reputation. Various organizations contribute information about their financial transactions with individual consumers and about consumers’ performance of their obligations during those transactions.  This information is collected in a central database, and the amalgamated information can be consulted in order to assess the credit-worthiness of an individual consumer.  Credit reports can be a double-edged sword – while they nominally provide an important resource, they may also (as in the Palmer case) report and perpetuate inaccurate information. Palmer asserts that the negative credit report resulting from this incident still stands and has resulted in the denial of loans.

BEST PRACTICES SIDEBAR: All individuals should periodically review their credit report for inaccuracies and errors.  The Office of Consumer Affairs has information about how to do so.

Reputation: the double-edged sword

Ironically, in its attempt to guard against negative reviews and reputation damage, KlearGear has managed to attract dramatically more negative attention than Ms. Palmer’s original RipoffReport post ever could have.  The attempt to enforce the Terms of Sale clause drew attention, and those investigating the report then discovered that Ms. Palmer’s original claim was true – it was impossible to contact anyone at KlearGear.  Reports have also emerged calling attention to KlearGear improperly advertising TRUSTe and BBB certifications that it does not hold. KlearGear’s negative Better Business Bureau record, and the debate over whether the extortionate clause should be part of a contract (and whether it was part of an actual contract between the Palmers and KlearGear), have created even more negative attention and reputation damage for the company.

Reputation is a powerful force in our modern culture—exceedingly influential and extraordinarily vulnerable.

UPDATE: as of September 19, KlearGear.com has gone into “social media lockdown”, deleting its Facebook presence and locking its Twitter account, while RipoffReport is defending its practice of refusing to remove reports at authors’ request and its $2,000 “VIP Arbitration” charge.

 

UPDATE 25 November 2013:  Public Citizen is representing the Palmers, and has now sent KlearGear a demand letter requesting (1) that it remove the erroneous credit bureau notice; (2) that it pay $75,000 in damages for the effects of the notice; and (3) that it commit to no longer using the non-disparagement clause against clients.  It will be interesting to see what KlearGear’s response is!

 

UPDATE 21 May 2014:  Despite judgement against it, and despite a clause in the KlearGear ToU that explicitly invokes the laws of the State of Michigan, KlearGear’s parent company is now arguing (publicly, but not yet in court) that since it is resident in France it was not served properly and thus the judgement does not apply.  You have to wonder how much further they’ll go to avoid taking responsibility for their actions...

 

UPDATE June 2014:  Despite its posturing about not being served properly, KlearGear did not raise any formal objections, and accordingly a default judgement has been issued in favour of the Palmers in the amount of $306,750.00.

First Do No Harm (to Research Interests)

The Council of Canadian Academies released its report Accessing Health and Health-Related Data in Canada on 31 March 2015.  A strong and comprehensive piece of work, this report—which was requested by the Canadian Institutes of Health Research (CIHR)—represents the efforts of a 14-member expert panel, chaired by Andrew K. Bjerring, former President and CEO of CANARIE Inc. In its report, the panel assesses the current state of knowledge surrounding timely access to health and health-related data – key for both health research and health system innovation.

Despite the work of multiple experts, and input from additional experts (including from the Office of the Privacy Commissioner of Canada) who acted as reviewers, the approach to privacy taken by the report is a disappointing one.

The starting point seems to be the presumption that data must be shared and that its sharing is of great value, while privacy is treated primarily as an issue of regulating data management rather than of actively protecting individual privacy.  For instance, the value of associating multiple data points related to the same individual is privileged over the privacy risks of such association and data mining, with the report suggesting that the pieces of information be linked prior to de-identification in order to preserve their data mining potential – a stance that privileges research outcomes over the protection of the individual from data mining.  It is difficult to reconcile this with a key finding that “the risk of potential harms resulting from access to data is tangible but low”.
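
For readers unfamiliar with the technique, a minimal sketch of the kind of link-then-de-identify pipeline the report contemplates might look like this (fields, identifiers, and data are all invented for illustration). Note that the linkage itself survives de-identification – a salted hash removes the name, not the profile – which is precisely the privacy risk at issue:

```python
import hashlib

# Invented records from two sources, keyed by a fictional health card
# number (the direct identifier).
visits = [{"hcn": "9876-543-210", "clinic": "A", "diagnosis": "asthma"}]
prescriptions = [{"hcn": "9876-543-210", "drug": "salbutamol"}]

def pseudonym(hcn: str, salt: str = "study-secret") -> str:
    """Replace the direct identifier with a stable pseudonym, so records
    about the same person remain linkable after de-identification."""
    return hashlib.sha256((salt + hcn).encode()).hexdigest()[:12]

# Link first, on the real identifier; then strip it.
linked = []
for v in visits:
    for p in prescriptions:
        if v["hcn"] == p["hcn"]:
            linked.append({
                "pid": pseudonym(v["hcn"]),  # the pseudonym survives
                "diagnosis": v["diagnosis"],
                "drug": p["drug"],
            })

print(linked)  # no direct identifier remains, but the linkage does
```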

The sharing of health data today in Canada is a fait accompli – the Canada Health Act mandates that medical professionals (doctors, physiotherapists, pharmacists, etc.) turn over this sensitive information.  The question is whether de facto de-identification and information sharing is sufficient to protect privacy and, in fact, whether that protection is even the end goal.  This report and its suggested approaches are aimed more at managing privacy concerns (e.g., via the development of a privacy review board similar to a research ethics review) than at actual privacy for Canadians.

Conflict Between Self-Reporting Taxation System and Warrantless Information Sharing

The most recent federal omnibus budget bill included a clause that will allow the Canada Revenue Agency (CRA) to share – without a warrant – personal information of taxpayers with law enforcement agencies when it appears to pertain to a variety of offences. 

 The Canada Revenue Agency (CRA) administers tax laws for the Government of Canada and for most provinces and territories, and administers various social and economic benefit and incentive programs delivered through the tax system.

Section 241 of the Income Tax Act governs the confidentiality of taxpayer information, heavily restricting its sharing.  The section primarily permits sharing for the purposes of benefits and taxation programs.  There is, however, a limited power to share information in relation to extremely serious criminal offences, where the official has, and sets out in writing, reasonable grounds to believe that the information provides evidence of one of the listed offences.

The Taxpayer Bill of Rights adopted by the Canada Revenue Agency includes a right to privacy and confidentiality.  That right provides that:

Only employees who need your information to administer programs and legislation have access to your information.

We also take other steps to protect your information and make sure it is kept confidential. For example, we follow government-wide and internal policies on the security of information and privacy. We also regularly review our internal processes to make sure your information is safe.

Canada’s taxation system relies on taxpayers to perform their own self-assessment, providing it thereafter to the CRA for processing.  A key component in encouraging honesty in that self-assessment is this guarantee of privacy and confidentiality – the idea that taxation information will be kept in its own silo and not indiscriminately shared with other government departments is the correlative of expecting honesty and transparency from taxpayers.

Will the public be aware of the increased scope of information sharing to which their personal tax information may now be subject? 

Can information provided via self-assessment truly be expected to reveal evidence of crime(s)?  Are CRA representatives qualified to make such determinations, especially in the absence of any form of oversight or warrant requirement?  Or is this, in fact, simply another information grab, intended to improve profiling by enlarging the available data pool rather than actually identifying crime or criminals?  It’s not enough for the justification(s) for information sharing to sound on the surface as though they will increase security and safety – they should actually lead to it, or else the privacy and confidentiality of taxpayers should remain inviolate.

Reclaiming YourSelf

“I felt that my silence implied that I *should* be ashamed….”

I LOVE this project, both the explanatory video and the photo shoot to which it refers.  Danish journalist Emma Holten, who had been victimized by revenge porn, speaks to the importance of consent.

We have seen the results of public shaming of the sexuality of girls and women – we’ve seen it in the suicide of Amanda Todd and the death of Rehtaeh Parsons, and in the way(s) others use the threat of releasing/sharing such photos to attempt to extort and manipulate girls and women.  And Holten is correct that this is grounded in misogyny, in the hatred and objectification of women.

It is grounded, too, in the underlying attitude that female bodies and sexuality are wrong.  If these images, those naked bodies, were not presumptively “shameful”, their revelation could not be leveraged as a threat.  The judgments that perpetuate the sharing of such photos (“you shouldn’t have been such a whore”) reinforce and reiterate that shame.

Holten’s response – to refuse to be shamed about her body and sexuality – is a powerful one.  The decision to participate in a photo shoot and release those photos publicly – to actively share images of her body, to refuse to feel shame about her sexuality – is an important one.  By refusing to allow herself to be subverted or silenced, she instead takes the site/sight of her “shame” and transforms it, making of it not only a moment of resistance but a response and a refutation.  A celebration and a reclamation.

Crowdsourcing Facts – Truth or Truthiness? Does Facebook’s new policy run the risk of conflating majority perspectives with truth?

On 20 January, Facebook announced a new feature that will allow reporting of hoaxes and misleading news.  This new reporting feature is intended to reduce the distribution of such stories, though Facebook emphasizes that the stories will simply be flagged/annotated as having been reported as containing false information.  The stories will not, however, be removed, and Facebook won’t review the content to decide whether it’s accurate or not.

One wonders what the impact will be of having a story flagged/annotated as having been reported as false, especially if the annotation carries with it some indication of how frequent such reports have been.  It seems likely that the mere knowledge that something has been flagged will be sufficient to trigger skepticism/uncertainty in a reader, with the degree of that skepticism roughly related to the number of reports, should that number be available.
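
As a thought experiment, the flag-and-annotate model as described could be as simple as a counter attached to each story – no review, no removal, just an annotation whose weight grows with the count. A toy sketch (my own invention; Facebook’s actual implementation is not public):

```python
from dataclasses import dataclass

@dataclass
class Story:
    text: str
    false_reports: int = 0  # crowd-sourced flags; nobody reviews accuracy

    def rendered(self) -> str:
        # Annotate, never remove, never adjudicate.
        if self.false_reports == 0:
            return self.text
        return f"[Reported as false by {self.false_reports} users] {self.text}"

story = Story("A contentious claim about a marginalized community")
for _ in range(37):          # each report just increments the counter
    story.false_reports += 1

print(story.rendered())      # reader skepticism scales with the visible count
```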

Why does this matter?  Well, it’s entirely subjective – there are no guidelines as to what “false” means, and combined with Facebook’s decision not to review the content, it seems likely that posts with contentious subject matter will bear a much higher likelihood of being reported under this system.  What better way to undermine the authority of a report than to have it annotated/identified as somehow false or suspect?  And let’s face it, which perspectives are the most likely to *be* contentious – it seems equally (if not more) likely, therefore, that the voices and perspectives that are marginalized in this way will not be those of the mainstream white middle-class majority.

I’m NOT advocating for Facebook to take a more active role in reviewing and/or removing stories.  Like many, I’ve watched Facebook’s frustrating inaction when it comes to removing groups and pages that are clearly in contravention of its policies.  This isn’t a cry for Facebook to take on (even nominally) a greater role in policing content.  It is instead an expression of concern – a concern I’ve shared before – that mainstream/majority perspectives may be conflated with truth and/or objectivity in such a way that we don’t so much improve the accuracy of our newsfeeds as homogenize them…

25 YEARS: 6 December 1989

Wow.  2014.  If there were ever any doubt that violence against women continues to be an issue, this year has been pretty damn determined to knock any illusions out of us.  Violence continues to be embedded in multiple aspects of our society and, above all, in women’s experiences.  Our illusions of progress are hard-won, I think, and often people are pretty determined to hang on to them.

I remember in grade 9, a friend had an… unpleasant experience after accepting a seemingly innocent ride on the back of a new motorcycle.  I recall how shaken she was when she returned.  I also remember talking to my mom about it that evening.  While it’s been decades since it happened, my recollection is that she was sympathetic and supportive about that one instance, but when I ventured the opinion that a third to half of my friends had gone through similar unwanted experiences to one degree or another, she couldn’t believe it.  Didn’t want to believe it, I understand in retrospect.  But I remember above all the frustration of naming something about the world in which I lived and being told that it had to be exaggerated. 

That disbelief is omnipresent, isn’t it?

In May, one of the responses to the Isla Vista shootings, and the misogynistic public manifesto of the shooter himself, was the creation on Twitter of #YesAllWomen – a hashtag appended to tweets detailing the many ways in which women’s experiences are shaped by misogyny, sexism and fear.  It was an overwhelming moment for many of us – not just the women who were speaking their truths or learning that others shared their fears and/or experiences, but also for people who’d never had to consider such experiences.  As I witnessed the growth of #YesAllWomen and the responses to its messages I felt as though there was the potential for something beyond a viral twitter moment to result from this. 

The Atlantic said of the phenomenon simply that “…the vast majority of men who explore it with an open mind will come away having gained insights and empathy without much time wasted on declarations that are thoughtless” – an insight eloquently echoed in Neil Gaiman’s tweet: “the #yesallwomen hashtag is filled with hard, true, sad and angry things. I can empathise & try to understand & know I never entirely will.”

Jezebel elevated the conversation swirling around the hashtag to an even loftier level stating “…now with trends like the #YesAllWomen hashtag, we are uprooting everyday sexism, the ideas that perpetuate systematic marginalization, outright violence towards women, rape culture, and demonization of women who deign to stand up for themselves, forcing it out and showing just how pervasive and destructive it is.”

It has certainly been a powerful trend, the hashtagging of truth.  Forcing it out and showing just how pervasive and destructive it is. 

In the wake of the Jian Ghomeshi scandal, another harrowing hashtag emerged: #BeenRapedNeverReported.  As its co-creators explained:

Our anger grew.

Not so much about what Ghomeshi was said to have done—we have heard those stories all too often while doing our jobs as journalists—but by the reaction of much of the public.

Why, people scoffed on the internet, screamed on social media, argued in bars, had these women not reported him to the police? Why had they waited years to come out against him? Why were they not making their names public? Could this be a conspiracy of "scorned" women and cast-off girlfriends and Ghomeshi groupies whom he had ignored?

We both knew full well why these women—initially four, now more than double that—stayed mute. As did every woman—and man—who has ever been raped, sexually assaulted, abused, molested, messed with.

We are the silent, shamed majority, each with a horrifying, humiliating I-should-have-known-better it-was-all-my-fault memory and myriad reasons why we keep it all to ourselves, not even telling our closest friends. Sometimes we keep these stories secret for a lifetime, empowering our attackers to revictimise us every time we blame ourselves for having worn the wrong clothes, or having had one too many drinks, or being in the wrong place at the wrong time.

It was also becoming clear through the heated debates on social media that even people we considered enlightened, still don't get it. They have no clue of how common it is for women to be flashed or to have a man masturbate before her while looking her up and down late at night on a subway train, both frightening violations whose impact is often minimised.

So many social media spaces, so many voices speaking up to tell their truths—trying to share and to educate. 

#YouKnowHerName

http://ibelieveyouitsnotyourfault.tumblr.com/

#believewomen

As heartening and inspiring as this swell of solidarity is, I don’t want to classify this as a “tipping point”.  It isn’t a moment of profound change, an “aha” that is magically transforming the world, eradicating misogyny and violence against women.  It is my hope, however, that these spaces and the conversations that take place within them are at least the beginning of the disruption of disbelief. 

Because here’s where we stand today.

28 Ontario women had lost their lives at the hands of violent men in 2014, according to a Huronia Transition Homes report covering the year up to Nov. 26.  That number is already higher by at least two. 

By the numbers

17 minutes: How often a woman in Canada has intercourse against her will

80: Percentage of sexual assaults that happen in victims’ own homes

62: Percentage of victims physically injured in attacks

98: Percentage of charges laid for the least severe form of assault

2: The number of years sexual assault offenders are sentenced to jail on average

Making Sense of the Statistics
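
The first figure alone is worth pausing on.  Taken at face value, one assault every 17 minutes works out to roughly 85 per day (1,440 minutes in a day ÷ 17 ≈ 85), or on the order of 31,000 every year (85 × 365 ≈ 31,000).  Set that against the last two figures, and the gap between the scale of the harm and the scale of the consequences becomes painfully clear.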

It’s time we believed each other.  It’s time we understood what our disbelief allows to persist, and resolved instead to act.  Let’s resolve to go forward hearing and trusting the voices of those who have been victimized and exploited.  Let’s listen when someone tells us they’re afraid or hurt.  It’s time to trust these voices… and our own.

A “Black Box” for your Body?

UPDATE:  Allstate was recently granted a US patent for a "driving-behavior database that it said might be useful for health insurers, lenders, credit-rating agencies, marketers and potential employers."  The program is just at the patent stage for now, but the company says: "the invention has the potential to evaluate drivers' physiological data, including heart rate, blood pressure and electrocardiogram signals, which could be recorded from steering wheel sensors."  Similarly, Jaguar/Land Rover has a number of projects in progress intended to monitor and assess driver focus and brain activity; analyze physiological data; and incorporate predictions of expected behaviour – all in the name of making driving safer (or so we're told). 

Permission has been granted in a Canadian personal injury case to introduce FitBit data into evidence – the collected fitness-tracking information will be run through an analytics program in order to compare it to the activity levels of the general population.  The idea, in this instance, is to quantitatively demonstrate the effects that the plaintiff’s injury has had on her abilities and activity.

Sounds simple, right?  She has the data, she’s volunteering it, and being able to ground her claim in data is likely to be quite persuasive.  
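
The analytics step is worth making concrete.  At its core it is a population comparison – something like the following minimal sketch in Python.  The numbers, variable names, and baseline are entirely invented for illustration; the actual analytics program used in the case is proprietary:

    import statistics

    # Placeholder population baseline and plaintiff tracker data (invented).
    population_daily_steps = [7500, 8200, 6900, 9100, 7800]
    plaintiff_daily_steps  = [3100, 2800, 3500, 2900, 3300]

    pop_mean = statistics.mean(population_daily_steps)
    plf_mean = statistics.mean(plaintiff_daily_steps)

    # The output is the kind of quantitative claim a court might hear:
    # "her post-injury activity is X% of a comparable population's".
    print(f"Plaintiff activity is {plf_mean / pop_mean:.0%} of the population mean")

Even this toy version shows why the direction of the comparison matters: the same arithmetic that supports a claim can just as easily be turned against one.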

Why should we worry about this?  The concern lies in the broader potential implication: situations where the data is *not* volunteered but subpoenaed.  Consider situations where the activity data isn’t used to support an individual claim, but rather to undermine or even dismiss someone’s entitlement to relief. 

Let’s face it – we have seen, and continue to see, a trend towards admitting social network posts and similar information into evidence, even where “privacy settings” have been utilized in an effort to keep the information private.  This information has been ruled presumptively public regardless of any contrary intention expressed by the poster.  Given that, it seems likely that the same presumption of publicness will apply to the data generated by FitBits, heart rate monitors, pedometers, smartphone fitness apps, and other devices that collect and record personal fitness/performance information.

The notion of the quantified self is an alluring idea, but ultimately it is a new iteration of an old quandary.  It starts with the assertion that the more information we can collect and track, the more we can know.  Building on this, we see the growth of the common belief that with this information we can better control our bodies, perfect them. 

Essentially, this represents a normalization of surveillance—only at the level of individual body processes, and it is we who are surveilling ourselves.

Concern about surveillance and its effect on those who are surveilled is not new, nor is it unreasonable.  The use and misuse of surveillance data has led to unexpected disclosures and, in some cases, to the reinforcement of stereotypes and inequalities. 

In our fascination with the quantified self, it is important to ensure that the information we choose to collect—for our own purposes—does not become the equivalent of an airplane’s “black box”, a public record out of our control and potentially used against us. 



Is IP address personal information (in Europe)?

Are IP addresses personal information?  On 28 October, the German Federal Court of Justice referred the question to the European Court of Justice (they who gave us the contentious Google Spain decision).

The case stems from the fact that when users visit German government sites, the sites collect their IP addresses along with other information.  This information is logged and stored “in order to track down and prosecute unlawful hacking”. 

For once, Canada can consider itself well ahead of the curve.   The Office of the Privacy Commissioner of Canada is clear that “An Internet Protocol (IP) address can be considered personal information if it can be associated with an identifiable individual.”  A 2013 report from that Office, “What an IP Address Can Reveal About You”, goes further into the subject, ultimately concluding that

…knowledge of subscriber information, such as phone numbers and IP addresses, can provide a starting point to compile a picture of an individual's online activities, including:

• Online services for which an individual has registered;

• Personal interests, based on websites visited; and

• Organizational affiliations.

It can also provide a sense of where the individual has been physically (e.g., mapping IP addresses to hotel locations, as in the Petraeus case). 

This information can be sensitive in nature in that it can be used to determine a person’s leanings, with whom they associate, and where they travel, among other things.  What’s more, each of these pieces of information can be used to uncover further information about an individual. 
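
To make the report’s point concrete: even without subscriber records, a bare IP address supports simple, freely available lookups that begin exactly the kind of profile described above.  Here is a minimal sketch in Python (standard library only; the address is a placeholder, not tied to any real person):

    import socket

    ip = "93.184.216.34"  # placeholder address for illustration

    try:
        # Reverse DNS often reveals the ISP, employer, or hosting organization.
        hostname, _, _ = socket.gethostbyaddr(ip)
        print(f"{ip} resolves to {hostname}")
    except socket.herror:
        print(f"No reverse DNS record for {ip}")

Commercial geolocation databases and subscriber information take this much further – which is precisely the sensitivity the report is flagging.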

The Federation of German Consumer Organizations has raised concerns that classifying IP addresses as personal information could create delays and onerous administrative and consent requirements for internet use in Europe, or alternatively could necessitate a reconsideration of some of the provisions of the EU Data Protection Directive.  It would be interesting to hear from a similar Canadian body as to their experiences… or perhaps the CJEU should consider some kind of case study in order to incorporate practical experience into its considerations.

Does Knowledge Create a Duty to Warn?

A young model who was contacted and solicited by a “modelling company” via the site Model Mayhem, and who was subsequently drugged, sexually assaulted and filmed, has received permission to pursue her case.

“Jane Doe” was raped in 2011.  Months later, she was contacted by the FBI and learned that other women had been victimized the same way by the same people in the past.  In fact, not only had this happened to other women, but the owners of Model Mayhem were aware of this and the ongoing criminal investigation.

Not, of course, that she found out about the criminal investigation or the prior knowledge through advisory warnings or any attempt to safeguard users.  The information only became available in the course of a court battle between Internet Brands (who acquired Model Mayhem in 2008) and the original owners.  Defending themselves against a claim for money owed on the acquisition, Internet Brands complained that the owners had failed to disclose the fact of the ongoing investigation, something that could expose Internet Brands to civil liability.  Protecting themselves, sure.  Protecting site users?  Not so much. 

So Jane Doe began an action against Internet Brands for $10 million for failure to warn.  In response, Internet Brands invoked s. 230 of the Communications Decency Act, which provides immunity from liability for providers and users of an "interactive computer service" who publish information provided by others. 

In a September 2014 decision, Justice Clifton distinguished between the intent of the clause – to protect sites from being held responsible for user-generated content – and the substance of Jane Doe’s claim.  That claim focuses on the special relationship between the site and its users, and on the fact that Internet Brands had knowledge of the criminal investigation and decided not to disseminate any warning to users.  Accordingly, Clifton decided that the case should go forward, leaving the lower court to determine whether the site had a duty to warn that it failed to meet. 

It is expected that this decision will be appealed.  Nevertheless, there is a moment here – an opportunity to further discuss some very important issues.  I have said before that we must ensure we don’t get sidetracked by technology and instead focus on the fundamental issue(s) in making law and policy… #nowisthetime

When privacy and dignity collide

Recently I was going through a series of training modules based on the Accessibility for Ontarians with Disabilities Act…a requirement for a major project I’m working on.  The principles and ideas aren’t new to me, but in the process I was struck by the assumptions seemingly built into the examples.

For instance: a university registrar’s office is contacted via the Bell Relay Operator (a service that enables telephone calls between hearing people and those who are hearing- and/or speech-impaired).  The operator conveys a request for the personal information contained in a student file.  The correct response, according to the module, is to provide the information requested.  Why?  Because Bell Relay operators are intermediaries governed by a strict confidentiality agreement, and thus the phone call should be understood as a request from the student directly.  To question the use of an intermediary violates the dignity of the student. 

I’ve gotta say, while I understand concerns about asking people to provide (excessive) information, I also have concerns about simply assuming that personal information can or should be provided to a third party without explicit consent.  Is it really damaging in such a case to confirm that it is actually the student making the request (albeit via the relay operator) before handing the information over?  It feels to me as though performing the same confirmation of identity and entitlement that would accompany any direct call requesting personal information is dignity-affirming rather than dignity-reducing.

It seems as though there are competing ideas of dignity here.  I mean, I understand and respect that people shouldn’t be asked “what’s wrong with you” or have to otherwise “justify” their entitlement to accommodation.  At the same time though, I wonder if the emphasis on not asking but instead relying on protections built in (i.e., the confidentiality requirements for relay operators) doesn’t, at least in some ways, imply that there is something somehow shameful about using assistive devices and hence we shouldn’t draw attention to them.  Isn’t it possible to ascertain that personal information is appropriately protected?  Isn’t doing so another part of protecting/respecting dignity and autonomy?  Surely there is a way to honour the expectations of both accessibility and privacy? 

just wondering…