Piercing Anonymity in the Name of Accountability

UPDATE: On 6 February, Yelp filed an appeal against the unmasking order with the Virginia Supreme Court.

 

On 10 January 2014, a Virginia court upheld a request by a local carpet-cleaning company and ordered Yelp to disclose the identities of several Yelp users who had posted negative reviews of the company. In this particular case, the request and decision were grounded in a Virginia law that allows courts to unmask those behind online identities where there is a "legitimate, good-faith" belief that they violated the law, combined with the low threshold applied under that law.

 

On a larger scale, though, the issue dramatizes an ongoing tension between the requirements for accountability and for freedom of expression.

 

Abusers hiding behind anonymity

Recent incidents involving revenge-porn sites and cyberbullying demonstrate that it can be very hard to shut down cyber-harassment, cyber-stalking and other forms of online abuse because of the difficulty of identifying the abuser. In this scenario, anonymity can enhance the abuser's power and perhaps even encourage extremes of behaviour that would be less likely if visibly attached to an offline (or even just a fixed) identity. The hypothesis is that identifiability translates into accountability.

 

On the other hand, there is the principle, dating back to the Federalist Papers and beyond, that anonymity is important and necessary in order to allow true freedom of expression—unhindered by fear of repercussions. 

 

The courts weigh in

In September 2012, the Supreme Court of Canada released its decision in A.B. v. Bragg Communications Inc., a case where a young woman who had been bullied asked the court to allow her to file a defamation suit against her abusers and yet protect her own identity.  Writing for a unanimous court, Justice Abella found that:

 

If we value the right of children to protect themselves from bullying, cyber or otherwise, if common sense and the evidence persuade us that young victims of sexualized bullying are particularly vulnerable to the harms of revictimization upon publication, and if we accept that the right to protection will disappear for most children without the further protection of anonymity, we are compellingly drawn in this case to allowing A.B.’s anonymous legal pursuit of the identity of her cyberbully.

 

Users seeking protection behind anonymity

These extremes on the continuum make the issue seem clear – or at least clearer.  But what about when we’re not talking about cyberbullying or revenge porn?

 

What about online reviews on sites like Yelp?  A 2013 study suggests that such sites are growing in importance, with surveys indicating that more than 75% of people say they rely on reviews from such sites as much as or more than on personal recommendations.  In an attempt to encourage honest reviewing, eBay changed its policies in 2008 so that sellers could no longer leave negative reviews of buyers on the site.  A recent (criminal) case in Ottawa demonstrates that freedom to post without (online) repercussions may not be sufficient. In that instance, a negative review of a restaurant was posted on RestaurantTHing.com, resulting in an escalating series of responses and retaliations from the restaurant owner, ranging from posting the customer's personal information online to posing as the customer to create a fake dating profile and sending inappropriate emails to the customer's co-workers and employers.  Likening the behaviour to cyberbullying, the court sentenced the perpetrator to 90 days for criminal defamation.

Could the harassment have been prevented had the customer used a pseudonym or posted her review anonymously?  Was it the fact that the customer used her own name that opened her up to these attacks, or were they the result of an extreme overreaction on the part of the restaurant owner?  Will future customers feel unable to comment truthfully for fear of reprisals?  Will the ability to review anonymously or under a pseudonym offer sufficient protection to keep reviews trustworthy?  And if so, will a system where courts order sites like Yelp to disclose the identities of reviewers ultimately remove any protection that anonymity might provide, undermining the credibility of reviews and review sites?

 

No easy answers

Issues in our evolving digital culture have major implications in real life. We want to hold people accountable while still making it possible to say the scary things, to voice unpopular opinions free from reprisal. 

Setting policies and developing laws that balance the transparency of identifiability against the protective powers of anonymity and pseudonymity isn't easy. This issue is an important one, and one that will continue to be fought out on a case-by-case basis…

 

 

Opting Out of the Gmail/Google+ email crossover

This week (Thursday, 9 January 2014) Google rolled out a Gmail/Google+ cross-platform feature that allows you to send email to people on Google+ even if you don't have their email address.  The trick is that Google+ has up-to-date contact information, and so Gmail will "helpfully" deliver the email for you.

Concerns have been (and should be) raised about the feature itself, and about Google's decision to roll it out with a default setting that allows anyone on Google+ to reach you this way, although users can restrict which circles it applies to or opt out entirely.

There is a quick way to opt out:

  1. Open Gmail on a computer.
  2. Click the gear in the top right corner.
  3. Select Settings.
  4. Scroll down through the General tab to the Email via Google+ section.
  5. Click the drop-down menu and choose Anyone on Google+, Extended circles, Circles or No one.
  6. Click Save Changes at the bottom of the page.

 

Writing the Right Law, Not Just a Law

In the wake of Rehtaeh Parsons's death on 7 April 2013, Nova Scotia became the first Canadian jurisdiction to pass cyberbullying legislation, with the Cyber Safety Act becoming law on 10 May 2013.  At the time of its introduction, multiple concerns were raised about the breadth of the Act, the vagueness of its definitions, and the constitutionality of its approach.  This week the first formal complaint under the Act was made, by Nova Scotia legislator and actress Lenore Zann, after a teen posted a nude still from an acting performance to Twitter.

The quick turnaround of Nova Scotia's Act was a result of the widespread alarm and concern that erupted when the general public was made aware of the alleged rape, harassment and eventual suicide of Rehtaeh Parsons.  It is always tempting in the wake of a shocking event to react quickly to prevent future recurrences.  Unfortunately, the notion that the best response is to quickly draft a law to deal with the issue and move on is mistaken.

To put it bluntly, it’s important to write the right law, not just a law.   

 


A missed opportunity

For an example of how this approach can go terribly wrong, consider the Video Privacy Protection Act in the United States.  It was drafted and passed by the US Congress in 1988, after a media outlet acquired and published Robert Bork’s videotape rental history during his nomination to the US Supreme Court. The Act protects the privacy of information about rentals of “pre-recorded video cassette tapes or similar audio visual material.” 

The Act is an excellent example of the failure of law to deal effectively with technology.  In the wake of a perceived violation of privacy, legislation was drafted and passed to deal with that particular technology and prevent that particular kind of violation from recurring. So although the Electronic Privacy Information Center describes the Act as "one of the strongest protections of consumer privacy against a specific form of data collection," its specific link to a particular (and increasingly outdated) medium means it also constitutes a lost opportunity to apply similar, meaningful privacy protections against data collection more broadly in the US.

 

What is Cyberbullying?

The recent wave of "cyberbullying" legislation poses the same risk—leaving the problem inadequately addressed by an ill-conceived legislative response to a poorly understood issue. These concerns were raised at the time the Bill was introduced; now that a complaint has come forward, it is a good opportunity to re-examine the issue.

Conventionally, bullying is defined as having three components: 

  • Aggressive behaviour that involves unwanted, negative actions.
  • A pattern of behaviour repeated over time.
  • An imbalance of power or strength.

Dan Olweus, psychologist and noted author on the topic, says, "A person is bullied when he or she is exposed, repeatedly and over time, to negative actions on the part of one or more other persons, and he or she has difficulty defending himself or herself."

Experts disagree over whether these same characteristics are present in cyberbullying.  Certainly traditional bullying requires physical contact or shared physical space, while cyberbullying does not.  Because there is no requirement for co-location, cyberbullying may arise in multiple online spaces.  This—combined with the persistence of information in online spaces—means that cyberbullying can be seen or participated in by many more people, in many more places.  Online spaces can also facilitate anonymity or pseudonymity among those engaged in the bullying, which further increases the power differential and the vulnerability experienced by the person being bullied.

After a Nova Scotia teen tweeted a topless photo (a still shot found online, from an episode of "The L Word" in which Ms. Zann was an actor) with the question "What happened to the old Lenore?", the legislator contacted his parents, his school principal and the local school board, as well as the police and the CyberSCAN unit, alleging that his actions constituted cyberbullying.

The new Nova Scotia law defines cyberbullying as follows:

"cyberbullying" means any electronic communication through the use of technology including, without limiting the generality of the foregoing, computers, other electronic devices, social networks, text messaging, instant messaging, websites and electronic mail, typically repeated or with continuing effect, that is intended or ought reasonably be expected to cause fear, intimidation, humiliation, distress or other damage or harm to another person's health, emotional well-being, self-esteem or reputation, and includes assisting or encouraging such communication in any way;

 

Discussing her complaint, Ms. Zann pointed to a three-hour discussion that took place on Twitter between herself, the youth and others, claiming that "It's not necessarily the image itself, but the fact that someone is tweeting that at me and saying, 'Hey Lenore Zann, where's the old Lenore now,' and calling it porn and things like that. I found that really humiliating and harassing."  It is also worth noting that under the Act's expansive definition, the tone and content of Ms. Zann's own responses during that encounter could arguably open her to similar allegations of cyberbullying.

 

Meaningful protections

Ms. Zann's invocation of humiliation brings this incident within the broad terms of the Cyber Safety Act's definition of cyberbullying, but it raises serious questions as well. Is this the sort of situation cyberbullying legislation was created to address?  Should politicians be able to use cyberbullying legislation to stifle questions and criticisms?  Are they the vulnerable persons such legislation was meant to protect?

These are emerging quandaries of digital life that have yet to be put to the test. As we grapple with new cultural and legal issues presented by interactions in online spaces—striving to balance protection of the vulnerable with freedom of expression, balance privacy with safety—complexities and long-term repercussions must be considered.

It isn't enough to just do something; it's important to do the right thing.

 


What a Turkey: Enforcing Community Norms

Over American Thanksgiving weekend 2013, the twitterverse was abuzz over a clash between reality TV producer Elan Gale and fellow airline passenger "Diane"—a confrontation live-tweeted by Mr. Gale himself.  Mr. Gale first recounts Diane's conversation with a flight attendant, clearly disapproving of her behaviour.  The confrontation then takes off with Mr. Gale sending notes (thoughtfully pictured in his Twitter updates) to Diane in her seat, castigating and insulting her.  The tale ends with both passengers getting off the flight, whereupon Diane slaps Gale.

Though Mr. Gale's accounts of the confrontation were initially greeted with great delight, subsequent discussion delved into other issues, including Mr. Gale's own behaviour, readings of the incident that focussed on the underlying misogyny and privilege implicit in that behaviour, and concern about the ethics of live-tweeting private conversations.

On 2 December, Gale admitted that the whole fracas (and indeed Diane herself) was his own invention and had not happened.  Why a blog post about it then?  Well, because the "event" and people's responses to it shine a light on several interesting aspects of online privacy, identity, and reputation. It is an opportunity to take a closer look at the phenomenon of publicly exposing the behaviour of others, and/or online "shaming." Is this a new assault on privacy rights? Or is it, in fact, an extension of traditional practices of social regulation, created and employed by communities to reinforce chosen social norms?

Online shaming

Daniel Solove has written about the "dog poop girl" – an incident in which photos of a woman who let her dog poop on a metro train and did not clean it up were posted to the internet.  Solove uses this incident as an example of how privacy is not a binary public/private switch but needs to be understood in a more nuanced way.  Just because this happened on a public train, he says, doesn't automatically make it public.  Instead, privacy issues need to be assessed by looking at the situation in its full context, including the way(s) that a particular incident's character may be changed by stripping it of context, disseminating it and making it "permanent and widespread."

The genesis of the Holla Back project in 2005 shows a similar appeal to community shaming.  A woman riding the subway one afternoon had a man sit down across from her, take his penis out of his pants, and begin masturbating.  Uncomfortable, she took a picture of him.  She reported the incident to a police officer, but she also posted the photo and an account of the incident on Flickr and Craigslist.  The photo and report were reproduced in the New York Daily News the next day, which led more than two dozen people to come forward with similar complaints about the offender, who was then arrested and charged with public lewdness.  Interestingly, the perpetrator has since spoken publicly about what he portrays as the inappropriateness of her decision to publicize the photo and his actions.


What these incidents have in common is the aspect of public shaming – what could have been a fleeting moment is instead recorded and made publicly available.  In both cases, public outrage erupted – people disapproved of these actions and articulated that disapproval (albeit targeting the individual as much as, if not more than, the action itself).

This is not a phenomenon unique to the internet.   Rather, it is simply the technologization of the traditional means of establishing and maintaining mainstream community norms. 

Community norms and their enforcement

Norms are social constructions, the product of tacit negotiation and collective awareness. They are enforced both implicitly and explicitly, not necessarily via central authorities but within a community.  Social norms are promulgated and enforced through social interactions, and a key regulatory power of the community is shaming.  It is important to note, too, that shaming tends to target the individual, not the action.

While shaming can involve a direct confrontation with the norm-breaker, more often it is achieved via gossip.  Although traditionally disdained and dismissed as idle talk or rumour, gossip serves as an information exchange that promulgates, maintains and enforces social norms.  Shame acts to stigmatize those who transgress norms, and the threat of shame deters other individuals from such transgressions.

Identity and Reputation: Shaming the shamer

The fact that these incidents of shaming use technology or take place online does not make this a new phenomenon – community norms are evident in the response to the “dog-poop girl” or the subway flasher incidents.  Those responses do not occur in a vacuum, and it should not be presumed that they are of no effect.  Rather, when the community responds so strongly with disapproval, social norms about appropriate behaviour are articulated, enforced and reinforced.  This is exactly the traditional function of gossip and information sharing, and technologization does not change that.

In the end, the scenario that Elan Gale concocted and disseminated did generate community outrage, but not as he intended.  Mr. Gale's efforts to focus disapproval on "Diane" backfired, and online disapproval and shaming turned towards him, even more so now that he has revealed the whole thing was a hoax. Further evidence, perhaps, that community norms are socially negotiated: interpretation of and response to a particular action cannot always be reliably controlled or predicted, but rather occur organically through community response.

Hoax or not, this incident and others like it show us the way in which the techno-social – the intersection of social norms and emerging technologies – can teach us lessons about ourselves and our world.  Some of the lessons may be new – but not all.  Some of them are as old as the hills.

 

 

Changing Our Default Settings: it’s time for a cognitive change

Privacy and “leaving the door open” online:

On 8 November 2013, a federal judge in Vermont ruled that information that is available through a P2P server is information in which there can be no right of privacy (United States v. Thomas, 2013 U.S. Dist. LEXIS 159914 (D. Vt. November 8, 2013)).

The ruling came in a challenge over the admissibility of information that had been gleaned via automated searches of P2P streams – the defendants claimed that the information had been illegally taken from their computers and therefore should be inadmissible.  The judge did not agree, finding instead that information on a P2P network is de facto public information – if something can be accessed via the Internet then the “door has been left open” and it is considered public. 

This perception of information on the Internet (or accessible via the Internet) as public rather than private is not restricted to analyses of P2P.  In a previous post I explored the legal treatment of information on Facebook, finding that Canadian courts have tended to allow information from social media profiles to be admitted as relevant and available, regardless of whether privacy settings have been used or not.

The not-so-subtle bias of “default” settings

Perhaps, notionally, default settings are only a starting point, able to be changed or fine-tuned by users later, but in practice they exert quite a powerful force.

This is particularly true in the relatively new world of social media networks. People who when moving into a new residence would not hesitate to change the locks, put curtains on windows, and grow a hedge for privacy may not have the corresponding experience or confidence to take similar measures online.

“Default settings” in the technology world imply a standard configuration and/or best practice. Even Facebook CEO Mark Zuckerberg has asserted that default settings are reflective of broader “social norms” and aim to reflect current standards and values. 

In fact, there are powerful business incentives at work in all aspects of design and implementation of social networking sites. The trend for social network companies to set default settings that favour information sharing helps maximize the commercial potential for these businesses—especially given that users are unlikely to change default settings.  In practice, default settings exert normative force.

Placing the exclusive onus on the end-user to be hyper-vigilant is unrealistic and unfair. The presumption that complying with suggested “norm”-based defaults indicates a waiving of privacy expectations is incompatible with the privacy interests of ordinary internet users.

Time for a change

What is called for is a change in our cognitive defaults as a society when it comes to the publicness of information and the Internet.  It is clearly no longer enough to examine individual sites and technologies to see if they can be considered sufficiently private.  Rather, we must invert our cognitive defaults, change our way of thinking so that privacy is our default assumption rather than an exception.

Only when privacy rather than publicity is the expectation, the “norm”, will we have a chance to cultivate a truly privacy-protective environment including user-centric site defaults, respect for user choices, and—in cases like United States v. Thomas—a high threshold applied when finding that expectation of privacy has been waived.

 


 

 

Social Media Employment Background Checks: Sounding the Call for Regulation

Digital "footprints" on the internet may have an impact both pre-employment and post-employment, and these impacts may disproportionately affect non-mainstream groups whose information is being assessed against standards that are undisclosed and unregulated.

A recent study (released 21 November) by Alessandro Acquisti and Christina M. Fong of Carnegie Mellon University explores this phenomenon.  Starting with actual information revealed on social media sites, the team created resumes, professional network profiles, and social network profiles.  The resumes were submitted to 4,000 real job openings with US employers.  The online profiles were then tweaked by the researchers to reveal either the religion (Muslim or Christian) or the sexual orientation (homosexual or heterosexual) of the candidate, while otherwise remaining equivalent to each other.

Interestingly, the study did not find that sexual orientation created significant differences in interview requests, but across the US the “Muslim” candidate received 14% fewer interviews than did the “Christian” applicant.  The variation by religious affiliation was especially pronounced when correlated with conservative political indicators by geographical region (areas that favoured conservative candidates in the last national election).  An online component of the study using the same (manipulated) profiles produced similar responses. 

Further, the study suggests that between one in ten and one in three employers were searching online for information about job candidates.


This number is at the low end of the scale, but not inconsistent with previous research.  For instance, a 2007 survey of 250 US employers found that 44% of employers used social media to check into the backgrounds of job candidates.  2006 survey data from ExecuNet demonstrates a similar pattern, with 77% of executive recruiters using web search engines to research candidates and 35% of those stating that they had ruled candidates out based on the results of those searches.  In 2009, Harris Interactive research showed 45% of employers doing background checks that included social media, while a 2012 CareerBuilder study showed that two in five employers used social media to check out prospective employees, and of those who did not do so, 11% indicated they planned to start.

Although the Carnegie Mellon study was focussed on the effect of two narrow characteristics, the authors expressed concern that particular identifiers may not be the only factor exerting an influence on employment decisions. The mere fact a candidate chooses to post such information online may itself lead to inferences and conclusions by prospective employers. 

Acquisti & Fong note that prospective employers who inquire about religious affiliation during an interview open themselves to liability under federal or state equal employment opportunity laws—and also that the US Equal Employment Opportunity Commission has publicly cautioned against the use of online searches to investigate protected characteristics. 

Similarly, in Canada the liability lies not explicitly in the act of searching, but in whether hiring decisions are being made on the basis of inappropriate criteria.  In other words, it isn't just a matter of the information found in such a search, but also of the (potentially unfair, possibly gendered, classed or sexualized) inferences that may be drawn from it.

Though the study focussed on pre-employment checks, the issue of online searches does not become moot after an applicant has been hired.  PIPEDA applies to personal information about any federal employee, and other jurisdictions may also cover such information under some legal framework.  This protection is important because online searches may be a tool in disciplinary investigations. 

Self-censorship or meaningful regulation?

The conventional wisdom, of course, is always that individuals must take responsibility for their personal information and should carefully control what information is available online. 

This study is another confirmation that employer (and other institutional) use of online background searches, including social media sites, is an ongoing and increasingly normalized part of the employment relationship.  Given that this information is being accessed and used in pre- and post-employment situations, it is clear that such practices should be examined and regulated. This is necessary to ensure that, at the very least, only information that is correct and relevant will be used, and that the individuals impacted are aware of its collection and use. Mechanisms for the challenge, correction and redress of misinformation need to be established.

This is an emerging and accelerating challenge to individual privacy rights.  Policing the misuse of personal information should not be left as an exclusively individual responsibility – systemic utilization of such information requires a systemic policy and response. 

 

THE REPUTATION FILES: Pseudonymity, Exposure, and Impact(s)

The tale of Belle de Jour has taken a new turn, with an ex-boyfriend turning to the courts and media with his claims that his association with the sex worker/academic/blogger has destroyed his career and that her claims of having been a sex worker are fabricated.

The site Diary of a London Call Girl debuted in 2003, purporting to be a record of the experiences of a young woman as she began and then continued to work for an escort agency.  The author identified herself as Belle de Jour, and as the site became more popular and the franchise grew to include published books and a spin-off television series, there was much public speculation as to Belle's "real" identity.

That speculation ended in November 2009, when Dr. Brooke Magnanti outed herself as Belle de Jour in the Sunday Times.  Dr. Magnanti is a research scientist at Bristol University in the UK, and has stated that she was employed by an escort agency for 14 months while completing her Ph.D. thesis. 

Now she is being sued in a Scottish court by Owen Morris, with whom she had a six-year relationship.  Writing as Belle de Jour, she bestowed upon Mr. Morris the pseudonym "The Boy" – he claims that when she revealed herself publicly, his identity too was exposed.  Accordingly, Mr. Morris is suing her for damages and loss of earnings on the basis that she cost him his job and RAF career, breached his privacy, and defamed him.

She Says: Dr. Magnanti

The use of the pseudonym "Belle de Jour" allowed Dr. Magnanti to journal her experiences and later to commodify the popularity of her site while continuing, separately, to build her own academic career.  Clearly, the use of a pseudonym to keep these "selves" separate not only allowed Dr. Magnanti to develop professionally, but also allowed her to build strong parallel reputations.  It is worth noting that both the university and her publisher have been publicly supportive of her: Bristol University has stated that Dr. Magnanti's past is irrelevant to her university position, and her publisher has lauded her for the courage it took to come forward.


An entry on her blog the day of the public revelation spoke about the importance of reconciling the different aspects of her personality, and denied that her offline self was any more “real” than her pseudonymous online self.  Both in her decision to unmask herself and in her subsequent comments, Dr. Magnanti demonstrates many key aspects of the interrelationship between privacy, identity and anonymity.  Choosing to control information about herself via the use of a pseudonym allowed her the privacy and freedom to develop both selves, both reputations, without confusion or dissonance emerging between them. 

He Says: Owen Morris

In contrast with Dr. Magnanti’s focus on authenticity, Mr. Morris’s lawsuit strikes a dissonant chord.  While alleging defamation and breach of his privacy, he is also challenging her backstory, claiming that the blog was begun before her time in London and that in fact the clients and characters she detailed were fabrications built upon her sex life with him. 

This dichotomy is interesting – his claims rest on the idea that being associated with someone of her reputation (as sex worker, not as academic) has damaged him and his reputation professionally and personally.  Simultaneously, however, he seeks to undermine the validity of that reputation altogether, denying that he was ever involved with a sex worker in the first place.

Rebuttal

Dr. Magnanti, in a blog post dated 11 August 2013, responds to Mr. Morris's claims in the media and in court.  She lists significant documentary evidence that she is prepared to produce, including entries from his own journal in which he acknowledges knowing that she was a sex worker, tax records showing earnings and the appropriate taxes paid on those earnings, diaries of her engagements during those years, and more.

At the close of her entry, Dr. Magnanti states that:

It matters because this is a concerted and direct attack on my work as a writer. Is it libel to say someone wasn't a sex worker? Well, it's libel to say someone was lying. When I was anonymous, being real was my main - my only - advantage. Mr. Morris and the Mail on Sunday have made some frankly nonsense claims, and I will be going to town on this.

Because I know people do not trust the word of a sex worker, that is why I saved everything.

Reputation Wars

What then is the real issue here? Whose reputation is at risk and why?

Sex work is not illegal per se in the UK.  Accordingly, the question of whether or not Dr. Magnanti’s experiences are real or fictionalized seems like a red herring. 

Was Mr. Morris damaged by Dr. Magnanti’s activities as a sex worker?  Was he damaged by her revelation of her “real” identity in 2009?  Have either of these in fact led to him suffering personal and professional harm that should be redressed?

Or is it Mr. Morris’s pride that has been damaged?  His insistence that he never knowingly slept with a sex worker seems to speak to a particular notion of masculinity that might be seen as “diminished” by such an association. 

This case revolves around many complexities of identity and reputation.

The case is in the Scottish courts and to my knowledge no decision has yet been reached.  It will be interesting to see what approach is taken in assessing whether Mr. Morris has indeed suffered tortious injuries, and in articulating what those injuries might in fact be. 

PROTECTING CONSUMER PRIVACY: Corporate Intrusions Increasingly Being Taken Seriously

On 29 October 2013, the Federal Court of Canada released its decision in Chitrakar v Bell TV. In a victory for privacy rights under PIPEDA, the court decided against Bell TV and awarded damages of $21,000 after the company obtained a customer's credit bureau report without his knowledge or consent.


The award marks a significant advance for consumer privacy rights by taking the violation of those rights more seriously and awarding exemplary damages – the first time this has been done under PIPEDA and a clear signal from the court that organizations are expected to conform to the spirit as well as the letter of their privacy responsibilities. 

The case revolves around the following sequence of events: Bell ran a credit bureau report on new customer Mr. Rabi Chitrakar on 1 December 2010, when he ordered satellite television service.  When the equipment was delivered on 31 December 2010, Mr. Chitrakar signed what he understood to be a Proof of Delivery form.  Bell later inserted his signature on its standard TB Rental Agreement, which includes a clause consenting to Bell doing a credit check.  After Mr. Chitrakar discovered that a credit check had been done, he filed a complaint with Bell in March 2011 and continued to seek an explanation from Bell, which gave him what the court characterizes as "the royal runaround".  In finding for Mr. Chitrakar, the Federal Court assessed the award at $10,000, with another $10,000 in exemplary damages due to Bell's conduct and $1,000 for costs.

PIPEDA, the Personal Information Protection and Electronic Documents Act, allows a complainant to proceed to the Federal Court of Canada after the Privacy Commissioner's investigation and report.  When this is done, the court has the power to order an organization to correct practices that do not comply with the law, and to publish notices of the changes it expects to make. It can also award compensation for damages suffered.  This award was significantly higher than previous awards under PIPEDA.

The first award of damages under s. 16 of PIPEDA took place in 2010, in Nammo, a case dealing with erroneous information provided on a credit check.  Despite describing the provision of false credit information as being as "intrusive, embarrassing and humiliating as a brief and respectful strip search", only $5,000 was awarded.

Clarifying “proof of harm”

In making its determination in Nammo, the Court referred to the Supreme Court of Canada decision in Vancouver (City) v Ward, where damages were awarded despite there being no maliciousness, no intention to harm, and no harm shown.  The decision to make an award was based on the reasoning that awards of damages serve multiple purposes, including compensation, vindication and deterrence, and accordingly on the recognition that even where no harm needs to be compensated for, damages may still be warranted where the aims of vindication and/or deterrence are met.

In the Chitrakar case the damage award was not based on proof of harm.  Rather, the court writes:

[t]he fixing of damages for privacy rights’ violations is a difficult matter absent evidence of direct loss.  However, there is no reason to require that the violation be egregious before damages will be awarded.  To do so would undermine the legislative intent of paragraph 16(c) which provides that damages be awarded for privacy violations including but not limited to damages for humiliation.

I’ve written before about the US requirement for proof of harm in such cases, juxtaposing it against the formula set out in the Ontario Court of Appeal’s Jones v Tsige decision (an important decision because it established a common law tort of privacy in Ontario) and arguing that the breach of privacy is in itself the harm, and that the meaning and purpose of damages being available for privacy breaches are compromised when proof of (additional) harm is required.

The Chitrakar case underscores the emerging recognition that a privacy violation is in itself harmful and deserving of redress, this time articulated by the Federal Court of Canada in a decision under PIPEDA. This is further evidence of an increased awareness that privacy is an important part of the right of dignity and autonomy, and that redress for violations is justified once the infringement has been established, regardless of whether any further harm has ensued.

This decision is good news—for advocates of privacy rights, for scholars of the interpretation and application of privacy law, and for ordinary consumers trying to protect their personal privacy and dignity.

**thanks to David T.S. Fraser for finding the decision and putting it on Google Docs.

 

Case Study: Scientific American/Biology Online, reputation, expression, and the “urban whore”

The outset

When author Philip Hensher was recently asked by Professor Andrew Webber to write a book introduction for free, he declined to do so.  Frustrated, Webber called Hensher "priggish and ungrateful" on Facebook.  As the Guardian reported on 11 October 2013, this led to a storm of support for Hensher and a lively discussion fuelled by the growing frustration of authors who are increasingly expected to donate their time and skills for free.

The outrage

Contrast this with the commotion that same weekend over at Scientific American online.  Dr. Danielle Lee was asked by one of Scientific American’s partner publications, Biology Online, to write a blog piece for them for free.  When she declined, Dr. Lee (who writes for Scientific American as The Urban Scientist) was asked angrily by the site’s editor "Are you an urban scientist or an urban whore?"

A bad enough situation, it seems to me, especially when we take note of the particularly sexualized way anger was expressed at Dr. Lee, reminiscent of the Kathy Sierra or Anita Sarkeesian attacks.  For more discussion of these attacks and of gender in online spaces, see Sarkeesian's TED talk on sexual harassment and cyber-mobs or Jessica Megarry's blog from August 2013.

The outcry

Dr. Lee blogged about the experience on her Scientific American blog, and then over the weekend—controversially—Scientific American added insult to injury by removing her post, tweeting that "@sciam is a publication for discovering science. The post was not appropriate for this area & was therefore removed."

In a widely shared open letter, Isis the Scientist criticized Scientific American's actions, not only calling out the relationship between Biology Online and Scientific American but also challenging the assertion that the post was not about science and/or not appropriate.  She argues eloquently:

You see, science is about discovery, yes. But, more importantly, at its core science is about discovery with integrity. It’s about accepting data for what they are, even when they challenge our view of the world. It’s about reporting your conclusions, even when they are not popular and create conflict. Science is about chasing the truth and uncovering more of that truth with each new discovery. Not obscuring it.  I became a scientist because science is about honesty and curiosity and that little moment of excitement when you’re holding something brand new and you can’t wait to show it to the world.

I have a vision of what science should look like. When I close my eyes, I see a community where we are fascinated by the world around us. Our core value is, indeed, discovery, [t]he more senior of us extend our hand to raise up those more junior than us.  We mentor them, care for them, love them, and protect them. We respect and value that our diversity makes us stronger. We empower those folks to feel like super heroes, because they are. They really, truly are. More so than any character, these folks have the power to shape our future for the better.

What you’ve taught me today is that you do not share my values. You may post glossy, sexy pictures of science, but you are not interested in discovery. You do not value truth, honesty and integrity – the core values that I hold most dear as a scientist.  Most importantly, you did not empower my friend.  You shut her down when she shared that she had not been respected. You put the dollar before the scientist.

Scientific American's decision to remove Dr. Lee's post was roundly criticized by members of the scientific blogging community. Subsequently the post was restored on 14 October, along with a revised explanation indicating that the editorial decision to delete the post was based on Scientific American's inability to "…quickly verify the facts of the blog post and consequently for legal reasons we had to remove it." In other words—to protect the reputation of the individuals involved and the site itself.

(Biology Online also published a notice that the offending editor had been fired, reiterating the collegial aims of their site, and thanking those who made them aware of the situation.)

The outcome

Reputation and trust cut both ways.

What makes this debate something more than an internecine squabble in the blogosphere?  This was not just a polarized discussion of bloggers versus sites.  Nor was it about Dr. Lee herself. 

A significant part of what took place was negotiated at the level of public reputation and trust.  The response by the "blogosphere" made it clear not only that Scientific American's removal of Dr. Lee's post was unacceptable, but that this behaviour in general was unacceptable, and thus that Biology Online and perhaps all of its partners were now suspect and should be avoided as a result.  As Mika notes:

Trying to make it as a writer in the current era is as ridiculous with all the “We’ll pay you with exposure” or “Intern for 2 years and maybe we’ll hire you at minimum wage.” Working for free isn’t working. Biology isn’t my beat, but if it’s yours, beware: Biology-Online is not worth your effort.

Why do I say anything, when so many others have already said it? Because the practice of science is rough, figuring out which career matches the lifestyle you want to have, navigating industry-academia balances, and everything else. If we can share lessons with each other, it’s a bit easier to cope. Now you’ve been warned off a predatory site, know this isn’t considered normal or acceptable behaviour, and won’t be blindsided quite as hard if something like this happens to you.

These sites depend on content to drive traffic, and on that traffic to drive advertising revenue.  Here is the real reputation issue at the heart of this controversy: when disputes arise that undermine the overall legitimacy of such a site within the self-same community it targets, that can be fatal to the site's very existence.

With that kind of negative momentum building, it is hardly surprising that both Biology Online and Scientific American backed down on their previous positions, mumbling mea culpas as they went…

 

Playing the Privacy Blame Game, or the Fallacy of the “stupid user”

Meet the “Stupid User”

We’ve all heard it.

Whenever and wherever there are discussions about personal information and reputation related to online spaces—in media reports, discussions, at conferences—it’s there, the spectre of the “stupid user.”

Posting “risky” information, “failure” to use built-in online privacy tools, “failure” to appropriately understand the permanence of online activities and govern one’s conduct and information accordingly—these actions (or lack of action) are characteristic of the “stupid user” shibboleth. 

These days when the question of online privacy comes up, it seems like everyone is an expert.  Conventional wisdom dictates that once we put information online, to expect privacy is ridiculous.  "That ship has sailed," people explain: information online is information you've released into the wild. There is no privacy, you have no control over your information, and – most damning of all – it's your own fault!

Here is a sampling of some recent cautionary tales:

  • Stupid Shopper: After purchasing an electronic device with data capture capabilities, a consumer returns it to the store.  Weeks later, s/he is horrified to discover that a stranger purchased the same device from the store and found the consumer's personal information still on the hard drive. Surely only a "stupid user" would fail to delete their personal information before returning the device, right?

  • Stupid Employee: A woman is on medical leave from work due to depression and receiving disability benefits.  While off work, after consultation with her psychiatrist, she engages in a number of activities intended to raise her spirits, including a visit to a Chippendales revue, a birthday party, and a tropical beach vacation.  Her benefits are abruptly terminated, and the insurance company justifies this by indicating that, having viewed photos on her Facebook page showing her looking cheerful, it considered her to not be depressed and able to return to work.  I mean, really – if you're going to post all these happy pictures, surely you were asking for such a result?  Stupid not to protect yourself, isn't it?

  • Stupid Online Slut: An RCMP Corporal is suspended and investigated when sexually explicit photographs in which he allegedly appears are posted to a sexual fetish website. Surely anyone who is in a position of responsibility should know better than to take such photos, let alone post them online.  How can we trust someone who makes such a stupid error to do his job and protect us?

How Are These Users “Stupid”?

The fallacy of the stupid user is based on the misconception that individuals bear exclusive and primary responsibility for protecting themselves and their own privacy. This belief ignores an important reality–our actions do not take place in isolation but rather within a larger context of community, business, and even government. There are laws, regulations, policies and established social norms that must be considered in any examination of online privacy and reputation.

Taking context into consideration, let’s examine these three cautionary tales more closely:

  • Consumer protection: Despite the existence of laws and policies at multiple levels regulating how the business is required to deal with consumers' personal information, the focus here was shifted to the failure of the individual customer to take extra measures to protect their own information.  Any consideration of whether the law governing this circumstance is sufficient, or of the store's failure to meet its legal responsibilities or even follow its own stated policies, is sidetracked in favour of demonizing the customer.

  • Patient privacy: An individual, while acting on medical advice, posts information and photos on Facebook—which has a Terms of Use that specifically limits the uses to which information on the site may be put—and loses her disability benefits due to inferences drawn by the insurance company based on that information and those photos.  There are multiple players (employer, insurance company, regulators, as well as the employee) and issues (personal health information, business interests, government interests) involved in this situation–but the focus is exclusively on the user's perceived lack of judgment.  We see little to no consideration of the appropriateness of the insurer's action, and no regard for the fact that social networks have a business model based on eliciting and encouraging disclosure of personal information in order to exploit it, as well as an architecture specifically designed to further that model.  Instead, all attention focuses on the individual affected and her responsibilities—the user's decision to put the information online.

  • Private life: Criminal law, a federal employer, administrative bodies, and the media—all these were implicated when an RCMP officer was suspended and subjected to multiple investigations as well as media scrutiny after sexually explicit photographs in which he allegedly appears were posted on a membership-only sexual fetish website. In this case yet again the focus is on the individual, ignoring the fact that even were he to have participated in and allowed photographs to be taken of legal, consensual activities in off-work hours, there is no legal or ethical basis for these activities to be open to review and inspection by employers or the media.

RE-THINKING THE “STUPID USER” ARCHETYPE

Powerful new tools for online surveillance and scrutiny can enable institutions—government and business—to become virtual voyeurs. Meanwhile, privacy policies are generally written by lawyers tasked with protecting the business interests of a company or institution. Typically multiple pages of legal jargon must be reviewed and “accepted” before proceeding to use software and services – it’s worth pointing out that a recent study says reading all the privacy policies a person typically encounters in a given year would take 76 days!

Not only are they long, but the concepts and jargon in these Terms and Conditions are not readily accessible to the layperson. This contributes to a sense of vulnerability and guilt, making the average person feel like a "stupid user". Typically we cross our fingers and click "I have read and accept the terms and conditions."

My objection to the "Stupid User" theory is more than a difference of opinion about privacy and responsibility.  It's not restricted to (or even about) expressions of advice or concern. There are, obviously, steps everyone can and should take to secure their personal information against malicious expropriation or exploitation. That said, not doing so – whether by conscious choice or by failure to understand or use the available tools – does not and must not be treated as licence for the appropriation and exploitation of personal information.

Rather than being aimed at the apocryphal "Stupid User", criticism must instead be aimed squarely at the approach and mindset that focuses on the actions, errors, omissions, and above all, responsibility of the individual user to the exclusion of recognizing and identifying the larger issues at work.  This is especially important when those whose actions and roles are being obfuscated are the very same entities who have explicit legal and ethical responsibilities not to abuse user privacy.

the violation IS the harm!

A class action suit filed against Google, Vibrant Media and the Media Innovation Group over tracking cookies and targeted ads was dismissed in a Delaware court in October 2013.  While accepting that the companies in question had collected users' personal information by circumventing browser settings and then sold that information to ad companies, the judge found that the plaintiffs had not shown that they had suffered harm from these practices, and thus that the action could not be sustained.

This is not, by any stretch of the imagination, the first time that the harm requirement has prevented individuals from holding to account those who invade their privacy.  Claims for harm, for so-called speculative harm, and for emotional distress resulting from the injury have all been attempted and dismissed.

In Canada, there is no requirement that injury be established in order to bring forward a claim.  Nevertheless, the question of harm necessarily arises at the damages stage.  In the germinal case of Jones v Tsige this issue is discussed, with the judge recognizing that where no pecuniary loss has been suffered, nominal or moral damages are still available, intended as at least a symbolic recognition that a wrong has been suffered.  After surveying common law and statutory prescriptions for such situations, the court finally arrives [at paras 87 and 88] at the following formula for dealing with intrusion upon seclusion:

In my view, damages for intrusion upon seclusion in cases where the plaintiff has suffered no pecuniary loss should be modest but sufficient to mark the wrong that has been done. I would fix the range at up to $20,000. The factors identified in the Manitoba Privacy Act, which, for convenience, I summarize again here, have also emerged from the decided cases and provide a useful guide to assist in determining where in the range the case falls: 

1.   the nature, incidence and occasion of the defendant’s wrongful act;

2.   the effect of the wrong on the plaintiff’s health, welfare, social, business or financial position;

3.   any relationship, whether domestic or otherwise, between the parties;

4.   any distress, annoyance or embarrassment suffered by the plaintiff arising from the wrong; and

5.   the conduct of the parties, both before and after the wrong, including any apology or offer of amends made by the defendant.

I would neither exclude nor encourage awards of aggravated and punitive damages. I would not exclude such awards as there are bound to be exceptional cases calling for exceptional remedies. However, I would not encourage such awards as, in my view, predictability and consistency are paramount values in an area where symbolic or moral damages are awarded and absent truly exceptional circumstances, plaintiffs should be held to the range I have identified.

California has taken a different approach to the issue.  On 26 September 2013, California Secretary of State Debra Bowen moved forward on a ballot initiative that would amend the California Constitution to recognize a right of privacy in personal information and an attendant presumption of harm when that right is breached.  If the proposal garners 807,615 qualifying signatures by 24 February 2014, the proposition will be included on the November 2014 ballot.

Regardless of HOW the issue is addressed, it is clear that something must be done.  Both personal privacy and a right of remedy to the courts for redress of violations of that privacy are inherently compromised if they are only actionable when demonstrable pecuniary harm has been suffered. 

Culture notebook: The day web promos got personal

Sometimes something in our culture pulls back the thin curtain between Internet users and those reaping and repurposing those users’ personal information.

The new NBC television show The Blacklist has released a deliciously creepy promotional gimmick that ties in with one's Facebook account to generate a very clever, very unsettling interactive experience.

After watching a promotional trailer and a couple of canned videos in which the show's stars seem to be interrogating the viewer, one logs in using one's Facebook account (accepting the NBC terms and conditions, of course) and then watches as friends' profile pictures seamlessly pop into the video on the screen.

The context is answering questions about friends (“which of these friends is most paranoid about privacy” is one of the questions).

The campaign is the brainchild of Toronto digital interactive agency, Secret Location.
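The plumbing behind this kind of tie-in is worth pausing on. The sketch below is a rough, purely illustrative guess at how a 2013-era promo could personalize its video: after the Facebook login step, a campaign backend could query the Graph API for the viewer's friends and their profile pictures. The endpoints reflect the Graph API of that period; the access-token placeholder and the choice of Python are assumptions for illustration only, not details of Secret Location's actual implementation.

```python
# Illustrative sketch only: roughly how a 2013-era Facebook tie-in could fetch
# a logged-in viewer's friends and their profile pictures for use in a
# personalized video. Not the campaign's actual code.
import requests

ACCESS_TOKEN = "TOKEN_OBTAINED_AFTER_FACEBOOK_LOGIN"  # hypothetical placeholder

# Pre-2014 versions of the Graph API returned the user's friend list once the
# user granted the relevant permission during login.
response = requests.get(
    "https://graph.facebook.com/me/friends",
    params={"access_token": ACCESS_TOKEN},
)
friends = response.json().get("data", [])

# Each friend's public profile picture can then be dropped into the video frames.
for friend in friends[:5]:
    picture_url = "https://graph.facebook.com/{}/picture?type=large".format(friend["id"])
    print(friend["name"], picture_url)
```

The point is less the specific calls than the architecture: once a viewer grants the login permission, everything needed to make the experience feel personal is a couple of API requests away.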

Why this is significant

This certainly isn’t the first time a television show has had a web tie-in—be they webisodes <shudder> or other variations.   And it’s not the fact that the underlying paranoid tension has been moved online, nor even that the tension is personalized and exploited by this campaign.

Rather this promo exposes another significant transformation in our relationship with surveillance and data mining. 

There's a weird kind of arc that seems to come with the introduction of new technologies.  They're created for one purpose and rolled out in furtherance of that purpose.  But once they're out—in the wild, so to speak—they become fair game and people start to play with them. It's reminiscent of Rubin's line from the William Gibson short story The Winter Market that "Anything people build, any kind of technology, it's going to have some specific purpose. It's for doing something that somebody already understands. But if it's new technology, it'll open areas nobody's ever thought of before. You read the manual, man, and you won't play around with it, not the same way."  Eventually, this play itself starts to become commodified.

We seem to be arriving at this point with surveillance culture.  Surveillance and data mining have been widely rolled out, purportedly for security purposes.  But once the tech is out there, people start to play with it. There are sites that amalgamate public camera feeds, as well as Puppycam, Pandacam, and Condorcam feeds that draw significant crowds. Then there are artists like the Surveillance Camera Players, and works by Banksy that variously involve, invoke and incorporate surveillance cameras.  Artist Hasan Elahi has turned his experience of being tracked and erroneously added to the "no-fly" list into an ongoing project of self-surveillance; his startling and hilarious TED talk, FBI, Here I Am, describes the project, which is documented on his website.


Along that continuum we see a change – from the pure fun and subversion implicit in the Surveillance Camera Players, to the educational intentions of the San Diego Zoo's cams, and on to the commodification of a feed like Puppycam, which is advertising-supported based on viewer hits.

With this latest jump, our relationship to surveillance and data mining has again changed.  Now the model is not simply commodified – somehow the omnipresence of these things in our lives has become so normal that the advertising hook here isn’t based on the *fact* of the surveillance, but rather on the “fun” of seeing what results from omnipresent surveillance and data mining.

Online Privacy Rights: making it up as we go?

In the September 2013 Bland v. Roberts decision, the Fourth US Circuit Court of Appeals ruled that “liking” something on Facebook is free speech and as such should be afforded legal protection. This is good news, and while there has been extensive coverage of the decision, there are important implications for employers and employees that have not yet been fully explored.

The question is how far can an employer go in using information gleaned from social media sites against present and future employees?

Bland v. Roberts: about the case

The case was brought by employees of a Virginia Sheriff's office whose jobs had been terminated.  The former employees claimed that their terminations were retaliation for their "liking" of the campaign page of the Sheriff's (defeated) opponent during the election.  Even though the action was a single click, the Court determined that it was sufficiently substantive speech to warrant constitutional protection.

Social media checks v. rights of employees

This decision has major implications for the current practice of social media checks of potential and current employees.

More and more employers are conducting online social media background checks in addition to criminal record and credit bureau checks (where permitted).  A 2007 survey of 250 US employers found that 44% of employers used social media to examine the profiles of job candidates.  Survey data from ExecuNet in 2006 shows a similar pattern, with 77% of executive recruiters using web search engines to research candidates and 35% stating that they had ruled candidates out based on the results of those searches.

Legal and ethical implications of social media checks

Federal and provincial human rights legislation in Canada stipulates that decisions about employment (among other things) must not be made on the basis of prohibited grounds of discrimination. Employers and potential employers are required to guard against making decisions on such grounds.  These grounds have been refined through legislation and expanded by court decisions to include age, sex, gender presentation, national or ethnic identity, sexual orientation, race, and family status.


Social media checks can glean information actually shared by a user (accurate or not), but also can fuel inferences (potentially unfair, gendered, classed or sexualized) drawn from online activities. 

For example, review of a given Facebook page may show (depending on the individual privacy settings applied):  statuses, comments from friends and other users, photographs (uploaded by the subject and by others), as well as collected “likes” and group memberships.  These can be used to draw inferences (accurate or not) about political views, sexual orientation, lifestyle and various other factors that could play into decisions about hiring, discipline or a variety of other issues concerning the individual. 

Online space is still private space

The issue of social media profile reviews is becoming an increasingly contentious one. An employer should have no more right to rifle through someone’s private online profile than through one’s purse or wallet. With the Bland v. Roberts ruling and its recognition of Facebook speech as deserving of constitutional protection, important progress has been made in establishing that online privacy is a right and its protection is a responsibility.