Ontario Privacy in Public Spaces Decision: The Need to Recognize Privacy as a Dignity Baseline, Not an Injury-Based Claim

An Ottawa woman has successfully argued for a privacy right in public spaces.  After video of her jogging along the parkway was included in a commercial, she sued for breach of privacy and appropriation of personality.

"The filming of Mme. Vanderveen's likeness was a deliberate and significant invasion of her privacy given its use in a commercial video," the judge added. 

While pleased with the outcome, I’m a little uncomfortable with the presentation (and not sure whether that’s about the claimant or the media).  It appears that the privacy arguments here were grounded in “dignity”, and particularly in self-image.  That is, at the time the video was taken, the claimant was (or felt herself to be) overweight and had only recently taken up jogging after the birth of her children.  She testified that she thought the video made her look overweight and it caused her anxiety and discomfort. As her lawyer stated, “[s]he’s an incredibly fit person. And here’s this video — she looks fine in it — except that when she sees it, she doesn’t see herself. That’s the dignity aspect of privacy that’s protected in the law.”


In response, the company appears also to have focussed on self-esteem and injury.  “They made the argument that if they don’t use someone’s image in a way that is embarrassing or if they don’t portray someone in an unflattering light — here it is just her jogging and it’s not inherently objectionable — that they should be allowed to use the footage.  In contrast, the claimant argued that how someone sees themself is more important than how a third person sees them.”

Why does this bother me?  For the same reason that the damage threshold bothers me: because invasion of privacy is an injury in and of itself. 

By focussing on her self-image and dignity, we’re left to wonder whether, if another individual had been filmed without their consent, had tried to cover their face when they saw the camera (as did the claimant here) and yet was included in the video, a court would come to the same result.  Or is there some flavour of “intentional infliction of emotional suffering” creeping into this decision?  When the judge states that “I find that a reasonable person, this legally fictitious person who plays an important role in legal determinations, would regard the privacy invasion as highly offensive and the plaintiff testified as to the distress, humiliation or anguish that it caused her”, what “injuries” are implicitly being normalized?  The source of the injury seems to be that of being (or believing oneself to look) overweight – is size (and should it be) conflated with humiliation?  The judge concludes that “Mme Vanderveen is concerned about the persona that she presents and about her personal privacy I find that she is not unusually concerned or unduly sensitive about this”, but I find myself wondering about the social context.  Would a man claiming the same distress/humiliation/anguish in this situation have been taken as seriously?

The judge found that "[t]he photographer was not just filming a moving river, he or she was waiting for a runner to jog along the adjacent jogging trail to advertise the possibility of the particular activity in Westboro."  Because of the desire to capture someone running, part of the damages included an estimate of what it would have cost to hire an actor to run along the river.  This is where the privacy breach takes place – the deliberate capture of an individual’s image, and its use without their knowledge or consent for commercial purposes.

The issue isn’t how she felt about herself, nor whether she liked the way she looks in the video – it is the act of making and using the video of her in the first place.  When we focus on the injury to her dignity, we risk misdirecting the focus, making it about the individual rather than about the act of privacy invasion. 

Individuals shouldn’t have to display their wounds in order to be considered worthy of the protection of law.  Rather, law should be penalizing those who do not take care to protect and respect privacy.  That’s how we respect dignity – by recognizing it as an inherent right possessed by persons, with a concurrent right not to have that privacy invaded. 

Speaking of the Right to be Forgotten, Could We Please Forget This Fearmongering?

In the wake of the original Right to be Forgotten (RTBF) decision, citizens had the opportunity to apply to Google for removal from its search index of information that was inadequate, irrelevant, excessive and/or not in the public interest.  Google says that since the decision it has received more than 250,000 requests, and that it has concurred with the request and acted upon it in 41.6% of cases.

In France, even where Google accepted/approved the request for delisting, it implemented that only on specific geographical extensions of the search engine – primarily .fr (France) although in some cases other European extensions were included.  This strategy resulted in a duality where information that had been excluded from some search engine results was still available via Google.com and other geographic extensions.  Becoming aware of this, the President of CNIL (France’s data protection organization) formally gave notice to Google that it must delist information on all of its search engine domains.  In July 2015 Google filed an appeal of the order, citing the critiques that have become all-too-familiar – claiming that to do so would amount to censorship, as well as damaging the public’s right to information.

This week, on 21 September 2015, the President of CNIL rejected Google’s appeal for a number of reasons:

  • In order to be meaningful and consistent with the right as recognized, a delisting must be implemented on all extensions.  It is too easy to circumvent an RTBF that applies only on some extensions, which is inconsistent with the right and creates a troubling situation where informational self-determination is a variable right;

  • Rejecting the conflation of RTBF with information deletion, the President emphasized that delisting does NOT delete information from the internet.  Even when removed from search listings, the information remains directly accessible on the source website;

  • The presumption that the public interest is inherently damaged fails to acknowledge that the public interest is considered in the determination of whether to grant a particular request.  RTBF is not an absolute right – it requires a balancing of the interest of the individual against the public’s right to information; and

  • This is not a case where France is attempting to impose French law universally – rather, CNIL “simply requests full observance of European legislation by non European players offering their services in Europe.”

With the refusal of its (informal) appeal, Google is now required to comply with the original CNIL order.  Failure to do so will result in fines that begin in the $300,000 range but could rise as high as 2-5% of Google’s global operating costs.


First Do No Harm (to Research Interests)

The Council of Canadian Academies released its report Accessing Health and Health-Related Data in Canada on 31 March 2015.  A strong and comprehensive piece of work, this report—which was requested by the Canadian Institutes of Health Research (CIHR)—represents the efforts of a 14-member expert panel, chaired by Andrew K. Bjerring, former President and CEO of CANARIE Inc.  In its report, the panel assesses the current state of knowledge surrounding timely access to health and health-related data—key for both health research and health system innovation.

Despite the work of multiple experts, and input from additional experts (including from the Office of the Privacy Commissioner of Canada) who acted as reviewers, the approach to privacy taken by the report is a disappointing one.

The starting point seems to be the presumption that data must be shared and that its sharing is of great value, while privacy is treated primarily as a matter of regulating data management rather than of actively protecting individuals.  For instance, the value of associating multiple data sources relating to the same individual is privileged over the privacy risks of such association, with the report suggesting that records be linked prior to de-identification in order to preserve their data mining potential – a stance that puts research outcomes ahead of protecting individuals from data mining.  It is difficult to reconcile this with a key finding that “the risk of potential harms resulting from access to data is tangible but low”.
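
To make the tension concrete, here is a minimal sketch of what “linking before de-identification” can look like in practice, assuming a keyed-hash (HMAC) pseudonymization scheme; the field names, sample values and key handling are my own illustration, not anything prescribed by the report:

    import hashlib
    import hmac

    # Hypothetical records from two data sources, keyed by a direct
    # identifier (a health-card number) -- illustrative values only.
    prescriptions = [{"health_card": "1234-567-890", "drug": "metformin"}]
    lab_results = [{"health_card": "1234-567-890", "a1c": 7.1}]

    SECRET_KEY = b"held-by-the-data-custodian"  # never released with the data

    def pseudonymize(record):
        """Replace the direct identifier with a keyed hash so that records
        about the same person stay linkable without disclosing the
        identifier itself to researchers."""
        record = dict(record)
        pid = hmac.new(SECRET_KEY, record.pop("health_card").encode(),
                       hashlib.sha256).hexdigest()[:16]
        record["pseudo_id"] = pid
        return record

    linked = [pseudonymize(r) for r in prescriptions + lab_results]
    # Both records now carry the same pseudo_id: the data-mining potential
    # the report values is preserved -- and so is the profiling risk.
    print(linked)

The very property that makes the linked dataset valuable for research – every record about a person can be joined – is the property that makes it a richer target for re-identification.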

The sharing of health data today in Canada is a fait accompli – the Canada Health Act mandates that medical professionals (doctors, physiotherapists, pharmacists etc.) turn over this sensitive information.  The question is whether de facto de-identification and information sharing is sufficient to protect privacy, and, in fact, whether that protection is even the end goal.  This report and its suggested approaches are aimed more at managing privacy concerns (e.g., via development of a privacy review board similar to a research ethics review) than at ensuring actual privacy for Canadians.

Reclaiming YourSelf

“I felt that my silence implied that I *should* be ashamed….”

I LOVE this project, both the explanatory video and the photo shoot to which it refers.  In it, Danish journalist Emma Holten, who had been victimized by revenge porn, speaks about the importance of consent. 

We have seen the results of public shaming of the sexuality of girls and women – we’ve seen it in the suicide of Amanda Todd, the death of Rehtaeh Parsons.  In the way(s) others use the threat of releasing/sharing such photos to attempt to extort and manipulate girls and women.  And Holten is correct that this is grounded in misogyny, in the hatred and objectification of women. 

It is grounded too in the underlying attitude that female bodies and sexuality are wrong.  If these images, those naked bodies, were not presumptively “shameful”, their revelation could not be leveraged as a threat.  The judgments that perpetuate the sharing of such photos (“you shouldn’t have been such a whore”) reinforce and reiterate that shame. 

Holten’s response – to refuse to be shamed about her body and sexuality – is a powerful one.  The decision to participate in a photo shoot and release those photos publicly – to actively share images of her body, to refuse to feel shamed about her sexuality – is an important one.  By refusing to allow herself to be subverted or silenced, she instead takes the site/sight of her “shame” and transforms it, making of it not only a moment of resistance but a response and refutation.  A celebration and a reclamation.

How Many People Need to Know What Muppet You Are?

I can’t be the only one whose Facebook feed has been flooded with quiz results.  Which Star Wars character are you?  Which sandwich?  Which country?  Which author is your soul mate?  Which random piece of stuff best represents you?


I won’t pretend to be above the fray – I couldn’t resist finding out what Muppet I was (Miss Piggy, for the record) – but it seems as though the floodgates have been opened of late.  Every day I learn more and more about the alter egos of friends and acquaintances.  Only, the thing is, I’m not the only one learning these things.  Nor is the audience restricted to those the individual user chooses to share the results with. 

Back in 2012, the WSJ analyzed the top 100 Facebook apps (at the time) to see what personal information they collected.  The results are sobering – despite the fact that Facebook’s Terms for Developers state that apps can collect only the information they need, the study showed that the norm is in fact for apps to collect all sorts of personal information that cannot conceivably be necessary for the app to function, and to collect it not only from the individual but sometimes from their friends as well.  A similar study performed by the University of Virginia found that of the top 150 applications on Facebook (at that time), 90 percent were demanding (and being given) access to information they didn’t need in order for the app to function.
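
To see how over-collection happens mechanically, here is a minimal sketch of the OAuth permission request a quiz app might construct; the app ID and redirect URI are placeholders, and the permission names are illustrative, since Facebook’s permission vocabulary has changed across API versions:

    from urllib.parse import urlencode

    # What a "Which Muppet are you?" quiz arguably needs to function:
    needed_scopes = ["public_profile"]

    # The kind of list the studies above found apps actually requesting --
    # everything rides along behind a single "Allow" click.
    requested_scopes = [
        "public_profile", "email", "user_birthday", "user_location",
        "user_photos", "user_likes", "user_friends",
    ]

    params = {
        "client_id": "YOUR_APP_ID",  # placeholder, not a real app
        "redirect_uri": "https://example.com/quiz/callback",
        "scope": ",".join(requested_scopes),
    }
    print("arguably needed:", ",".join(needed_scopes))
    print("actually requested:",
          "https://www.facebook.com/dialog/oauth?" + urlencode(params))

Nothing in the protocol ties the scopes requested to the scopes required; the only check is whether the user balks at the consent screen – and most don’t.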

It doesn’t stop with the app developers either.  Know why app developers want to collect as much information as possible?  They say it’s for personalization and customization in order to make your experience of the app as positive as possible, and who am I to say that’s not part of it?  It is not, however, the whole of it.  Not even close.  All this information – self-reported personal and behavioural information – is the lifeblood of the targeted advertising market.  A market that, according to a Direct Marketing Association study, brought in $156 billion in 2012. 

Please know that I’m not complaining – it’s fun to see the results, to learn new things about friends or even just to have silly things to laugh about.  I have to wonder, though, how many people are aware of just how wide an audience they’re sharing this information with, let alone how that information is likely being collected and commodified – and whether they would be as eager to participate in these quizzes if they were aware.

Pitfalls of Personalized Advertising

Imagine someone sends you a promotional calendar.  Do you pay any attention to it?

What if it has your name on it?

What if it has your picture on it?

Perks have long been a sales tactic.  At one end of the spectrum are the luxury items – free tickets, expense account steak dinners and single malt, “training” sessions in exotic locations.  At the other end, there’s still a drive to differentiate, to promote, and to build relationships, but instead of luxuries, sellers turn to personalization. 

When it comes to privacy, personalization can go tragically wrong. 

In the past week, we’ve seen a couple of egregious examples of personalization gone wrong – OfficeMax sending one of its customers promotional mail that included the address line “Daughter Killed in Car Crash”, and Bank of America offering a credit card to “Lisa Is A Slut McIntire”. 

It’s reminiscent of the revelations last year about how Target (and others) collect and analyze customer information, leading to situations where they are marketing to a profiled pregnant teenager before her father even knew she was pregnant!

Or how about when Wired UK sent out uber-personalized covers to selected subscribers and opinion makers back in 2011?  One recipient’s personalized cover apparently included the following information: name, age and birthdate, address, previous address, parents’ address and (apparently mined from his Twitter account) the fact that he had met up with his ex-boyfriend earlier in the month. 

Thing is, we read these news stories or we hear about the incidents and they are intrusive and frightening, but they are also distant.  Far removed from us.  Companies elsewhere profiling people we don’t know.  I’ve talked before about the “stupid user” and the way that line of thinking offloads responsibility onto the individual user rather than onto the organization(s) exploiting the information.  One of its other effects is the insidious way it encourages individuals to buy into it – to presume that a user whose privacy is invaded in this way has brought it upon themselves, has somehow “allowed” this to happen to them – a mindset that implicitly promises that the rest of us are still safe.  Simultaneously highlighting risks and reinforcing the stupid user mindset.

Of course, whether the companies are near or far, whether their victims are known to us or strangers shouldn’t matter.  Doesn’t matter, really.  Though that doesn’t change the fact that when the distance is bridged, when it’s someone or somewhere we know, it hits closer to home. 

This week, I talked to someone who received a calendar in the mail from a printing company with whom his organization had dealt in the past.   A simple promotion, but an opportunity to show off the company’s product and bring the company name to the forefront of a customer’s mind.    To raise their offering out of the ordinary, the company had personalized the calendar.  Again, a fairly simple idea – we’ve all seen the hats, the logo t-shirts and golf shirts, the monogrammed pens.  So this time, the company went one step further – they personalized the calendar not only with his name, but with a picture of him.  A picture that he says they must have gotten from his Facebook even though he’s not Facebook friends with anyone at the company. 

It’s not telling your parents that you’re pregnant.  Or mistakenly name-calling or revealing agonizing personal details in a label.  Nor is it splashing your personal information all over a magazine cover.   Indeed, he says it’s not that bad.  That he probably didn’t have strong enough privacy settings (or any privacy settings) on his photos. 

You see how insidious that stupid user thinking is?    An invasion of privacy and he’s already taking responsibility for it, bringing up the issue of privacy settings.  He doesn’t want to shame the company, make a complaint, or look for compensation.  Despite his discomfort with the invasion, he still holds himself accountable. 

Is that fair?  ‘Cause that’s what happens when we buy into stupid user – we blame each other.  We blame ourselves.  And the companies that mine our personal information, that crawl our online presence(s), that pull our personal photos off Facebook and use them for marketing purposes (in contravention of Facebook’s own Terms of Use) – they get to keep doing it. 

Opting-Out of the Gmail/Google+ email crossover

This week (Thursday 9 January 2014) Google rolled out a Gmail/Google+ cross-platform function that will allow you to send email to people despite not having their email address.  The trick is that Google+ has up-to-date contact information, and so Gmail will “helpfully” send the email for you.

Concerns have been (and should be) raised – about the project itself, and about the decision on Google’s part to launch this with a default setting that allows everyone on Google+ this access, although users can restrict the circles to which it applies or opt out entirely.

There is a quick way to opt out:

  1. Open Gmail on a computer.
  2. Click the gear in the top right corner.
  3. Select Settings.
  4. Scroll down the General tab to the Email via Google+ section.
  5. Click the drop-down menu and choose No one to opt out entirely (or Extended circles/Circles to limit who can email you).
  6. Click Save Changes at the bottom of the page.

 

Changing Our Default Settings: it’s time for a cognitive change

Privacy and “leaving the door open” online:

On 8 November 2013 a federal judge in Vermont ruled that information available through a P2P server is information in which there can be no right of privacy (United States v. Thomas, 2013 U.S. Dist. LEXIS 159914 (D. Vt. November 8, 2013)).

The ruling came in a challenge over the admissibility of information that had been gleaned via automated searches of P2P streams – the defendants claimed that the information had been illegally taken from their computers and therefore should be inadmissible.  The judge did not agree, finding instead that information on a P2P network is de facto public information – if something can be accessed via the Internet then the “door has been left open” and it is considered public. 

This perception of information on the Internet (or accessible via the Internet) as public rather than private is not restricted to analyses of P2P.  In a previous post I explored the legal treatment of information on Facebook, finding that Canadian courts have tended to allow information from social media profiles to be admitted as relevant and available, regardless of whether privacy settings have been used or not.

The not-so-subtle bias of “default” settings

Perhaps, notionally, default settings are only a starting point that users can later change or fine-tune, but in actual fact they exert quite a powerful force. 

This is particularly true in the relatively new world of social media networks. People who when moving into a new residence would not hesitate to change the locks, put curtains on windows, and grow a hedge for privacy may not have the corresponding experience or confidence to take similar measures online.

“Default settings” in the technology world imply a standard configuration and/or best practice. Even Facebook CEO Mark Zuckerberg has asserted that default settings are reflective of broader “social norms” and aim to reflect current standards and values. 

In fact, there are powerful business incentives at work in all aspects of design and implementation of social networking sites. The trend for social network companies to set default settings that favour information sharing helps maximize the commercial potential for these businesses—especially given that users are unlikely to change default settings.  In practice, default settings exert normative force.
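
A toy model makes the point: the platform chooses the default, and the default wins for everyone who never opens the settings page.  The field names and the 10% override rate below are assumptions for illustration, not measured values:

    from dataclasses import dataclass, field

    @dataclass
    class PrivacySettings:
        post_visibility: str = "everyone"  # platform-chosen default
        searchable: bool = True

    @dataclass
    class User:
        name: str
        settings: PrivacySettings = field(default_factory=PrivacySettings)

    users = [User(f"user{i}") for i in range(100)]
    for u in users[:10]:  # suppose only ~10% ever change the setting
        u.settings.post_visibility = "friends"

    public = sum(u.settings.post_visibility == "everyone" for u in users)
    print(f"{public} of 100 users share publicly -- by default, not by choice")

Whoever writes the post_visibility = "everyone" line is, in practice, making the privacy decision for the vast majority of users.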

Placing the exclusive onus on the end-user to be hyper-vigilant is unrealistic and unfair. The presumption that complying with suggested “norm”-based defaults indicates a waiving of privacy expectations is incompatible with the privacy interests of ordinary internet users.

Time for a change

What is called for is a change in our cognitive defaults as a society when it comes to the publicness of information and the Internet.  It is clearly no longer enough to examine individual sites and technologies to see if they can be considered sufficiently private.  Rather, we must invert our cognitive defaults, change our way of thinking so that privacy is our default assumption rather than an exception.

Only when privacy rather than publicity is the expectation, the “norm”, will we have a chance to cultivate a truly privacy-protective environment including user-centric site defaults, respect for user choices, and—in cases like United States v. Thomas—a high threshold for finding that an expectation of privacy has been waived.

 


Playing the Privacy Blame Game, or the Fallacy of the “stupid user”

Meet the “Stupid User”

We’ve all heard it.

Whenever and wherever there are discussions about personal information and reputation related to online spaces—in media reports, discussions, at conferences—it’s there, the spectre of the “stupid user.”

Posting “risky” information, “failure” to use built-in online privacy tools, “failure” to appropriately understand the permanence of online activities and govern one’s conduct and information accordingly—these actions (or lack of action) are characteristic of the “stupid user” shibboleth. 

These days when the question of online privacy comes up it seems like everyone is an expert.  Conventional wisdom dictates that once we put information online, to expect privacy is ridiculous.  “That ship has sailed,” people explain; information online is information you’ve released into the wild.  There is no privacy, you have no control over your information, and – most damning of all – it’s your own fault! 

Here is a sampling of some recent cautionary tales:

  • Stupid Shopper: After purchasing an electronic device with data capture capabilities, a consumer returns it to the store.  Weeks later, s/he is horrified to discover that a stranger purchased the same device from the store and found the consumer’s personal information still on the hard drive.  Surely only a “stupid user” would fail to delete their personal information before returning the device, right?

  • Stupid Employee: A woman is on medical leave from work due to depression and receiving disability benefits.  While off work, after consultation with her psychiatrist, she engages in a number of activities intended to raise her spirits, including a visit to a Chippendales revue, a birthday party, and a tropical beach vacation.  Her benefits are abruptly terminated, and the insurance company justifies this by indicating that, upon viewing photos on her Facebook page showing her looking cheerful, it considered her not to be depressed and able to return to work.  I mean, really – if you’re going to post all these happy pictures, surely you were asking for such a result?  Stupid not to protect yourself, isn’t it?

  • Stupid Online Slut: An RCMP corporal is suspended and investigated when sexually explicit photographs in which he allegedly appears are posted to a sexual fetish website.  Surely anyone in a position of responsibility should know better than to take such photos, let alone post them online.  How can we trust someone who makes such a stupid error to do his job and protect us?

How Are These Users “Stupid”?

The fallacy of the stupid user is based on the misconception that individuals bear exclusive and primary responsibility for protecting themselves and their own privacy.  This belief ignores an important reality – our actions do not take place in isolation but rather within a larger context of community, business, and even government.  There are laws, regulations, policies and established social norms that must be considered in any examination of online privacy and reputation.

Taking context into consideration, let’s examine these three cautionary tales more closely:

  • Consumer protection: Despite the existence of laws and policies at multiple levels regulating how the business is required to deal with consumers’ personal information, the focus here shifted to the failure of the individual customer to take extra measures to protect their own information.  Any consideration of whether the law governing this circumstance is sufficient, or of the store’s failure to meet its legal responsibilities or even follow its own stated policies, is sidetracked in favour of demonizing the customer.

  • Patient privacy: An individual, while acting on medical advice, posts information and photos on Facebook—whose Terms of Use specifically limit how information on the site may be used—and loses her disability benefits due to inferences drawn by the insurance company from that information and those photos.  There are multiple players (employer, insurance company, regulators, as well as the employee) and issues (personal health information, business interests, government interests) involved in this situation – but the focus is exclusively on the user’s perceived lack of judgment.  We see little to no consideration of the appropriateness of the insurer’s action.  No regard for the fact that social networks have a business model based on eliciting and encouraging disclosure of personal information in order to exploit it, as well as architecture specifically designed to further that model.  Instead, all attention focuses on the individual affected and her responsibilities—the user’s decision to put the information online.

  • Private life: Criminal law, a federal employer, administrative bodies, and the media—all these were implicated when an RCMP officer was suspended and subjected to multiple investigations as well as media scrutiny after sexually explicit photographs in which he allegedly appears were posted on a membership-only sexual fetish website.  In this case, yet again, the focus is on the individual, ignoring the fact that even if he participated in and allowed photographs to be taken of legal, consensual activities in off-work hours, there is no legal or ethical basis for these activities to be open to review and inspection by employers or the media. 

Re-Thinking the “Stupid User” Archetype

Powerful new tools for online surveillance and scrutiny can enable institutions—government and business—to become virtual voyeurs. Meanwhile, privacy policies are generally written by lawyers tasked with protecting the business interests of a company or institution. Typically multiple pages of legal jargon must be reviewed and “accepted” before proceeding to use software and services – it’s worth pointing out that a recent study says reading all the privacy policies a person typically encounters in a given year would take 76 days!

Not only are they long, the concepts and jargon in these Terms and Conditions are not readily accessible to the layperson. This contributes to a sense of vulnerability and guilt, making the average person feel like a “stupid user”. Typically we cross our fingers and click “I have read the terms and conditions, accept.”

My objection to the “Stupid User” framing is more than a difference of opinion about privacy and responsibility.  It’s not restricted to (or even about) expressions of advice or concern.  There are, obviously, steps everyone can and should take to secure their information against malicious expropriation/exploitation of personal information.  That said, not doing so – whether by virtue of conscious choice or failure to understand or use tools appropriately – does not and must not be considered license for the appropriation and exploitation of personal information.

Rather than blame the apocryphal “Stupid User”, criticism must instead be aimed squarely at the approach and mind-set that focuses on the actions, errors, omissions, and above all, responsibility of the individual user to the exclusion of recognizing and identifying the larger issues at work.  This is especially important when those whose actions and roles are being obfuscated are in fact the very same entities who have explicit legal and ethical responsibilities to not abuse user privacy.

the violation IS the harm!

A class action suit filed against Google, Vibrant Media and the Media Innovation Group over tracking cookies and targeted ads was dismissed in a Delaware court in October 2013.  While accepting that the companies in question had collected users’ personal information by circumventing browser settings and then sold that information to ad companies, the judge found that the plaintiffs had not shown that they suffered harm as a result of these practices, and thus the action could not be sustained.

This is not, by any stretch of the imagination, the first time that the harm requirement has prevented individuals from holding to account those who invade their privacy.  Claims grounded in actual harm, in so-called speculative harm, and in emotional distress resulting from the invasion have all been attempted and dismissed.

In Canada, there is no requirement that injury be established in order to bring forward a claim.  Nevertheless, the question of harm necessarily arises at the damages stage.  In the germinal case of Jones v Tsige this issue is discussed, with the judge recognizing that where no pecuniary loss has been suffered, nominal or moral damages are still available, intended as at least a symbolic recognition that a wrong has been suffered.  After surveying common law and statutory prescriptions for such situations, the court finally arrives [at paras 87 and 88] at the following formula for dealing with intrusion upon seclusion:

In my view, damages for intrusion upon seclusion in cases where the plaintiff has suffered no pecuniary loss should be modest but sufficient to mark the wrong that has been done. I would fix the range at up to $20,000. The factors identified in the Manitoba Privacy Act, which, for convenience, I summarize again here, have also emerged from the decided cases and provide a useful guide to assist in determining where in the range the case falls: 

1.   the nature, incidence and occasion of the defendant’s wrongful act;

2.   the effect of the wrong on the plaintiff’s health, welfare, social, business or financial position;

3.   any relationship, whether domestic or otherwise, between the parties;

4.   any distress, annoyance or embarrassment suffered by the plaintiff arising from the wrong; and

5.   the conduct of the parties, both before and after the wrong, including any apology or offer of amends made by the defendant.

I would neither exclude nor encourage awards of aggravated and punitive damages. I would not exclude such awards as there are bound to be exceptional cases calling for exceptional remedies. However, I would not encourage such awards as, in my view, predictability and consistency are paramount values in an area where symbolic or moral damages are awarded and absent truly exceptional circumstances, plaintiffs should be held to the range I have identified.
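
The court deliberately prescribes no arithmetic, but a toy sketch can show how the five factors might position a case within the $0–$20,000 range; the scoring scheme below is entirely my own illustration, not anything a court would apply:

    # Cap and factor list from Jones v Tsige, paras 87-88; the scoring
    # scheme is invented for illustration only.
    CAP = 20_000

    FACTORS = [
        "nature, incidence and occasion of the wrongful act",
        "effect on the plaintiff's health, welfare, social, business or financial position",
        "relationship between the parties",
        "distress, annoyance or embarrassment suffered",
        "conduct of the parties, including any apology or offer of amends",
    ]

    def toy_award(scores):
        """scores: one value in [0, 1] per factor, assessed on the evidence."""
        assert len(scores) == len(FACTORS)
        return round(CAP * sum(scores) / len(scores))

    # A hypothetical mid-range assessment -- in Jones v Tsige itself the
    # court fixed the award at $10,000, the mid-point of the range.
    print(toy_award([0.6, 0.4, 0.7, 0.5, 0.3]))  # -> 10000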

California has taken a different approach to the issue.  On 26 September 2013, California Secretary of State Debra Bowen moved forward on a ballot initiative that would amend the California Constitution to recognize a right of privacy in personal information and an attendant presumption of harm when that right is breached.  If the proposal garners 807,615 qualifying signatures by 24 February 2014, the proposition will be included on the November 2014 ballot. 

Regardless of HOW the issue is addressed, it is clear that something must be done.  Both personal privacy and a right of remedy to the courts for redress of violations of that privacy are inherently compromised if they are only actionable when demonstrable pecuniary harm has been suffered. 

Online Privacy Rights: making it up as we go?

In the September 2013 Bland v. Roberts decision, the Fourth US Circuit Court of Appeals ruled that “liking” something on Facebook is free speech and as such should be afforded legal protection. This is good news, and while there has been extensive coverage of the decision, there are important implications for employers and employees that have not yet been fully explored.

The question is how far can an employer go in using information gleaned from social media sites against present and future employees?

Bland v. Roberts: about the case

The case was brought by employees of a Virginia sheriff’s office whose jobs had been terminated.  The former employees claimed that their terminations were retaliation for “liking” the campaign page of the sheriff’s (defeated) opponent during the election.  Even though the action was a single click, the Court determined that it was sufficiently substantive speech to warrant constitutional protection.

Social media checks v. rights of employees

This decision has major implications for the current practice of social media checks of potential and current employees.

More and more employers are conducting online social media background checks in addition to criminal record and credit bureau checks (where permitted).  A 2007 survey of 250 US employers found that 44% used social media to examine the profiles of job candidates.  Survey data from ExecuNet in 2006 shows a similar pattern, with 77% of executive recruiters using web search engines to research candidates and 35% stating that they had ruled candidates out based on the results of those searches.

Legal and ethical implications of social media checks

Federal and provincial human rights legislation in Canada stipulates that decisions about employment (among other things) must not be made on the basis of protected grounds of discrimination.  Employers and potential employers are required to guard against making decisions on discriminatory grounds.  These grounds have been refined through legislation and expanded by court decisions to include age, sex, gender presentation, national or ethnic identity, sexual orientation, race, and family status.   


Social media checks can glean information actually shared by a user (accurate or not), but also can fuel inferences (potentially unfair, gendered, classed or sexualized) drawn from online activities. 

For example, review of a given Facebook page may show (depending on the individual privacy settings applied):  statuses, comments from friends and other users, photographs (uploaded by the subject and by others), as well as collected “likes” and group memberships.  These can be used to draw inferences (accurate or not) about political views, sexual orientation, lifestyle and various other factors that could play into decisions about hiring, discipline or a variety of other issues concerning the individual. 

Online space is still private space

The issue of social media profile reviews is becoming an increasingly contentious one. An employer should have no more right to rifle through someone’s private online profile than through one’s purse or wallet. With the Bland v. Roberts ruling and its recognition of Facebook speech as deserving of constitutional protection, important progress has been made in establishing that online privacy is a right and its protection is a responsibility.

Privacy settings and the law: making user preferences meaningful

The scope and meaning of privacy settings on Facebook and other social media are still being negotiated. Some recent developments and new case law are helping to clarify when and where “private” means “protected.”

“Kids today”

A May 2013 Pew Internet survey of teens, social media and privacy indicates that the prevailing wisdom that “kids don’t care about privacy” is wrong.  Indeed, it showed 85% of teens aged 12-17 using privacy settings to some extent, with 60% setting their profile to private and another 25% having a partially private profile. A 2010 study on Facebook users and privacy indicates that adults and teens show equal awareness of privacy controls and that adults are even more likely to use privacy tools.  Social media services are providing tools to protect privacy and users are proactively utilizing those tools.

Unfortunately, those privacy settings aren’t always sufficient to protect personal information:

  • Degree of difficulty: for example, within Facebook (as I write this), the default setting is “public”, and though granular controls are provided, some find those controls so confusing that information winds up being de facto public. 

  • Also, though nominally taking place within the Facebook environment, information that is shared with third-party apps moves outside the control of Facebook.  Note, this is a rapidly evolving situation—Facebook frequently updates privacy tools in an effort to balance the interests of users, advertisers, and its business model.

 

“Private” means private: Even more concerning has been the failure of courts to respect privacy settings.  In cases where no privacy settings have been applied, courts have admitted personal information gleaned from Facebook as evidence.  For example, in a 2010 Canadian case [Shonn’s Makeovers & Spa v. M.N.R., 2010 TCC 542], the court considered an individual’s profile information on Facebook (information which is de facto public) a decisive factor in ruling the plaintiff was a contractor rather than an employee. 

Profiles & privacy: recent court decisions

We’ve seen an unsettling trend of courts admitting information gathered online—even when an individual user has applied privacy settings—categorizing such information as inherently public in spite of their proactive efforts to protect information. 

  • The Ontario Superior Court, for instance, determined in Frangione v. Vandongen et al, 2010 ONSC 2823, (1) that it is accepted that a person’s Facebook profile may well contain information relevant to a proceeding; and therefore (2) that even when dealing with a profile that has been limited using privacy tools, it is appropriate for the court to extrapolate from the nature of the social media service that relevant information may be present.    

  • Similarly, in British Columbia the court concluded in Fric v. Gershman, 2012 BCSC 614, that information from a private Facebook profile should be produced.  This decision was grounded in three conclusions: (1) that the existence of a public profile implied the existence of a private profile that could contain relevant information; (2) that since the plaintiff was relying on photographs of herself from before the accident, it was only fair that the defendant have access to photographs from after the accident; and (3) that the fact that the profile was limited to friends was somewhat irrelevant given that she had over 350 “friends”, thus suggesting publicness. 

Canadian courts have tended to allow information from social media profiles to be admitted as relevant and available, regardless of whether privacy settings have been used or not.   

Given this, it is particularly significant that in August 2013 the New Jersey District Court found [Ehling v Monmouth-Ocean Hospital Service Corp et al, Civ No. 2:11-cv-03305] that non-public Facebook wall posts are protected under the federal Stored Communications Act.  There, Ms. Ehling had applied privacy settings to restrict her information to “friends only”, and a “friend” took screenshots of her content and shared them with another.  The protection was extended to the information despite the fact that the third party did not access it via Facebook itself. 

•  •  •

As the law evolves to meet the challenges of new online risks and rewards, a balance needs to be found that provides meaningful respect for privacy and personal information.  It is important for “users”—in other words, all of us—to be able to participate in social, cultural, and commercial exchanges online without sacrificing the widely recognized right to privacy we have offline.

 

Dark shadows: reputation, privacy, and online baby pictures

In her article of 4 September “Why We Don’t Put Pictures of Our Daughter Online”, Amy Webb starts out with the understanding that parents who post information and pictures about their children are contributing to the eventual data shadow of that child.

She talks about the ever-increasing impact of the data shadow – the consequences of such information, from its availability to future friends and acquaintances all the way to potential employers and educational institutions having access to the data and the inferences they may draw.  

Alas, from this eminently sensible recognition, Webb’s argument quickly devolves into a couple of different (and contradictory) approaches.

Slate’s piece on why you shouldn’t post photos of children includes a photo of a cute child you shouldn’t post.

First, she argues that “[t]he easiest way to opt-out is to not create that digital content in the first place.”  I would challenge the assumption that opting out is the best answer.  The existence of a data trail need not be an inherently or exclusively negative thing.  Think of the issues that women seeking to leave marriages encountered historically and may still encounter – the lack of a credit history in the name of the individual woman.  Financial matters being left to the husband can result in these women becoming “invisible”, and this invisibility in turn may mean that the women are left without financial resources to draw on.  Opting out (I originally wanted to dub this approach a kind of digital asceticism or cyberAmish, but found that not only were both terms already in use, both actively encouraged the use of technology, not its avoidance) doesn’t stop digital presence from being important – having this kind of presence is increasingly an important (perhaps even necessary) precursor to participation in all sorts of arenas.  Instead, it becomes a misguided, even comical scenario reminiscent of Christopher Walken and Sissy Spacek raising Brendan Fraser in the claustrophobic “safety” of a Cold War era bomb shelter (Blast from the Past, 1999).

Questions about how to authenticate identity are increasingly moving away from how an individual authenticates themselves and towards how other people’s relationships and perceptions of another act to authenticate that other.   Lack of digital shadow leaves an individual unable to authenticate herself, and thus denied access to useful resources, communities, relationships and information. 

Interestingly, despite her cry for opting out, it is clear that Webb believes deeply in the importance of a digital presence and does not see that changing.  She writes about the strategy of creating a “digital trust fund” for her daughter – how, after reviewing potential baby names to ensure no (current) negative associations or conflicts,

[w]ith her name decided, we spent several hours registering her URL and a vast array of social media sites. All of that tied back to a single email account, which would act as a primary access key. We listed my permanent email address as a secondary—just as you’d fill out financial paperwork for a minor at a bank. We built a password management system for her to store all of her login information…

The disconnects within her purported technical sophistication are many. 

It’s charmingly naïve to assume that Webb's "permanent" Verizon email address, the password management system or the logins set up now will still be operative by the time her daughter reaches an age where her parents deem it appropriate to allow her access to the digital identity they’ve set up.  (There is also a secondary question about the accountability of sites that are allowing her to set up accounts in another person’s name despite the controls they claim to have in place, but that’s another issue entirely.)

Even should those still be extant, what is the likelihood of the selected social media sites (or social media at all) remaining relevant?  Is this really a digital trust fund or the equivalent of the carefully hoarded newspapers and grocery store receipts that I had to wade through and then discard when we packed up my grandmother’s place?

Finally, her confidence that these steps will erase any baby digital footprints is misguided.  She writes that “[a]ll accounts are kept active but private. We also regularly scour the networks of our friends and family and remove any tags.”  Nevertheless, it took less than 15 minutes for a moderately techie friend of mine (using nothing more than Google and opposable thumbs) to locate not only the name of Webb’s daughter, but the likely genesis of that name (the romantic overseas venue where Webb’s husband popped the question, and a family middle name). 

Bizarrely, Webb’s well-intentioned effort to shield her daughter from the potential future embarrassment of online baby pictures does not extend to self-reflection about her own act of documenting, in detail, her meticulous gathering of data on her daughter – including spreadsheets tracking urination and poop.  Webb wonders, “It’s hard enough to get through puberty. Why make hundreds of embarrassing, searchable photos freely available to her prospective homecoming dates? If Kate’s mother writes about a negative parenting experience, could that affect her ability to get into a good college?” completely without irony.

Am I saying that Webb has failed to sufficiently protect her daughter’s identity?  Not really.  I’m saying that in the modern world it is virtually impossible to keep information locked down.  Trying to do so is a waste of effort and a distraction from the real issue.

So…let’s stop relying on opting out and/or protecting identity as a way to insure against the creation of data shadows, and focus on guarding against negative repercussions from the content(s) of those shadows.  Instead of accepting that the use (and negative repercussions from that use) of online data is inevitable once the data exists, let us turn attention to establishing rights and protections for personal data.  Why should information be considered presumptively public and broadly accessible?  What relevance does a blog post from a 14-year-old have to a decision about whether that individual would be a good employee?  Should photographs of activities outside the sphere of school be considered part of the “record” of a student applying to academe? 

Webb says that "[k]nowing what we do about how digital content and data are being cataloged, my husband and I made an important choice before our daughter was born.”   Maybe the issue isn’t how the content and data are being catalogued – maybe it’s about how they’re being used.  Indeed, maybe if we stopped being distracted with futile efforts to “opt out” we might focus on forging effective and meaningful information controls. 

 

BYOD: "bring your own device" & privacy


margin notes:

  • The phone you carry with you every day might not be "yours".  It reports on where you go, and the information on it—personal or not—may not be private.

 

  • You wouldn't expect your employer to have the right to watch you when you use a toilet they own – so why is it okay for them to watch what you do on the phone? 

 

  • We are increasingly expected to be available/accessible via technology for work 24/7—as the lines between personal and professional time are blurring, individual privacy is being sacrificed. 

The days of 9-5 jobs seem to be long gone for many of us. 

Emails, phone calls, consultations with clients or with team members who are dealing with clients – these are increasingly a regular feature of life whether we are in the office or out of it, during “office hours” or not.  As Dr. Melissa Gregg notes:

For those in large organisations, mobile and wireless devices deliver new forms of imposition and surveillance as much as they do efficiency or freedom, and with email increasingly considered an entrenched part of organisational culture, ordinary workers are finding it necessary to develop their own tactics to manage a constant expectation that they will be available through the screen, if not in person.

 

Given the constant expectation of availability, employees are increasingly using smartphones, tablets and the like.  It is important to note, however, that just as the work day is now bleeding into personal time, so too are personal and work communications increasingly blended.  Whether the smartphone or tablet is issued by an employer or belongs to the employee, the fact remains that work and personal communications often take place on the same device(s).  This phenomenon is discussed under the term “BYOD” (Bring Your Own Device).  In this piece, that term will be used whether the device is in fact supplied by the employer or is owned by the employee but used for work purposes.

This collapse of the professional and the personal creates issues and concerns for both parties to the relationship.

For employees, the risk is exposing personal information to the employer, as well as the possibility that the employer might use such information for disciplinary or other purposes.  In a recent online survey of employees in the US, UK and Germany, MobileIron found that while 80% of respondents were using personal devices for work, on average only about 30% of employees “completely trust their employer to keep personal information private and not use it against them in any way.”  As for what information was actually accessible to employers, 41% of those surveyed believed that employers had no access to the information on their device, 15% simply weren’t sure what information was accessible, and fully 44% were confident that employers could see data but were unsure what specific data might be accessed or reviewed.  When asked about various types of information that was or might be on the device, the percentage of respondents expressing concern was:

  • Personal email and attachments: 66%
  • Texts: 63%
  • Personal contacts: 59%
  • Photos: 58%
  • Videos: 57%
  • Voicemails: 55%
  • All the information contained in all the mobile apps: 54%
  • Details of phone calls and internet usage: 53%
  • Location: 48%
  • List of all the apps on the device: 46%
  • List of just the apps used for work: 29%
  • The information in the apps used for work: 29%
  • Company email and attachments: 21%
  • Company contacts: 20%

Employers are also at risk. 

Employers are responsible for the security and safeguarding of information, and therefore must first be aware of the issue.  Workplaces may well have policies in place explicitly forbidding the use of work devices for personal communications, but this does not guarantee the policy will be adhered to.  A survey conducted by Aruba Networks found that approximately 17% of 3,500 EMEA employees surveyed had failed to declare their personal devices to their IT department – and it is impossible for IT departments to ensure proper upgrades and security for devices of which they are not even aware.  This of course presumes that IT departments have procedures in place for dealing with such devices and guarding against data loss or leakage – a recent Acronis survey showed that only 31% of companies even mandated a password or key lock on such personal devices, while only 21% wiped company data from the device when an employee left the company. 

That same Acronis survey revealed even more gaps in business understanding and treatment of BYOD – first of all, 30% of organizations were still forbidding personal devices from accessing the network.  Of the others, only 40% had any kind of personal device policy in place.  Finally, whether or not there were actual policies in place, over 80% of organizations revealed that they had not developed or provided any training to employees about BYOD privacy risks.  This failure, of course, helps perpetuate the problem: failure to educate only exacerbates employee ignorance of risks, failure to declare devices to IT, and uncertainty and concern about employer access to information on the device.
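
Given how basic the gaps are – undeclared devices, no mandated passcode, no wipe on departure – even a rudimentary inventory check would surface most of them.  Here is a minimal sketch; the field names and rules are illustrative, not drawn from any particular MDM product:

    from dataclasses import dataclass

    @dataclass
    class Device:
        owner: str
        declared_to_it: bool
        passcode_enabled: bool
        owner_still_employed: bool
        company_data_wiped: bool = False

    def compliance_issues(device):
        """Flag the BYOD gaps the Aruba and Acronis surveys describe."""
        issues = []
        if not device.declared_to_it:
            issues.append("undeclared device: IT cannot patch or secure it")
        if not device.passcode_enabled:
            issues.append("no password or key lock in place")
        if not device.owner_still_employed and not device.company_data_wiped:
            issues.append("departed employee's device still holds company data")
        return issues

    inventory = [
        Device("A", declared_to_it=False, passcode_enabled=True,
               owner_still_employed=True),
        Device("B", declared_to_it=True, passcode_enabled=False,
               owner_still_employed=False),
    ]
    for d in inventory:
        for issue in compliance_issues(d):
            print(f"device {d.owner}: {issue}")

None of this is sophisticated – which is rather the point: the surveys suggest most organizations are not doing even this much.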

 

It should be noted that while the Supreme Court of Canada has not yet had occasion to consider BYOD explicitly, in 2012’s R v Cole decision – which dealt with a teacher’s work computer on which a file of pornography was discovered – the Court was willing to find that the teacher’s subjective expectation of privacy was reasonable in the circumstances (although ultimately the illegally obtained evidence was admitted).  Here again we see a Court wrestling with various public policy issues; writing for the majority, Justice Fish noted that:

[2]    Computers that are reasonably used for personal purposes — whether found in the workplace or the home — contain information that is meaningful, intimate, and touching on the user’s biographical core.  Vis-à-vis the state, everyone in Canada is constitutionally entitled to expect privacy in personal information of this kind.
[3]     While workplace policies and practices may diminish an individual’s expectation of privacy in a work computer, these sorts of operational realities do not in themselves remove the expectation entirely: The nature of the information at stake exposes the likes, interests, thoughts, activities, ideas, and searches for information of the individual user.

What then should an employee do to protect her privacy on a shared workplace/personal use device?  Well, when meeting a friend after work the other day I was surprised when she set down both a BlackBerry and a smartphone on the table, and seemed to alternate between them.  Eventually she explained that the BlackBerry was work-issued and she used it only for that purpose.  The smartphone, on the other hand, was her personal device, and she said the monthly plan was a small price to pay to be sure of the security of both her own personal communications and those of her organization. 

It may be a hassle to tote more than one device around with us, but until there are better policies, procedures and understandings in place around BYOD, it may be the best approach.