Privacy settings and the law: making user preferences meaningful

The scope and meaning of privacy settings on Facebook and other social media are still being negotiated. Some recent developments and new case law are helping to clarify when and where “private” means “protected.”

“Kids today”

A May 2013 Pew Internet survey of teens, social media and privacy indicates that the prevailing wisdom that “kids don’t care about privacy” is wrong.  Indeed, it showed 85% of teens aged 12-17 using privacy settings to some extent, with 60% setting their profile to private and another 25% having a partially private profile. A 2010 study on Facebook users and privacy indicates that adults and teens show equal awareness of privacy controls and that adults are even more likely to use privacy tools.  Social media services are providing tools to protect privacy and users are proactively utilizing those tools.

Unfortunately, those privacy settings aren’t always sufficient to protect personal information:

  • Degree of difficulty: within Facebook, for example, the default setting (as I write this) is “public”, and though granular controls are provided, some find those controls so confusing that information winds up being de facto public.

  • Third-party apps: though nominally shared within the Facebook environment, information that is shared with third-party apps moves outside the control of Facebook. Note that this is a rapidly evolving situation—Facebook frequently updates its privacy tools in an effort to balance the interests of users, advertisers, and its business model.

 

“Private” means private: Even more concerning has been the failure of courts to respect privacy settings.  Where no privacy settings have been applied, courts have admitted personal information gleaned from Facebook as evidence. For example, in a 2010 Canadian case [Shonn's Makeovers & Spa v. M.N.R., 2010 TCC 542], the court treated an individual’s Facebook profile information (information which is de facto public) as a decisive factor in ruling that the individual was a contractor rather than an employee.

Profiles & privacy: recent court decisions

We’ve seen an unsettling trend of courts admitting information gathered online—even when an individual user has applied privacy settings—categorizing such information as inherently public in spite of the user’s proactive efforts to protect it.

  • The Ontario Superior Court, for instance, determined in Frangione v. Vandongen et al., 2010 ONSC 2823, that (1) it is accepted that a person’s Facebook profile may well contain information relevant to a proceeding, and therefore (2) even where a profile has been limited using privacy tools, the court may still infer from the nature of the social media service that relevant information is likely to be present.

  • Similarly, in British Columbia the court concluded in Fric v. Gershman, 2012 BCSC 614, that information from a private Facebook profile should be produced.  This decision was grounded in three conclusions: (1) that the existence of a public profile implied the existence of a private profile that could contain relevant information; (2) that since the plaintiff was relying on photographs of herself taken before the accident, it was only fair that the defendant have access to photographs taken after the accident; and (3) that the profile being limited to friends was somewhat irrelevant given that she had over 350 “friends”, thus suggesting publicness.

Canadian courts, in short, have tended to admit information from social media profiles as relevant and producible, regardless of whether privacy settings have been used.

Given this, it is particularly significant that in August 2013 the New Jersey District Court found [Ehling v Monmouth-Ocean Hospital Service Corp et al, Civ No. 2:11-cv-03305] that non-public Facebook wall posts are protected under the federal Stored Communications Act.  There, Ms. Ehling had applied privacy settings to restrict her information to “friends only”, and a “friend” took screenshots of her content and shared them with another.  The protection extended to the information even though the third party had not accessed it via Facebook itself.

•  •  •

As the law evolves to meet the challenges of new online risks and rewards, a balance needs to be found that provides meaningful respect for privacy and personal information. It is important for “users”—in other words, all of us—to be able to participate in social, cultural, and commercial exchanges online without sacrificing the widely recognized right to privacy we have offline.

 

Dark shadows: reputation, privacy, and online baby pictures

In her 4 September article “Why We Don’t Put Pictures of Our Daughter Online”, Amy Webb starts out with the understanding that parents who post information and pictures of their children are contributing to that child’s eventual data shadow.

She talks about the ever-increasing impact of that data shadow: the availability of such information to future friends and acquaintances, and to potential employers and educational institutions, along with the inferences they may draw from it.

Alas, from this eminently sensible recognition, Webb’s argument quickly devolves into a couple of different (and contradictory) approaches.


Slate’s piece on why you shouldn’t post photos of children includes a photo of a cute child you shouldn’t post.

 

First, she argues that “[t]he easiest way to opt-out is to not create that digital content in the first place.”  I would challenge the assumption that opting out is the best answer.  The existence of a data trail need not be an inherently or exclusively negative thing.  Think of the issue that women seeking to leave marriages encountered historically and may still encounter -- the lack of a credit history in the individual woman’s own name.  When financial matters are left to the husband, these women can become “invisible”, and that invisibility in turn may leave them without financial resources to draw on.  Opting out (I originally wanted to dub this approach a kind of digital asceticism or cyberAmish, but found that not only were both terms already in use, both actively encouraged the use of technology, not its avoidance) doesn’t stop digital presence from being important -- having such a presence is increasingly an important (perhaps even necessary) precursor to participation in all sorts of arenas.  Instead, opting out becomes a misguided, even comical scenario reminiscent of Christopher Walken and Sissy Spacek raising Brendan Fraser in the claustrophobic “safety” of a Cold War era bomb shelter (Blast from the Past, 1999).

Questions about how to authenticate identity are increasingly moving away from how individuals authenticate themselves and towards how other people’s relationships with and perceptions of them act to authenticate them.  The lack of a digital shadow leaves an individual unable to authenticate herself, and thus denied access to useful resources, communities, relationships and information.

Interestingly, despite her cry for opting out, it is clear that Webb believes deeply in the importance of a digital presence and does not see that changing.  She writes about the strategy of creating a “digital trust fund” for her daughter – how, after reviewing potential baby names to ensure no (current) negative associations or conflicts,

[w]ith her name decided, we spent several hours registering her URL and a vast array of social media sites. All of that tied back to a single email account, which would act as a primary access key. We listed my permanent email address as a secondary—just as you’d fill out financial paperwork for a minor at a bank. We built a password management system for her to store all of her login information…

The disconnects within her purported technical sophistication are many. 

It’s charmingly naïve to assume that Webb’s “permanent” Verizon email address, the password management system or the logins set up now will still be operative by the time her daughter reaches an age at which her parents deem it appropriate to give her access to the digital identity they’ve set up.  (There is also a secondary question about the accountability of sites that allow her to set up accounts in another person’s name despite the controls they claim to have in place, but that’s another issue entirely.)

Even should those still be extant, what is the likelihood of the selected social media sites (or social media at all) remaining relevant?  Is this really a digital trust fund or the equivalent of the carefully hoarded newspapers and grocery store receipts that I had to wade through and then discard when we packed up my grandmother’s place?

Finally, her confidence that these steps will erase any digital baby footprints is misguided.  She writes that “[a]ll accounts are kept active but private. We also regularly scour the networks of our friends and family and remove any tags.” Nevertheless, it took less than 15 minutes for a moderately techie friend of mine (using nothing more than Google and opposable thumbs) to locate not only the name of Webb’s daughter but also the likely genesis of that name (the romantic overseas venue where Webb’s husband popped the question, and a family middle name).

Bizarrely, Webb’s well-intentioned effort to shield her daughter from the potential future embarrassment of online baby pictures does not extend to any self-reflection about her own act of documenting, in detail, her meticulous gathering of data on her daughter, including spreadsheets tracking her urination and poop. Completely without irony, Webb wonders: “It’s hard enough to get through puberty. Why make hundreds of embarrassing, searchable photos freely available to her prospective homecoming dates? If Kate’s mother writes about a negative parenting experience, could that affect her ability to get into a good college?”

Am I saying that Webb has failed to sufficiently protect her daughter’s identity?  Not really.  I’m saying that in the modern world it is virtually impossible to keep information locked down, and that trying to do so is a waste of effort and a distraction from the real issue.

So…let’s stop relying on opting out and/or protecting identity as a way to insure against the creation of data shadows, and focus instead on guarding against negative repercussions from the content(s) of those shadows.  Instead of accepting that the use (and negative repercussions from that use) of online data is inevitable once the data exists, let us turn our attention to establishing rights and protections for personal data. Why should information be considered presumptively public and broadly accessible?  What relevance does a blog post from a 14-year-old have to a decision about whether that individual would be a good employee?  Should photographs of activities outside the sphere of school be considered part of the “record” of a student applying to academe?

Webb says that “[k]nowing what we do about how digital content and data are being cataloged, my husband and I made an important choice before our daughter was born.”  Maybe the issue isn’t how the content and data are being catalogued – maybe it’s how they’re being used.  Indeed, if we stopped being distracted by futile efforts to “opt out”, we might focus on forging effective and meaningful information controls.

 

Making regulation meaningful

On 26 August 2013, a San Francisco judge approved a $20 million settlement between Facebook and those of its users whose information had been used in Facebook’s “sponsored stories” advertising campaign without their consent.  You remember “sponsored stories” – the program in which a user’s hitting the “like” button was treated as a commercial endorsement, with the risk that the user’s photo and personal information might even appear in an ad.

The suit was filed in April 2011, alleging that Facebook had neither adequately informed its users of the program nor given them an opportunity to opt out of it.  Although Facebook admits no wrongdoing, the settlement includes not only the $20 million payout but also changes to the language of Facebook’s Terms of Use and the creation of a means for users to review the use of their information in sponsored stories advertising and to control the ongoing use of that information.  The settlement also provides for parental supervision of non-adult users and the opportunity for them to opt out entirely.  Education programs will also be developed to notify users of these new provisions and powers.


Although $20 million sounds like a substantial amount of money, it pales in comparison to the approximately $73 million the court estimates that Facebook earned from those sponsored stories ads that used the information in question. 

It should also be noted that the $20 million will be split among the class action attorneys, privacy organizations, and affected users.  Estimates of the affected class range from 614,000 to 125 million users, and individual payouts are capped at $15; had every affected user applied for the funds, it would have amounted to 2 cents per user.  In approving the settlement, the judge acknowledged that individuals would receive only a nominal amount, but indicated that they had failed to prove that they were "harmed in any meaningful way".

Meaningful?  Facebook’s business plan is predicated on collecting and commodifying the personal information of its users.  It does so successfully.  And when, even after paying the settlement, Facebook has still made $50+ million by violating fair information principles, the settlement shows itself not as “punishment” but merely as a cost of doing business.

Users need meaningful protections, and that means that violations of the Terms of Use, of the Privacy Policy, or of fair information principles need to be understood as invasive and inherently damaging.  As long as infringing on informational self-determination is considered not to constitute meaningful harm, companies will be able to do so with virtual impunity.

 

I'd Tap That (and I have): #creepyNSA

Like it wasn’t creepy enough when it was about “security”

Since June, when the first Snowden revelations about the NSA’s collection of Verizon customers’ call records were published in the Guardian, there have been online jokes about #NSAlovepoems.  However, with the release on August 23, 2013, of initial details about “LoveInt” – jargon for the use of NSA surveillance tools for romantic purposes – it’s looking like the jokes were (or could be) true.

Let’s be clear – this isn’t romantic.  Putting it in terms of “love interests” obscures what’s really going on.  This is STALKING.  Nothing to do with security or counter-terrorism. 

Never ones to let a fertile area lie fallow, the twitterati are all over it.  #LoveInt, #NSAlovepoems and #NSApickuplines have been providing (nervous) laughter all weekend.  Let’s hope that once the laughter fades, the discomfort remains and leaves behind the understanding that this kind of invasion is never OK – not when it comes to stalkers, and not when it is done “for our own good” by government agencies either.

BYOD: "bring your own device" & privacy


margin notes:

  • The phone you carry with you every day might not be “yours”. It reports on where you go, and the information on it—personal or not—may not be private.

 

  • You wouldn't expect your employer to have the right to watch you when you use a toilet they own, so why is it okay for them to watch what you do on the phone?

 

  • We are increasingly expected to be available/accessible via technology for work 24/7—as the lines between personal and professional time are blurring, individual privacy is being sacrificed. 

The days of 9-5 jobs seem to be long gone for many of us. 

Emails, phone calls, consultations with clients or with team members who are dealing with clients – these are increasingly a regular feature of life whether we are in the office or out of it, during “office hours” or not.  As Dr. Melissa Gregg notes:

For those in large organisations, mobile and wireless devices deliver new forms of imposition and surveillance as much as they do efficiency or freedom, and with email increasingly considered an entrenched part of organisational culture, ordinary workers are finding it necessary to develop their own tactics to manage a constant expectation that they will be available through the screen, if not in person.

 

Given the constant expectation of availability, employees are increasingly using smartphones, tablets and the like.  It is important to note, however, that just as the work day is now bleeding into personal time, so too are personal and work communications becoming increasingly blended.  Whether the smartphone or tablet is issued by an employer or belongs to the employee, the fact remains that work and personal communications often take place on the same device(s).  This phenomenon is discussed under the term “BYOD” (Bring Your Own Device).  In this piece, that term will be used whether the device is supplied by the employer or is owned by the employee but used for work purposes.

This collapse of the professional and the personal creates issues and concerns for both parties to the relationship.

For employees, the concern is the risk of exposing personal information to the employer, as well as the possibility that the employer might use such information for disciplinary or other purposes.  In a recent online survey of employees in the US, UK and Germany, MobileIron found that while 80% of respondents were using personal devices for work, on average only about 30% of employees “completely trust their employer to keep personal information private and not use it against them in any way.”  As for what information was actually accessible to employers, 41% of those surveyed believed that employers had no access to the information on their device, 15% simply weren’t sure what information was accessible, and fully 44% were confident that employers could see data but were unsure what specific data might be accessed or reviewed.  When asked about their level of concern for the various types of information that was or might be on the device, respondents indicated concern as follows:

  • Personal email and attachments: 66%
  • Texts: 63%
  • Personal contacts: 59%
  • Photos: 58%
  • Videos: 57%
  • Voicemails: 55%
  • All the information contained in all the mobile apps: 54%
  • Details of phone calls and internet usage: 53%
  • Location: 48%
  • List of all the apps on the device: 46%
  • List of just the apps used for work: 29%
  • The information in the apps used for work: 29%
  • Company email and attachments: 21%
  • Company contacts: 20%

Employers are also at risk. 

Employers are responsible for the security and safeguarding of information, and therefore must first of all be aware of the issue.  Workplaces may well have policies in place explicitly forbidding the use of work devices for personal communications, but this does not guarantee the policy will be adhered to.  A survey conducted by Aruba Networks found that approximately 17% of the 3,500 EMEA employees surveyed failed to declare their personal devices to their IT department – and it is impossible for IT departments to ensure proper upgrades and security for devices of which they are not even aware.  This of course presumes that IT departments have procedures in place for dealing with such devices and guarding against data loss or leakage – a recent Acronis survey showed that only 31% of companies even mandated a password or key lock on such personal devices, while only 21% wiped company data from the device when an employee left the company.

That same Acronis survey revealed even more gaps in business understanding and treatment of BYOD. First, 30% of organizations still forbade personal devices from accessing the network.  Of the others, only 40% had any kind of personal device policy in place.  Finally, whether there were actual policies in place or not, over 80% of organizations revealed that they had not developed or provided any training to employees about BYOD privacy risks.  That failure, of course, perpetuates the problem: a lack of education only exacerbates employee ignorance of the risks, the failure to declare devices to IT, and uncertainty and concern about employer access to information on the device.

 

It should be noted that while the Supreme Court of Canada has not yet had occasion to consider BYOD explicitly, in its 2012 R v Cole decision, which dealt with a teacher’s work computer on which a file of pornography had been discovered, the Court was willing to find that the teacher’s subjective expectation of privacy was reasonable in the circumstances (although the illegally obtained evidence was ultimately admitted).  Here again we see a Court wrestling with various public policy issues; writing for the majority, Justice Fish noted that:

[2]    Computers that are reasonably used for personal purposes — whether found in the workplace or the home — contain information that is meaningful, intimate, and touching on the user’s biographical core.  Vis-à-vis the state, everyone in Canada is constitutionally entitled to expect privacy in personal information of this kind.
[3]     While workplace policies and practices may diminish an individual’s expectation of privacy in a work computer, these sorts of operational realities do not in themselves remove the expectation entirely: The nature of the information at stake exposes the likes, interests, thoughts, activities, ideas, and searches for information of the individual user.

What then should an employee do to protect her privacy on a shared workplace/personal use device?  Well, when meeting a friend after work the other day, I was surprised when she set down both a BlackBerry and a smartphone on the table and alternated between them.  Eventually she explained that the BlackBerry was work-issued and she used it only for that purpose.  The smartphone, on the other hand, was her personal device, and she said the monthly plan was a small price to pay to be sure of the security of both her own personal communications and those of her organization.

It may be a hassle to tote more than one device around with us, but until there are better policies, procedures and understandings in place around BYOD, it may be the best approach.

The First Rule of Fight Club: Understanding Context in Interpreting Online Information

margin notes

  • Whether we use privacy settings or not, each of us has some culturally/subculturally developed expectation of privacy and of the limits of information sharing.  These govern the expectations of privacy we apply to our online thoughts and behaviours.

 

  • We dress and speak differently with friends than in a job interview – those distinctions get lost when online statements are re-viewed out of context.   Does the act of doing something online rather than offline really transform every utterance into a truthful and reliable reflection of who we are?

 

  • Let’s not criminalize thoughtlessness, nor make it into a weight to be carried for the rest of someone’s life.

 

We have all heard the various cautions about watching what we put online for fear of repercussions. 

When we think of those repercussions, however, we most often think of administrative decisions – the impact on a job seeker or university applicant of a racy photo, troubling tweet or similar artifact.   There are other potential repercussions — more immediate, more serious and more lasting. Some troubling examples:

  • An Ontario man who ranted online that the Children’s Aid Society that had apprehended his son deserved a suicide attack was charged criminally.
  • A 15-year-old who tweeted that if George Zimmerman was found not guilty he’d “shoot everyone in Zion…and ill [sic] get away wit [sic] it just like Zimmerman” was arrested and charged with a felony.  Despite law enforcement statements that there was no truth to the threat, the youth was still criminally charged.
  • An 18-year-old who regularly posted his own rap lyrics and videos was charged with “communicating terrorist threats” after posting lyrics that referenced the Boston marathon bombings.  Despite petitions and arguments that the lyrics are protected by the First Amendment guarantee of freedom of speech, the youth remains incarcerated and has been denied bail.
  • An 18-year-old girl was ordered to remove a Facebook status in which she “LOL-ed” about her DUI accident.  Despite her statements that she had no intention of minimizing or making fun of the incident, she was sentenced to two days in jail for contempt of court when she failed to take the post down.
  • Two Britons who had tweeted that they were on their way to the US to “destroy America” were met at the airport, searched and detained by armed guards.  Despite their attempts to explain that “destroy” in this context referred to partying, they were kept overnight and put on a return flight the next day.

In each of these situations, we see statements made on social media being taken out of context by law enforcement and resulting in various degrees of criminal investigation, detention and prosecution. 

 

Context is key

I’ve written before about the problematic presumption that information online is inherently public.  Here I want instead to examine the context within which such information is shared, and then explore the importance of understanding that context in appropriately interpreting the information.


Ibrahim suggests that online networks be thought of as “complicit risk communities where personal information becomes social capital which is traded and exchanged.” If we are to correctly understand the interactions within those spaces, then, it is imperative that we recognize that these utterances, performances, and risks are undertaken within a particular community and are enacted with a view to acquiring social capital within that community.

While observers may believe that any or all information posted online is inherently public, research suggests rather that the absence of (or failure to adhere to) current mainstream privacy standards does not indicate an absence of privacy, or of the desire for privacy, altogether.  Indeed, from historical antecedents through to contemporary youth online engagement, we see community norms that facilitate the recognition and protection of privacy even where no physical or spatial privacy is possible.

One of the fundamental underpinnings of the “if it’s on the Internet it’s public” attitude is the recognition that it’s never that hard for motivated searchers to find information, no matter what precautions or obfuscations are employed by the user.  Questions about the accuracy, reliability or even truthfulness of the information that can be found in this way are left unaddressed by this presumption.

Accordingly, as online engagement increases, so too does the collection of information from those spaces by external bodies, be they employers (current or prospective), educational institutions, lawyers, law enforcement bodies or even the State itself.  Where this information is used by third parties, there is a risk that it will be misinterpreted or accorded more weight than it deserves.

Social Media and Law Enforcement

A 2012 LexisNexis Risk Solutions survey of 1,200 law enforcement professionals reveals the extent to which social media use has permeated law enforcement activities.  At least 50% of respondents use social media at least weekly for law enforcement purposes, and 67% believe that social media use helps not only to solve crimes but to solve them more quickly.  The study shows that social media information and platforms are used for a variety of purposes, including identifying persons, discovering criminal activity in the first place, and gathering evidence.

Research on social media conducted for Public Safety Canada included 11 interviews, conducted in February and March 2011, with persons connected to law enforcement about their use of social media.  In the results detailing the way(s) in which social media may be used in information gathering and investigations, respondents discussed Open Source Intelligence (OSINT) gathering – finding the profile(s) of an already identified suspect, mapping that individual’s interpersonal networks, and collecting other information which can be linked to the individual at issue.  While this may have a positive impact in some cases – such as that of Rodney Bradford, who was being investigated for armed robbery and was exonerated by a Facebook status – the process does result in a largely unregulated collection of personal information, along with the inferences drawn from that information and from the individual’s performances and social connection(s) to others.

"we run the risk of sarcasm, artistic expression, mere frustration or hyperbole resulting in the criminalization of individuals who are thoughtless rather than dangerous."

There are also instances where a particular suspect isn’t identified, but a particular incident is at issue and law enforcement agencies use social networks in order to identify a suspect.  In both the Vancouver, BC, Stanley Cup riot and the London, Ontario, riots, law enforcement interacted with SNSs in novel ways.  While participants were posting pictures and stories on Facebook, Twitter and other networks, police were able to follow the action, identify perpetrators, and lay charges more serious than simple participation (in the cases of those who detailed their actions). Of course, this process isn’t restricted to law enforcement agencies -- in the wake of the Vancouver riot, numerous Facebook groups were set up by users to assist with identifying perpetrators, while others eschewed Facebook and used the web directly to set up similar sites.

Law enforcement does not simply use SNSs reactively -- increasingly, social network sites are monitored proactively, as in the case of the NYPD, which set up a Facebook team to monitor SNSs on an ongoing basis, or the recent revelation of the Department of Homeland Security’s program that includes a list of key words and search terms monitored prophylactically for security purposes.

It is unquestioned, then, that law enforcement can and does use information from social media sites.  My purpose here isn’t to argue that these uses are good or bad – rather, it is to argue that the importance of context in understanding and interpreting this information cannot be overstated.  Identity presentations, connections and interactions are informed by the context in which they occur, and they exist to facilitate interactions and social capital within those spaces.

In the first example given above, Jesse Hirsh was accepted as a “Facebook Expert” in the Ontario criminal trial of a young man who had posted comments on his Facebook page threatening a suicide attack against the Children’s Aid Society that had recently apprehended his infant son.  Hirsh testified that Facebook users “routinely embellish what they say as part of an online persona”, and the accused was ultimately acquitted.  It is imperative that the role of context in shaping the presentation and tone of online information be understood.  If law enforcement agencies are unable to do so, recourse should be had to experts who do understand the role of context and performance in online spaces.  Where charges make it to court, counsel must insist on the right to lead evidence contextualizing the posts admitted into evidence.

The presumptive accuracy and reliability of statements made in online spaces can and should be called into question by appropriately contextualizing the information and its production.    If this is not done, we run the risk of sarcasm, artistic expression, mere frustration or hyperbole resulting in the criminalization of individuals who are thoughtless rather than dangerous. 

Whose Life is it Anyway? Presumptive Publicness

margin notes

  • When we apply privacy settings to online content, that is a clear indication that there are definitive expectations of privacy.  Why are the courts dismissing this?  What role do a site's governing documents (Terms of Use and Privacy Policies) play?  If a site asserts that info should only be used within the site, why are employers, governments and others mining sites for information with virtual impunity?

 

  • Why do so many of us accept a default "public" setting?  If privacy settings aren’t available (or aren’t used), does that automatically indicate that the information must be public?

 

  • The information being scrutinized, captured and exploited isn’t just what an individual has posted – it’s also the photos and posts other people put up, as well as the comments and conversations that take place on an individual's own page or where they have been tagged. Even more worrying is the potential for inferences to be made and conclusions—true or false—to be drawn.

 

Woke up recently to email from a friend – he’d noticed several acquaintances taking photos from a recent Facebook album of his and re-posting them without attribution.  He wondered about the legality of it, but also about whether this was accepted practice on Facebook.  It's an excellent question, and one that required delving into Facebook and its policies.

Facebook’s Statement of Rights and Responsibilities includes the following clause: “[y]ou will not post content or take any action on Facebook that infringes or violates someone else's rights or otherwise violates the law.”

In most if not all cases, that Statement should mean that anyone lifting someone else’s photos and re-publishing them is not just compromising the author’s rights to the image but also failing to comply with Facebook’s terms.  Yet this practice isn’t restricted to this particular friend’s friends…it’s pretty commonplace online, this grabbing and re-publication of images without acknowledgement or attribution.  And it’s not just photos – we see information from social media sites re-used and re-viewed in a variety of circumstances.

Whose data is it?

Why does this happen?  The roots are in a common presumption that information online is public by default.  This isn’t news -- typically, when the question of privacy online comes up, especially privacy on social networks like Facebook, it seems like everyone is an expert.  Conventional wisdom tends toward the notion that once we put information on Facebook, or really anywhere at all, to expect privacy is ridiculous.  That ship has sailed, someone will explain patiently: information online is information that's been released into the wild. You have no privacy; you have no control over the information at all!

Are these self-appointed experts correct?  I’d contend that they are not.  Stephen Colbert talks about the distinction between truth and truthiness, a term he coined and that the American Dialect Society defines as “the quality of preferring concepts or facts one wishes or believes to be true, rather than concepts or facts known to be true”.  It seems to me that the presumptive publicness of information online is one of those “facts”: despite the existence of privacy policies, terms of use and other restrictions on what the information can or should be used for, people somehow prefer to believe that once information is put online it is available to everyone.

Helen Nissenbaum has written extensively about privacy as “contextual integrity”, breaking down the way(s) in which appropriateness is assessed in determining whether and where information is divulged.  A breach of privacy, then, occurs either when information is inappropriately divulged, or when the realms within which the information flows fall outside the normative understandings embedded in the original sharing of that information.

The next time you find yourself snorting dismissively about someone’s failure to protect their information and their subsequent complaint about its collection and use, stop.  Think for a moment about what it would be like if that were you.  Think about the context in which you would’ve uploaded the information, and about the expectations (about who would see it and what would or would not be done with it) that informed your decision to make it available.  And from that perspective, ask yourself if that information is *really* public.  In the offline world, joking posturing for a friend or friends would be understood as such – why do we presume that performing that same act online makes it presumptively “true” and relevant to an assessment of one’s overall character or employability?

Blaming the "stupid user"

The assumption that such information is public grounds not just the feeling(s) of entitlement to images and ideas; it also grounds our response(s) when we hear about such invasions taking place.  It normalizes those invasions, places blame on the “stupid user” and obscures the role of the individual organization or site in creating the issue.  Where a business plan is predicated on access to and use of personal information provided by users, is it fair to focus our analysis of “responsibility” solely on the individual?  Shouldn’t we look at the way(s) in which site design, architecture and default settings facilitate and encourage the sharing of information?  At the intentions behind the information sharing – with whom was the information intended to be shared, and for what purpose(s)?  Let’s examine the constituting documents of the site – the Terms of Use and/or the Privacy Policy – to ensure that they are reasonable and comprehensible to the ordinary user.  And let’s ensure not only that there are privacy tools in place, but that those tools are able to protect privacy in meaningful ways.