ScoreAssured’s unsettling assurances

Hearing a lot of talk about Tenant Assured – an offering from new UK company ScoreAssured.  Pitched as an assessment tool for “basic information” as well as “tenant worthiness,” Tenant Assured scrapes content – including conversations and private messages – from social media sites (so far: Facebook, Twitter, LinkedIn, and Instagram), then runs the data through natural language processing and other analytic software to produce a report. 

The report rates the selected individual on five “traits” – extraversion, neuroticism, openness, agreeableness, and conscientiousness (the “Big Five” of personality psychology).   The landlord never directly views the potential tenant’s posts, but the report will include detailed information such as activity times, particular phrases, pet ownership, etc.

Is this really anything new?  We know that employers, college admissions offices, and even prospective landlords have long used social media reviews as part of their background checks. 

ScoreAssured would say that at least with their service the individual is asked for, and provides, consent.   And that is, at least nominally, true.  But let’s face it – consent requested as part of a tenancy application is comparable to consent for a background check on an employment application: “voluntary” only if you’re willing to go no further in the process.  Saying “no” is perceived as a warning flag that will likely result in not being hired or not getting housing, and declining jobs or housing is not a luxury everyone can afford. 

Asked about the possibility of a backlash from users, co-founder Steve Thornhill confidently asserted that “people will give up their privacy to get something they want.”  That may be the case…but personally I’m concerned that people may be forced to give up their privacy to get something they urgently need (or quite reasonably want).

But let’s presume for a second that the consent is “freely” given. Problems with this model remain: 

  • Reports may include information such as pregnancy, political opinions, age, etc. – information that is protected by human rights codes.  (Thornhill says, “all we can do is give them the information, it’s up to landlords to do the right thing”)
  • Performed identity – our self-presentation on social media sites is constructed for particular (imagined) audiences.  To remove it from that context does not render it presumptively true or reliable – quite the opposite.
  • Invisibility of standards – how are these traits being assessed?  What values are being associated with particular behaviours, phrases and activities, and are they justified?  An individual who currently works in a bar or nightclub might show activity and language causing them to receive negative ratings as an excessive partier, or as unstable, for instance.  In fact, the Telegraph demonstrated this by running reports on its financial journalists (people who, for obvious reasons, tend to use words like fraud and loan rather frequently) and sure enough the algorithm rated them negatively on “financial stability”.  (A toy illustration of how such scoring can misfire follows this list.)
  • Unlike credit bureaus, which are covered by consumer protection laws, this sector is unregulated.  What that means, among other things, is that there is not necessarily any way for an individual to know what is included in their report, let alone challenge its accuracy or completeness. 
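To make the “invisibility of standards” concern concrete, here is a deliberately naive sketch of keyword-based trait scoring.  Score Assured has not disclosed how its model works, so everything below – the word list, the penalty weight, the function – is invented for illustration; the point is simply that context-blind scoring punishes people for the vocabulary of their work.

```python
# A toy, invented example of context-blind keyword scoring - NOT Score
# Assured's actual (undisclosed) model. It shows how a financial journalist,
# who uses risk vocabulary professionally, gets scored as "financially
# unstable" by a rubric that ignores context.

NEGATIVE_FINANCIAL_TERMS = {"fraud", "loan", "debt", "bankrupt", "default"}

def financial_stability_score(posts):
    """Score from 0.0 (worst) to 1.0 (best), penalizing 'risky' vocabulary."""
    words = [w.strip(".,!?").lower() for post in posts for w in post.split()]
    if not words:
        return 1.0
    hits = sum(w in NEGATIVE_FINANCIAL_TERMS for w in words)
    return max(0.0, 1.0 - 10 * hits / len(words))  # arbitrary penalty weight

journalist_posts = [
    "New fraud charges filed against the fund today",
    "Bank loan rates climb again amid default fears",
]
print(financial_stability_score(journalist_posts))  # 0.0 - flagged as unstable
```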

The Washington Post quite correctly identifies this as an exponential change in social media monitoring, writing that “…Score Assured, with its reliance on algorithmic models and its demand that users share complete account access, is something decidedly different from the sort of social media audits we’re used to seeing.  Those are like cursory quality-control checks; this is more analogous to data strip-mining.”

Would it fly here?

Again, we know that background checks and credit checks for prospective tenants aren’t new.  We also know that, in Canada at least, our Information and Privacy Commissioners have had occasion to weigh in on these issues.

In 2004, tenant screening in Ontario suffered a setback when the Privacy Commissioner of Ontario instructed the (then) Ontario Rental Housing Tribunal to stop releasing so much personal information in its final orders. As a result, names are now routinely removed from the orders, making it significantly more difficult to scrape the records en masse.  As for individual queries, unless you already know the names of the parties, the particular rental address and the file number, you will probably not be able to find anything about a person’s history in such matters.

Now, with the release of PIPEDA Report of Findings #2016-002, Feb 19, 2016 (posted 20 May 2016), that line of business is even more firmly shuttered.  There, the OPC investigated a “bad tenant” list maintained by a landlord association. The investigation raised numerous concerns about the list:

  • Lack of consent by individuals for their information to be collected and used for such a purpose
  • Lack of accountability – there was no way for individuals to ascertain whether any information about them was on the bad tenant list, who had placed it there, or what the information was. 
  • At the same time, the landlord association was not assessing the accuracy or credibility of any of the personal information that it collected, placed on the list, and regularly disclosed to other landlords, who then made decisions based upon it.
  • Further, there was no mechanism to ensure the accuracy of the information on the list, and no way for individuals to challenge its accuracy or completeness.

It was the finding of the Privacy Commissioner of Canada that by maintaining and sharing this information, the association was acting as a credit reporting agency, albeit without the requisite provincial licence.  Accordingly, the Commissioner found that the purpose for which the tenant personal information was collected, used or disclosed was not appropriate under s.5(3) of PIPEDA.  The association, despite disagreeing with its characterization as a credit bureau, implemented the recommendations to destroy the “bad tenant” list, to cease collecting information for such a list, and to no longer share personal information about prospective tenants without explicit consent.

This is good news, but the temptation to monetize violations of privacy continues. ScoreAssured has expansive plans.  It anticipates launching (by the end of July 2016) similar “report” products targeted at human resources officers and employers, as well as parents seeking nannies. 

“If you’re living a normal life,” Thornhill asserts, “then, frankly, you have nothing to worry about.”  We all need to ask: who defines “normal”?  And since when is a corporation’s definition of “normal” the standard for access to things as basic to human dignity as employment and housing? 

LinkedIn, Spam and Reputation

A 12 June decision in California regarding LinkedIn illustrates an increasingly nuanced understanding of reputation in the context of online interactions.

When a user sets up a LinkedIn account, they are led through a series of screens that solicit personal information. Although most of the information is not mandatory, LinkedIn’s use of a “meter” indicating the “completeness” of a profile actively encourages sharing.  Among other things, these steps enable LinkedIn to gain access to the new user’s address book, and prompt the user for permission to use that contact information to invite those contacts to connect on LinkedIn. 

The lawsuit alleges that LinkedIn is inappropriately collecting and using this contact information. LinkedIn, pointing to the consent for this use provided by customers, had sought to have the case dismissed.  Judge Koh looked at the whole process and found that while consent was given for the initial email to contacts, LinkedIn also sent two follow-up emails to those contacts who did not respond to the original – and that no user consent was provided for these follow-ups. 

What is interesting about the decision to allow this part of the claim to go forward is Koh’s analysis of harm.  That analysis doesn’t stop with whether LinkedIn had consent for the follow-up emails; rather, she examines what the effect of this practice might be and concludes that it "could injure users' reputations by allowing contacts to think that the users are the types of people who spam their contacts or are unable to take the hint that their contacts do not want to join their LinkedIn network."  Given this, she suggests that users could pursue claims that LinkedIn violated their right of publicity – which protects them from unauthorized use of their names and likenesses for commercial purposes – and violated a California unfair competition law.

Social Media Employment Background Checks: Sounding the Call for Regulation

Digital “footprints” on the internet may have an impact both pre-employment and post-employment, and these impacts may disproportionately affect non-mainstream groups whose information is being assessed against standards that are undisclosed and unregulated.  

A recent study (released 21 November) by Alessandro Acquisti and Christina M. Fong of Carnegie Mellon University explores this phenomenon.  Starting with actual information revealed on social media sites, the team created resumes, professional network profiles, and social network profiles.  The resumes were submitted to 4,000 real job openings with US employers.  The researchers then tweaked the online profiles to reveal either the religion (Muslim or Christian) or the sexual orientation (homosexual or heterosexual) of the candidate, while keeping the profiles otherwise equivalent.   

Interestingly, the study did not find that sexual orientation created significant differences in interview requests, but across the US the “Muslim” candidate received 14% fewer interviews than did the “Christian” applicant.  The variation by religious affiliation was especially pronounced in regions with conservative political indicators (areas that favoured conservative candidates in the last national election).  An online component of the study using the same (manipulated) profiles produced similar responses. 

Further, the study suggests that between one in ten and one in three employers search online for information about job candidates.


This number is at the low end of the scale, but not inconsistent with previous research.  For instance, a 2007 survey of 250 US employers found that 44% of them used social media to look into the backgrounds of job candidates.   2006 survey data from ExecuNet demonstrates a similar pattern, with 77% of executive recruiters using web search engines to research candidates and 35% of those stating that they had ruled candidates out based on the results of those searches.  In 2009, Harris Interactive research showed 45% of employers doing background checks that included social media, while a 2012 CareerBuilder study showed that two in five employers used social media to check out prospective employees; of those who did not, 11% indicated they planned to start. 

Although the Carnegie Mellon study focussed on the effect of two narrow characteristics, the authors expressed concern that particular identifiers may not be the only factors exerting an influence on employment decisions. The mere fact that a candidate chooses to post such information online may itself lead to inferences and conclusions by prospective employers. 

Acquisti & Fong note that prospective employers who inquire about religious affiliation during an interview open themselves to liability under federal or state equal employment opportunity laws—and also that the US Equal Employment Opportunity Commission has publicly cautioned against the use of online searches to investigate protected characteristics. 

Similarly, in Canada liability attaches not to the act of searching itself, but to whether hiring decisions are being made on inappropriate criteria.  In other words, the concern isn’t just the information found in such a search, but also the (potentially unfair, possibly gendered, classed or sexualized) inferences that may be drawn from it.

Though the study focussed on pre-employment checks, the issue of online searches does not become moot after an applicant is hired.  PIPEDA applies to the personal information of employees of federally regulated organizations, and other jurisdictions may cover such information under their own legal frameworks.  This protection is important because online searches may be a tool in disciplinary investigations. 

Self-censorship or meaningful regulation?

The conventional wisdom, of course, is always that individuals must take responsibility for their personal information and should carefully control what information is available online. 

This study is another confirmation that employer (and other institutional) use of online background searches, including social media sites, is an ongoing and increasingly normalized part of the employment relationship.  Given that this information is being accessed and used in pre- and post-employment situations, it is clear that such practices should be examined and regulated. This is necessary to ensure that, at the very least, only information that is correct and relevant will be used, and that the individuals impacted are aware of its collection and use. Mechanisms for the challenge, correction and redress of misinformation need to be established.

This is an emerging and accelerating challenge to individual privacy rights.  Policing the misuse of personal information should not be left as an exclusively individual responsibility – systemic utilization of such information requires a systemic policy and response. 

Playing the Privacy Blame Game, or the Fallacy of the “stupid user”

Meet the “Stupid User”

We’ve all heard it.

Whenever and wherever there are discussions about personal information and reputation related to online spaces—in media reports, discussions, at conferences—it’s there, the spectre of the “stupid user.”

Posting “risky” information, “failure” to use built-in online privacy tools, “failure” to appropriately understand the permanence of online activities and govern one’s conduct and information accordingly—these actions (or lack of action) are characteristic of the “stupid user” shibboleth. 

These days, when the question of online privacy comes up, it seems like everyone is an expert.  Conventional wisdom dictates that once we put information online, to expect privacy is ridiculous.  “That ship has sailed,” people explain: information online is information you’ve released into the wild. There is no privacy, you have no control over your information, and – most damning of all – it’s your own fault! 

Here is a sampling of some recent cautionary tales:

  • Stupid Shopper:  After purchasing an electronic device with data capture capabilities, a consumer returns it to the store.  Weeks later, s/he is horrified to discover that a stranger purchased the same device from the store and found the consumer’s personal information still on the hard drive. Surely only a “stupid user” would fail to delete their personal information before returning the device, right?

  • Stupid Employee: A woman is on medical leave from work due to depression and receiving disability benefits.  While off work, after consultation with her psychiatrist, she engages in a number of activities intended to raise her spirits, including a visit to a Chippendales revue, a birthday party, and a tropical beach vacation.  Her benefits are abruptly terminated; the insurance company justifies this by indicating that, having viewed photos on her Facebook page showing her looking cheerful, it considered her no longer depressed and able to return to work.  I mean, really – if you’re going to post all these happy pictures, surely you were asking for such a result?  Stupid not to protect yourself, isn’t it?

  • Stupid Online Slut: An RCMP corporal is suspended and investigated when sexually explicit photographs in which he allegedly appears are posted to a sexual fetish website.  Surely anyone in a position of responsibility should know better than to take such photos, let alone post them online.  How can we trust someone who makes such a stupid error to do his job and protect us?

How Are These Users “Stupid”?

The fallacy of the stupid user is based on the misconception that individuals bear exclusive and primary responsibility for protecting themselves and their own privacy. This belief ignores an important reality – our actions do not take place in isolation, but rather within a larger context of community, business, and even government. There are laws, regulations, policies and established social norms that must be considered in any examination of online privacy and reputation.

Taking context into consideration, let’s examine these three cautionary tales more closely:

  • Consumer protection: Despite the existence of laws and policies at multiple levels regulating how businesses are required to deal with consumers’ personal information, the focus here shifted to the failure of the individual customer to take extra measures to protect their own information.  Any consideration of whether the law governing this circumstance is sufficient, or of the store’s failure to meet its legal responsibilities or even follow its own stated policies, is sidetracked in favour of demonizing the customer.

  • Patient privacy: An individual, while acting on medical advice, posts information and photos on Facebook – which has a Terms of Use that specifically limits the uses to which information on the site may be put – and loses her disability benefits due to inferences drawn by the insurance company from that information and those photos.  There are multiple players (employer, insurance company, regulators, as well as the employee) and issues (personal health information, business interests, government interests) involved in this situation – but the focus is exclusively on the user’s perceived lack of judgment.  We see little to no consideration of the appropriateness of the insurer’s action. No regard for the fact that social networks have a business model based on eliciting and encouraging disclosure of personal information in order to exploit it, as well as architecture specifically designed to further that model.  Instead, all attention focuses on the individual affected and her responsibilities – the user’s decision to put the information online.

  • Private life: Criminal law, a federal employer, administrative bodies, and the media – all these were implicated when an RCMP officer was suspended and subjected to multiple investigations as well as media scrutiny after sexually explicit photographs in which he allegedly appears were posted on a membership-only sexual fetish website. In this case, yet again, the focus is on the individual – ignoring the fact that even if he participated in, and allowed photographs of, legal, consensual activities in off-work hours, there is no legal or ethical basis for these activities to be open to review and inspection by employers or the media. 

Re-thinking the “Stupid User” Archetype

Powerful new tools for online surveillance and scrutiny enable institutions – government and business – to become virtual voyeurs. Meanwhile, privacy policies are generally written by lawyers tasked with protecting the business interests of a company or institution. Typically, multiple pages of legal jargon must be reviewed and “accepted” before proceeding to use software and services – it’s worth pointing out that a recent study says reading all the privacy policies a person typically encounters in a given year would take 76 days!
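The arithmetic behind a figure like that is easy to reproduce.  The inputs below are my own illustrative assumptions (chosen so they land on the cited number), not the study’s published parameters:

```python
# Back-of-envelope estimate of annual privacy-policy reading time.
# These inputs are illustrative assumptions, not the study's actual figures.
policies_per_year = 1462    # assumed: distinct policies encountered in a year
minutes_per_policy = 25     # assumed: time to read one policy with care
hours_per_work_day = 8

total_hours = policies_per_year * minutes_per_policy / 60
work_days = total_hours / hours_per_work_day
print(f"{total_hours:.0f} hours, or about {work_days:.0f} work days a year")
# -> "609 hours, or about 76 work days a year"
```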

Not only are they long, but the concepts and jargon in these Terms and Conditions are not readily accessible to the layperson. This contributes to a sense of vulnerability and guilt, making the average person feel like a “stupid user”. Typically we cross our fingers and click “I have read and accept the terms and conditions.” 

My quarrel with the “Stupid User” theory is more than a difference of opinion about privacy and responsibility.  It’s not restricted to (or even about) expressions of advice or concern. There are, obviously, steps everyone can and should take to secure their information against malicious expropriation and exploitation. That said, not doing so – whether by conscious choice or by failure to understand or use the tools appropriately – does not and must not be treated as licence for the appropriation and exploitation of personal information.

Rather than blame the apocryphal “Stupid User”, criticism must instead be aimed squarely at the approach and mind-set that focuses on the actions, errors, omissions, and above all, responsibility of the individual user to the exclusion of recognizing and identifying the larger issues at work.  This is especially important when those whose actions and roles are being obfuscated are in fact the very same entities who have explicit legal and ethical responsibilities to not abuse user privacy.

Online Privacy Rights: making it up as we go?

In the September 2013 Bland v. Roberts decision, the US Court of Appeals for the Fourth Circuit ruled that “liking” something on Facebook is free speech and as such should be afforded legal protection. This is good news, and while there has been extensive coverage of the decision, there are important implications for employers and employees that have not yet been fully explored.

The question is: how far can an employer go in using information gleaned from social media sites against present and future employees?

Bland v. Roberts: about the case

The case was brought by employees of a Virginia Sheriff’s office whose jobs had been terminated.  The former employees claimed that their terminations were retaliation for “liking” the campaign page of the Sheriff’s (defeated) opponent during the election.  Even though the action was a single click, the Court determined that it was sufficiently substantive speech to warrant constitutional protection.

Social media checks v. rights of employees

This decision has major implications for the current practice of social media checks of potential and current employees.

More and more employers are conducting online social media background checks in addition to criminal record and credit bureau checks (where permitted).  A 2007 survey of 250 US employers found that 44% of employers used social media to examine the profiles of job candidates.  Survey data from ExecuNet in 2006 shows a similar pattern, with 77% of executive recruiters using web search engines to research candidates and 35% stating that they had ruled candidates out based on the results of those searches.

Legal and ethical implications of social media checks

Federal and provincial human rights legislation in Canada stipulates that decisions about employment (among other things) must not be made on the basis of protected grounds. Employers and potential employers are required to guard against making decisions on discriminatory grounds.  These grounds have been refined through legislation and expanded by court decisions to include age, sex, gender presentation, national or ethnic identity, sexual orientation, race, and family status.   


Social media checks can glean information actually shared by a user (accurate or not), but also can fuel inferences (potentially unfair, gendered, classed or sexualized) drawn from online activities. 

For example, review of a given Facebook page may show (depending on the individual privacy settings applied):  statuses, comments from friends and other users, photographs (uploaded by the subject and by others), as well as collected “likes” and group memberships.  These can be used to draw inferences (accurate or not) about political views, sexual orientation, lifestyle and various other factors that could play into decisions about hiring, discipline or a variety of other issues concerning the individual. 
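By way of illustration only – real profiling systems are statistical, and none of the mappings here come from any actual product – a crude inference engine of this kind can be sketched in a few lines:

```python
# Hypothetical sketch of rule-based inference from "likes" and group
# memberships. The mapping is invented; the point is that inferences are
# generated (and may be acted upon) whether or not they are accurate.

LIKE_TO_INFERENCE = {
    "Craft Beer Lovers": "lifestyle: partier",
    "New Parents Support Group": "family status: young children",
    "Green Party Supporters": "political views: Green",
}

def infer_attributes(likes):
    """Return the (unverified) inferences a reviewer might draw."""
    return [LIKE_TO_INFERENCE[like] for like in likes if like in LIKE_TO_INFERENCE]

profile_likes = ["Craft Beer Lovers", "Green Party Supporters", "Jazz FM"]
print(infer_attributes(profile_likes))
# -> ['lifestyle: partier', 'political views: Green'] - plausible, unverified
```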

Online space is still private space

The issue of social media profile reviews is becoming an increasingly contentious one. An employer should have no more right to rifle through someone’s private online profile than through their purse or wallet. With the Bland v. Roberts ruling and its recognition of Facebook speech as deserving of constitutional protection, important progress has been made in establishing that online privacy is a right and its protection a responsibility.

Privacy settings and the law: making user preferences meaningful

The scope and meaning of privacy settings on Facebook and other social media are still being negotiated. Some recent developments and new case law are helping to clarify when and where “private” means “protected.”

“Kids today”

A May 2013 Pew Internet survey of teens, social media and privacy indicates that the prevailing wisdom that “kids don’t care about privacy” is wrong.  Indeed, it showed 85% of teens aged 12-17 using privacy settings to some extent, with 60% setting their profile to private and another 25% having a partially private profile. A 2010 study on Facebook users and privacy indicates that adults and teens show equal awareness of privacy controls and that adults are even more likely to use privacy tools.  Social media services are providing tools to protect privacy and users are proactively utilizing those tools.

Unfortunately, those privacy settings aren’t always sufficient to protect personal information:

  • Degree of difficulty: for example, within Facebook (as I write this), the default setting is “public”, and though granular controls are provided, some find those controls so confusing that information winds up being de facto public. 

  • Also, though nominally taking place within the Facebook environment, information that is shared with third-party apps moves outside the control of Facebook. Note: this is a rapidly evolving situation – Facebook frequently updates privacy tools in an effort to balance the interests of users, advertisers, and its business model.

“Private” means private: Even more concerning has been the failure of courts to respect privacy settings.  In cases where no privacy settings have been applied, courts have admitted personal information gleaned from Facebook as evidence. For example, in a 2010 Canadian case, Shonn’s Makeovers & Spa v. M.N.R., 2010 TCC 542, the court considered an individual’s profile information on Facebook (information which is de facto public) a decisive factor in ruling the plaintiff was a contractor rather than an employee. 

Profiles & privacy: recent court decisions

We’ve seen an unsettling trend of courts admitting information gathered online – even when an individual user has applied privacy settings – categorizing such information as inherently public in spite of the user’s proactive efforts to protect it. 

  • The Ontario Superior Court, for instance, determined in Frangione v. Vandongen et al, 2010 ONSC 2823: (1) that it is accepted that a person’s Facebook profile may well contain information relevant to a proceeding; and therefore (2) that even when dealing with a profile that has been limited using privacy tools, it is still appropriate for the court to extrapolate from the nature of the social media service that relevant information may be present.    

  • Similarly, in British Columbia the court concluded in Fric v. Gershman, 2012 BCSC 614 that information from a private Facebook profile should be produced.  This decision was grounded in three conclusions: (1) that the existence of a public profile implied the existence of a private profile that could contain relevant information; (2) that since the plaintiff was relying on photographs of herself from before the accident, it was only fair that the defendant have access to photographs from after the accident; and (3) that the fact that the profile was limited to friends was somewhat irrelevant given that she had over 350 “friends”, suggesting the profile was effectively public. 

Canadian courts have tended to allow information from social media profiles to be admitted as relevant and available, regardless of whether privacy settings have been used or not.   

Given this, it is particularly significant that in August 2013 the New Jersey District Court found, in Ehling v. Monmouth-Ocean Hospital Service Corp. et al, Civ. No. 2:11-cv-03305, that non-public Facebook wall posts are protected under the federal Stored Communications Act.  There, Ms. Ehling had applied privacy settings to restrict her information to “friends only”, and a “friend” took screenshots of her content and shared them with another.  The protection was extended to the information despite the fact that the ultimate recipient did not access it via Facebook itself. 

•  •  •

As the law evolves to meet the challenges of new online risks and rewards, a balance needs to be found that provides meaningful respect for privacy and personal information. It is important for “users” – in other words, all of us – to be able to participate in social, cultural, and commercial exchanges online without sacrificing the widely recognized right to privacy we have offline.

Dark shadows: reputation, privacy, and online baby pictures

In her article of 4 September “Why We Don’t Put Pictures of Our Daughter Online”, Amy Webb starts out with the understanding that parents who post information and pictures about their children are contributing to the eventual data shadow of that child.

She talks about the ever-increasing impact of the data shadow – the consequences of such information, from its availability to future friends and acquaintances all the way to potential employers and educational institutions having access to the data and the inferences they may draw.  

Alas, from this eminently sensible recognition, Webb’s argument quickly devolves into a couple of different (and contradictory) approaches.

Slate’s piece on why you shouldn’t post photos of children includes a photo of a cute child you shouldn’t post.

First, she argues that “[t]he easiest way to opt-out is to not create that digital content in the first place.”  I would challenge the assumption that opting out is the best answer.  The existence of a data trail need not be an inherently or exclusively negative thing.  Think of the issue that women seeking to leave marriages encountered historically and may still encounter – the lack of a credit history in the individual woman’s name.  Financial matters being left to the husband can result in these women becoming “invisible”, and this invisibility in turn may mean they are left without financial resources to draw on.  Opting out (I originally wanted to dub this approach a kind of digital asceticism, or cyberAmish, but found that not only were both terms already in use, but both actively encouraged the use of technology rather than its avoidance) doesn’t stop digital presence from being important – having this kind of presence is increasingly an important (perhaps even necessary) precursor to participation in all sorts of arenas.  Instead, it becomes a misguided, even comical scenario reminiscent of Christopher Walken and Sissy Spacek raising Brendan Fraser in the claustrophobic “safety” of a Cold War era bomb shelter (Blast from the Past, 1999).

Questions about how to authenticate identity are increasingly moving away from how individuals authenticate themselves and towards how other people’s relationships with and perceptions of a person act to authenticate that person.   The lack of a digital shadow leaves an individual unable to authenticate herself, and thus denied access to useful resources, communities, relationships and information. 

Interestingly, despite her cry for opting out, it is clear that Webb believes deeply in the importance of a digital presence and does not see that changing.  She writes about the strategy of creating a “digital trust fund” for her daughter – how, after reviewing potential baby names to ensure no (current) negative associations or conflicts,

[w]ith her name decided, we spent several hours registering her URL and a vast array of social media sites. All of that tied back to a single email account, which would act as a primary access key. We listed my permanent email address as a secondary—just as you’d fill out financial paperwork for a minor at a bank. We built a password management system for her to store all of her login information…

The disconnects within her purported technical sophistication are many. 

It’s charmingly naïve to assume that Webb’s “permanent” Verizon email address, the password management system, or the logins set up now will still be operative by the time her daughter reaches an age where her parents deem it appropriate to allow her access to the digital identity they’ve set up.  (There is also a secondary question about the accountability of sites that allow her to set up accounts in another person’s name despite the controls they claim to have in place, but that’s another issue entirely.)

Even should those still be extant, what is the likelihood of the selected social media sites (or social media at all) remaining relevant?  Is this really a digital trust fund or the equivalent of the carefully hoarded newspapers and grocery store receipts that I had to wade through and then discard when we packed up my grandmother’s place?

Finally, her confidence that these steps will erase any baby digital footprints is misguided.  She writes that “[a]ll accounts are kept active but private. We also regularly scour the networks of our friends and family and remove any tags.” Nevertheless, it took less than 15 minutes for a moderately techie friend of mine (using nothing more than Google and opposable thumbs) not only to locate the name of Webb’s daughter, but also the likely genesis of that name (the romantic overseas venue where Webb’s husband popped the question, and a family middle name). 

Bizarrely, Webb’s well-intentioned effort to shield her daughter from the potential future embarrassment of online baby pictures does not extend to self-reflection about her own act of documenting, in detail, her meticulous gathering of data on her daughter – including spreadsheets tracking the baby’s urination and poop. Webb wonders, “It’s hard enough to get through puberty. Why make hundreds of embarrassing, searchable photos freely available to her prospective homecoming dates? If Kate’s mother writes about a negative parenting experience, could that affect her ability to get into a good college?” – completely without irony.

Am I saying that Webb has failed to sufficiently protect her daughter’s identity?  Not really.  I’m saying that in the modern world it is virtually impossible to keep information locked down – trying to do so is a waste of effort and a distraction from the real issue.

So…let’s stop relying on opting out and/or protecting identity as a way to insure against the creation of data shadows, and focus instead on guarding against negative repercussions from the content(s) of those shadows.  Instead of accepting that the use (and negative repercussions from that use) of online data is inevitable once the data exists, let us turn our attention to establishing rights and protections for personal data. Why should information be considered presumptively public and broadly accessible?  What relevance does a blog post from a 14-year-old have to a decision about whether that individual would be a good employee?  Should photographs of activities outside the sphere of school be considered part of the “record” of a student applying to academe? 

Webb says that "[k]nowing what we do about how digital content and data are being cataloged, my husband and I made an important choice before our daughter was born.”   Maybe the issue isn’t how the content and data are being catalogued – maybe it’s about how they’re being used.  Indeed, maybe if we stopped being distracted with futile efforts to “opt out” we might focus on forging effective and meaningful information controls. 

Making regulation meaningful

On 26 August 2013, a San Francisco judge approved a $20 million settlement between Facebook and those of its users whose information had been used in Facebook’s “sponsored stories” advertising campaign without their consent.    You remember “sponsored stories” – the program where a user hitting the “like” button was equated to a commercial endorsement, with the risk that the user’s photo and personal information might even appear in an ad.

The suit was filed in April 2011, alleging that Facebook had not adequately informed its users of the program nor given them an opportunity to opt out.  Although Facebook admits no wrongdoing, the settlement includes not only the $20 million payout but also changes to the language of Facebook’s Terms of Use, and the creation of a means for users to review the use of their information in sponsored stories advertising and to control ongoing use of that information.  The settlement also provides for parental supervision of non-adult users, and the opportunity for those users to opt out entirely.  Education programs will also be developed, notifying users of these new provisions and powers.


Although $20 million sounds like a substantial amount of money, it pales in comparison to the approximately $73 million the court estimates that Facebook earned from those sponsored stories ads that used the information in question. 

It should also be noted that the $20 million will be split among the class action attorneys, privacy organizations, and affected users.  Estimates of the number of affected users range from 614,000 to 125 million; the settlement caps individual payouts at $15, and if every affected user applied for the funds, the payout would amount to about 2 cents per user.  In approving the settlement, the judge acknowledged that individuals would receive only a nominal amount, but indicated that they had failed to prove they were "harmed in any meaningful way".

Meaningful?  Facebook’s business plan is predicated on collecting and commodifying the personal information of its users.  It does so successfully.  And when, even after paying the settlement, Facebook has still cleared $50+ million from violating fair information principles, the settlement shows itself not as “punishment” but merely as a cost of doing business. 

Users need meaningful protections, and that means that violations of the Terms of Use, of the Privacy Policy, or of fair information principles need to be understood as invasive violations and inherently damaging.  As long as infringing on informational self-determination is considered not to constitute meaningful harm, companies will be able to do so with virtual impunity.