Predictive? Or Reinforcing Discriminatory and Inequitable Policing Practices?

Upturn released its report on the use of predictive policing on 31 August 2016.

The report, entitled “Stuck in a Pattern: Early Evidence on Predictive Policing and Civil Rights”, reveals a number of issues with both the technology and its adoption:

  • Lack of transparency about how the systems work
  • Concerns about the reliance on historical crime data, which may perpetuate inequities in policing rather than provide an objective base for analysis
  • Over-confidence on the part of law enforcement and the courts in the accuracy, objectivity and reliability of information produced by these systems
  • Aggressive enforcement as a result of (over)confidence in the data produced by the system
  • Lack of auditing or outcome-measure tracking with which to assess system performance and reliability

The report’s authors surveyed the 50 largest police forces in the USA and ascertained that at least 20 of them were using a “predictive policing system”, with another 11 actively exploring options to do so.  In addition, they note that “some sources indicate that 150 or more departments may be moving toward these systems with pilots, tests, or new deployments.”

Concurrent with the release of the report, a number of privacy, technology and civil rights organizations released a statement setting forth the following arguments (and expanding upon them).

  1. A lack of transparency about predictive policing systems prevents a meaningful, well-informed public debate.
  2. Predictive policing systems ignore community needs.
  3. Predictive policing systems threaten to undermine the constitutional rights of individuals.
  4. Predictive policing systems are primarily used to intensify enforcement rather than to meet human needs.
  5. Police could use predictive tools to identify which officers might engage in misconduct, but most departments have not done so.
  6. Predictive policing systems are failing to monitor their racial impact.

Signatories of the statement included:

  • The Leadership Conference on Civil and Human Rights
  • 18 Million Rising
  • American Civil Liberties Union
  • Brennan Center for Justice
  • Center for Democracy & Technology
  • Center for Media Justice
  • Color of Change
  • Data & Society Research Institute
  • Demand Progress
  • Electronic Frontier Foundation
  • Free Press
  • Media Mobilizing Project
  • NAACP
  • National Hispanic Media Coalition
  • Open MIC (Open Media and Information Companies Initiative)
  • Open Technology Institute at New America
  • Public Knowledge

 

How About We Stop Worrying About the Avenue and Instead Focus on Ensuring Relevant Records are Linked?

Openness of information, especially when it comes to court records, is an increasingly difficult policy issue.  We have always struggled to balance the protection of personal information against the need for public information and for justice (and the courts that dispense it) to be transparent.  Increasingly dispersed and networked information makes this all the more difficult. 

In 1991’s Vickery v. Nova Scotia Supreme Court (Prothonotary), Justice Cory (writing in dissent, but in agreement with the Court on these statements) positioned the issue as being inherently about the tension between the privacy rights of an acquitted individual and the importance of court information and records being open.

…two principles of fundamental importance to our democratic society which must be weighed in the balance in this case.  The first is the right to privacy which inheres in the basic dignity of the individual.  This right is of intrinsic importance to the fulfilment of each person, both individually and as a member of society.  Without privacy it is difficult for an individual to possess and retain a sense of self-worth or to maintain an independence of spirit and thought.
The second principle is that courts must, in every phase and facet of their processes, be open to all to ensure that so far as is humanly possible, justice is done and seen by all to be done.  If court proceedings, and particularly the criminal process, are to be accepted, they must be completely open so as to enable members of the public to assess both the procedure followed and the final result obtained.  Without public acceptance, the criminal law is itself at risk.

Historically the necessary balance has been arrived at less by policy negotiation than by physical and geographical limitations.  When one must physically attend the court house to search for and collect information from various sources, the time, expense and effort required function as their own form of protection.  As Elizabeth Judge has noted, however, “with the internet, the time and resource obstacles for accessing information were dramatically lowered. Information in electronic court records made available over the Internet could be easily searched and there could be 24-hour access online, but with those gains in efficiency comes a loss of privacy.”

At least arguably, part of what we have been watching play out with the Right to be Forgotten is a new variation on these tensions.  Access to these forms of information is increasingly easy and general – all it requires is a search engine and a name.  In return, news stories, blog posts, social media discussions and references to legal cases spill across the screen.  With RTBF and similar proposals, we seek to limit this information cascade to that which is relevant and recent.

This week saw a different strategy employed.  As part of the sentence for David and Collet Stephan – whose infant son died of meningitis due to their failure to access medical care for him when he fell ill – the Alberta court required that notice of the sentence be posted on the Prayers for Ezekiel page and any other social media sites maintained by the Stephans and dealing with their family.  (NOTE: As of 6 July 2016, this order has not been complied with.)

Contrary to some, I do not believe that the requirement to post is akin to a sandwich board, nor that this is about shaming.  Rather, it seems to me that in an increasingly complex information spectrum, it makes sense to insist that the sentence be clearly and verifiably linked to information about the underlying issue.  I agree, instead, that

… it is a clear sign that the courts are starting to respond to the increasing power of social media, and to the ways that criminals can attract supporters and publicity that undermines faith in the legal system. It also points to the difficulties in upholding respect for the courts in an era when audiences are so fragmented that the facts of a case can be ignored because they were reported in a newspaper rather than on a Facebook post.

There has been (and continues to be) a chorus of complaints about RTBF and its supposed potential to frustrate (even censor) the right to KNOW.  Strangely, that same chorus does not seem to be raising its voice in celebration of this decision.  And yet… doesn’t requiring that conviction and sentence be attached to “news” of the original issue address many of the concerns raised by anti-RTBF forces?

 

Data Schadenfreude and the Right to be Forgotten

Oh the gleeful headlines. In the news recently:

Researchers Uncover a Flaw in Europe’s Tough Privacy Rules

NYU Researchers Find Weak Spots in Europe’s “Right to be Forgotten” Data Privacy Law

We are hearing the triumphant cries of “Aha! See? We told you it was a bad idea!”

But what “flaw” did these researchers actually uncover?

The Right to be Forgotten (RTBF), as set out by the European Court of Justice, recognized that search engines are “data controllers” for the purposes of data protection rules, and that under certain conditions (i.e., where specific information is inaccurate, inadequate, irrelevant or excessive), individuals have the right to ask search engines to remove links to personal information about them.

Researchers were able to identify 30-40% of delisted mass media URLs and in so doing extrapolate the names of the persons who requested the delisting—in other words, identify precisely who was seeking to be “forgotten”. 

This was possible because while the RTBF requires search engines to delist links, it does NOT require newspaper articles or other source material to be removed from the Internet.  RTBF doesn’t require erasure – it is, as I’ve pointed out in the past, merely a return to obscurity.  So actually, the process worked exactly as expected. 

Of course, the researchers claim that the law is flawed – but let’s look at the RTBF provision in the General Data Protection Regulation.  Article 17’s Right to Erasure sets out a framework under which an individual may require a data controller to erase personal data relating to them, to abstain from further dissemination of that data, and to obtain from third parties the erasure of any links to, or copies or replications of, that data in listed circumstances.  There are also situations set out that would override such a request and justify keeping the data online – legal requirements, freedom of expression, interests of public health, and the necessity of processing the data for historical, statistical and scientific purposes.

This is the context of the so-called “flaw” being trumpeted. 

Again, just because a search engine removes links to materials, that does NOT mean it has removed the actual materials—it simply makes them harder to find.  There’s no denying that this is helpful—a court decision or news article from a decade ago is difficult to find unless you know what you’re looking for, and without a helpful central search overview such things are more likely to remain buried in the past.  One could consider this a partial return to the days of privacy through obscurity, but “obscurity” does not mean “impenetrable.”  Yes, a team of researchers from New York University Tandon School of Engineering, NYU Shanghai, and the Federal University of Minas Gerais in Brazil was able to find some information.  So too (in the dark ages before search engine indexing) could a determined searcher or team of searchers uncover information through hard work.

So is privacy-through-obscurity a flaw?  A loophole?  A weak spot?  Or is it a practical tool that balances the benefits of online information availability with the privacy rights of individuals? 

It strikes me that the RTBF is working precisely as it should.

The paper, entitled The Right to be Forgotten in the Media: A Data-Driven Study, is available at http://engineering.nyu.edu/files/RTBF_Data_Study.pdf.  It will be presented at the 16th Annual Privacy Enhancing Technologies Symposium in Darmstadt, Germany, in July, and will be published in the proceedings.

 

I Was Just Venting: Liability for Comments on One's Facebook Page

We’ve all used social media to vent about *something* – a bad day, a jerk on the bus, an ex – whatever is enraging us at the moment.  It’s arguable whether we intend those posts to be taken seriously or whether they’re just hyperbole.  The nature of venting is, after all, about release.  It’s cathartic.

But….what if you could be held liable for your venting?

Worse yet, what if you were held liable for what your friends said or did in response?

Sound crazy?  Turns out it’s possible…

Pritchard v Van Nes – picture it – British Columbia… <dissolve scene>

Mr. Pritchard and his family moved in next door to Ms. Van Nes and her family in 2008.  The trouble started in 2011, when the Van Nes family installed a two-level, 25-foot-long “fish pond” with two waterfalls along their rear property line.  The (constant) noise of the water disturbed and distressed the Pritchards, who started out (as one would) by speaking to Ms. Van Nes about their concerns.

Alas, rather than getting better, the situation kept getting worse:

  • the noise of the fish pond was sometimes drowned out by late-night parties thrown by the Van Nes family;

  • when the Pritchards complained about the noise, the next party included a loud explosion that Ms. Van Nes claimed was dynamite;

  • the lack of a fence between the yards meant that the Van Nes children entered the Pritchard yard;

  • the lack of a fence also allowed the Van Nes’ dog to roam (and soil) the Pritchard yard, as evidenced by more than 20 complaints to the municipality; and

  • the Van Nes family (or their guests) parked so as to block the Pritchards’ access to their own driveway.  When the Pritchards reported these obstructions to police, it only exacerbated tensions between the parties.

On June 9, 2014, tensions came to a head.  Ms. Van Nes published a Facebook post that included photographs of the Pritchard backyard:

Some of you who know me well know I’ve had a neighbour videotaping me and my family in the backyard over the summers.... Under the guise of keeping record of our dog...
Now that we have friends living with us with their 4 kids including young daughters we think it’s borderline obsessive and not normal adult behavior...
Not to mention a red flag because Doug works for the Abbotsford school district on top of it all!!!!
The mirrors are a minor thing... It was the videotaping as well as his request to the city of Abbotsford to force us to move our play centre out of the covenanted forest area and closer to his property line that really, really made me feel as though this man may have a more serious problem.

The post prompted 57 follow-ups – 48 of them from Facebook friends, and 9 by Ms. Van Nes herself. 

The narrative (and its attendant allegations) developed from hints to insinuations to flat out statements that Mr. Pritchard was variously a “pedophile”, “creeper”, “nutter”, “freak”, “scumbag”, “peeper” and/or “douchebag”.

Not content to keep this speculation on the Facebook page, a friend of Ms. Van Nes actually shared the post on his own Facebook page and encouraged others to do the same, and further suggested that Ms. Van Nes contact the principal of the school where Mr. Pritchard taught and “use his position as a teacher against him.  I would also send it to the newspaper.  Shame is a powerful tool.”

The following day, that same friend emailed the school principal, attaching the images from Ms. Van Nes’ page, some (one-sided) details of the situation and the warning that “I think you have a very small window of opportunity before someone begins to publicly declare that your school has a potential pedophile as a staff member. They are not going to care about his reasons – they care that kids may be in danger.”

That same day, another community member (Ms. Regnier, whose children had been taught by Mr. Pritchard and who believed him to be an excellent teacher and a valuable resource for the school and community) became aware of Ms. Van Nes’ accusations and went to the school to inform Mr. Pritchard that accusations that he was a pedophile had surfaced on Facebook.  After talking with Mr. Pritchard, she accompanied him to the office to speak with the principal (Mr. Horton), who had already received the email warning about Mr. Pritchard.  Mr. Horton contacted his superior, who, Mr. Horton testified, seemed shocked, asking Mr. Horton whether he believed the allegations; Mr. Horton said he did not, although he testified that he was concerned as the allegations reflected poorly on him and the school.  He testified that if the allegations were substantiated, Mr. Pritchard would have had his teaching license revoked.

Tracking the allegations back to Ms. Van Nes’ Facebook page, Mr. Pritchard’s wife printed out the posts and Ms. Van Nes’ friends list.  They took this material with them to the police station to file a complaint.  Later that evening a police officer arrived at the Pritchard home to collect more details – when the Pritchards attempted to show him the content on Facebook they found that it was no longer accessible. 

Altogether, the post was visible on Ms. Van Nes’ Facebook page for approximately 27 ½ hours.  Its deletion, however, did not remove copies that had been placed on other Facebook pages or shared with others, nor could it prevent the spread of information. 

The effects have been many:

There was at least one child of one of Ms. Van Nes’ “friends” who commented on the posts, who was removed from his music programs. The next time he organized a band trip out of town and sought parent volunteers to be chaperones, he was overwhelmed with offers; that had never previously been the case. He feels that he has lost the trust of parents and students. He dreads public performances with the school music groups. Mr. Pritchard finds he is now constantly guarded in his interactions with students; for example, whereas before he would adjust a student’s fingers on an instrument, he now avoids any physical contact to shield himself from allegations of impropriety. He has cut back on his participation in extra-curricular activities. He has lost his love of teaching; he no longer finds it fun, and he wishes he had the means to get out of the profession. He considered responding to a private school’s advertisement for a summer employment position but did not because of a concern that the posts were still “out there”. Knowing that at least one prominent member of the community saw the posts and commented on them, he feels awkward, humiliated and stressed when out in public, wondering who might know about the Facebook posts and whether they believe the lies that were told about him.
Mr. Pritchard also testified as to how frightened he was that some of the posts suggested he should be confronted or threatened. Mr. Pritchard and his wife both testified that a short time after the posts, their doorbell was rung late at night, and their car was “keyed” in their driveway, an 80 cm scratch that cost approximately $2,000 to repair. His wife also testified to finding large rocks on their driveway and their front lawn.
They also both testified that their two sons, both of whom attended the school where their father teaches, are aware of the Facebook posts, and have appeared to be upset and worried as to the consequences.
Mr. Pritchard testified that he thinks it is unlikely that he could now get a job in another school district. He acknowledged that in fact he has no idea how far and wide the posts actually spread, but he spoke with conviction as to this belief, and I find the fact that he holds this belief to be an illustration of the terrible psychological impact this incident has had.

Who Is Liable and For What?

It’s a horrible tale, and nobody wins.  But what does the court have to say about it?

The claim for nuisance – that is, interference with Mr. Pritchard’s use and enjoyment of his land – is pretty clear.  Both the noise from the waterfall and the two years of the Van Nes’ dog defecating in their yard were clear interferences.  The judge issues a permanent injunction that the waterfall not be operated between 10 pm and 7 am, and awards $2,000 for the waterfall noise and a further $500 for the dog feces.

The real issue here is, of course, the claim for defamation. 

Is Ms. Van Nes responsible for her own defamatory remarks?  Yes she is.  The remarks and their innuendo were defamatory, and were published to at least the persons who responded, likely to all 2059 of her friends, and (given Ms. Van Nes’ failure to use any privacy settings) viewable to any and all Facebook users.

Is Ms. Van Nes liable for the republication of her defamatory remarks by others?  Republication, in this case, happened both on Facebook and via the letter to the school principal.  Yes she is, because she authorized those republications.  Looking at all the circumstances here, especially her frequent and ongoing engagement with the comment thread, the judge finds that Ms. Van Nes had constructive knowledge of the comments of Mr. Parks (the friend who shared the post and emailed the principal) soon after they were made.

Her silence, in the face of Mr. Parks’ statement, “why don’t we let the world know”, therefore effectively served as authorization for any and all republication by him, not limited to republication through Facebook. Any person in the position of Mr. Parks would have reasonably assumed such authorization to have been given. I find that the defendant’s failure to take positive steps to warn Mr. Parks not to take measures on his own, following his admonition to “let the world know”, leads to her being deemed to have been a publisher of Mr. Parks’ email to Mr. Pritchard’s principal, Mr. Horton.

Is Ms. Van Nes liable for defamatory third-party Facebook comments?  Again, the answer is yes.  The judge sets out the test for such liability as: (1) actual knowledge of the defamatory material posted by the third party; (2) a deliberate act or deliberate inaction; and (3) power and control over the defamatory content.  If these three factors can be established, it can be said that the defendant has adopted the third-party defamatory material as their own.

In the circumstances of the present case, the foregoing analysis leads to the conclusion that Ms. Van Nes was responsible for the defamatory comments of her “friends”. When the posts were printed off, on the afternoon of June 10th, her various replies were indicated as having been made 21 hours, 16 hours, 15 hours, 4 hours, and 3 hours previously. As I stated above, it is apparent, given the nine reply posts she made to her “friends”’ comments over that time period, that Ms. Van Nes had her Facebook page under, if not continuous, then at least constant viewing. I did not have evidence on the ability of a Facebook user to delete individual posts made on a user’s page; if the version of Facebook then in use did not provide users with that ability, then Ms. Van Nes had an obligation to delete her initial posts, and the comments, in their entirety, as soon as those “friends” began posting defamatory comments of their own. I find as a matter of fact that Ms. Van Nes acquired knowledge of the defamatory comments of her “friends”, if not as they were being made, then at least very shortly thereafter. She had control of her Facebook page. She failed to act by way of deleting those comments, or deleting the posts as a whole, within a reasonable time – a “reasonable time”, given the gravity of the defamatory remarks and the ease with which deletion could be accomplished, being immediately. She is liable to the plaintiff on that basis.

With all three potential forms of liability for defamation established, Mr. Pritchard is awarded $50,000 in general damages and an additional $15,000 in punitive damages.

But I Was Just Venting…

A final thought from the judgement – one that takes into account the medium and the dynamic of Facebook. 

I would find that the nature of the medium, and the content of Ms. Van Nes’ initial posts, created a reasonable expectation of further defamatory statements being made. Even if it were the case that all she had meant to do was “vent”, I would find that she had a positive obligation to actively monitor and control posted comments. Her failure to do so allowed what may have only started off as thoughtless “venting” to snowball, and to become perceived as a call to action – offers of participation in confrontations and interventions, and recommendations of active steps being taken to shame the plaintiff publically – with devastating consequences. This fact pattern, in my view, is distinguishable from situations involving purely passive providers. The defendant ought to share in responsibility for the defamatory comments posted by third parties, from the time those comments were made, regardless of whether or when she actually became aware of them.

So go ahead. Vent all you want. But your responsibility may extend further than you think…proceed with caution.

Revenge Porn: In Ontario, You’ll Pay With More Than Karma

Doe 464533 v N.D. is a January 2016 decision from the Ontario Superior Court of Justice that makes a strong statement that those who engage in revenge porn will pay with more than just karma points!

The case involved an 18-year-old girl, away at university but still texting, phoning, emailing and otherwise connecting with her ex-boyfriend. Though the formal relationship had ended in spring, they continued to see each other “romantically” through the summer and into that autumn.  These exchanges included him sending multiple intimate photos and videos of himself, and requesting the same of her. 

After months of pressure, she made an intimate video, but was still uncomfortable sharing it.  She texted him to make her misgivings clear, and he convinced her to relent, reassuring her that no one else would ever see the video.  Eventually, despite those misgivings, she sent the video to him.

Shortly thereafter, she learned that her ex had, on the same day he received it, posted the video to a pornographic website.  He was also sharing it with some of their high school classmates.  She was devastated and humiliated by the discovery, which led to emotional and physical distress requiring ongoing counselling, as well as academic and social suffering.

The video was online for approximately three weeks before his mother (hearing of the incident from the victim) forced him to remove it.  As the Judge points out, “[t]here is no way to know how many times it was viewed or downloaded during that time, or if and how many times it may have been copied onto other media storage devices…or recirculated.”

The damage is not, of course, limited to that three-week period – it is persistent and ongoing.  She continues to struggle with depression and anxiety.  She lives with the knowledge that former classmates and community members are aware of the video (and in some cases have viewed it), something that has caused harm to her reputation. In addition, she is concerned about the possibility that the video may someday resurface and have an adverse impact on her employment, her career, or her future relationships.

 

The police declined to get involved due to the ages of the parties, but she did bring a civil action against him.

She was successful on her claim of breach of confidence.

She was successful on her claim of intentional infliction of mental distress.

But where it gets really interesting is in Justice Stinson’s assessment of the invasion of privacy claim.

Building upon the recognition of a tort of intrusion upon seclusion in Ontario, he returns to that analysis to locate the injury here as one not of intrusion but of public disclosure of embarrassing facts.  

Normally, the three factors necessary to show such a tort would be:

  1. The disclosure must be a public one;
  2. The facts disclosed must be private; and
  3. The matter made public must be one which would be offensive and objectionable to a reasonable man of ordinary sensibilities.

It is incontrovertible that the video was publicly disclosed. The subject matter of the video – apparently her masturbating – is certainly private.  The first two elements are made out. 

Here is where the judge wins my heart – he refuses to layer sexual shame on an already victimized plaintiff.  Instead of focussing on the subject of the video (her masturbating), he modifies the final element so that it asks whether either the matter publicized or the act of publication itself would be highly offensive to a reasonable person.

In this case, it is the behaviour of the ex that is offensive:

…the defendant posted on the Internet a privately-shared and highly personal intimate video recording of the plaintiff. I find that in doing so he made public an aspect of the plaintiff’s private life. I further find that a reasonable person would find such activity, involving unauthorized public disclosure of such a video, to be highly offensive. It is readily apparent that there was no legitimate public concern in him doing so.

Justice Stinson issues an injunction directing the ex to immediately destroy any and all intimate images or recordings of the plaintiff in whatever form they may exist that he has in his possession, power or control.  A further order permanently prohibits him from publishing, posting, sharing or otherwise disclosing in any fashion any intimate images or recordings of her.  Finally, he is permanently prohibited from communicating with her or members of her immediate family, directly or indirectly.

As for damages, the judge notes that her claim is procedurally limited to $100,000.  He then considers the following:

  • The circumstances of the victim at the time of the events, including factors such as age and vulnerability. The plaintiff was 18 years old at the time of the incident, a young adult who was a university student. Judging by the impact of the defendant’s actions, she was a vulnerable individual;
  • The circumstances of the assaults including their number, frequency and how violent, invasive and degrading they were. The wrongful act consisted of uploading to a pornographic website a video recording that displayed intimate images of the plaintiff. The defendant’s actions were thus very invasive and degrading. The recording was available for viewing on the Internet for some three weeks. It is impossible to know how many times it was viewed, copied or downloaded, or how many copies still exist elsewhere, out of the defendant’s (and the plaintiff’s – and the Court’s) control. As well, the defendant showed the video to his friends, who were also acquaintances of the plaintiff. Although there was no physical violence, in these circumstances, especially in light of the multiple times the video was viewed by others and, more importantly, the potential for the video still to be in circulation, it is appropriate to regard this as tantamount to multiple assaults on the plaintiff’s dignity;
  • The circumstances of the defendant, including age and whether he or she was in a position of trust. The defendant was also 18 years of age. He and the plaintiff had been in an intimate – and thus trusting – relationship over a lengthy period. It was on this basis, and on the basis of his assurances that he alone would view it, that he persuaded her to provide the video. His conduct was tantamount to a breach of trust; and
  • The consequences for the victim of the wrongful behaviour including ongoing psychological injuries. As described above, the consequences were emotionally and psychologically devastating for the plaintiff and are ongoing.

He awards:

General damages:  $50,000

Aggravated damages (where injury was aggravated by the manner in which it was done):  $25,000

Punitive damages:  $25,000         

With pre-judgement interest and her costs for the action, the full award is $141,708.03.
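
Put another way (my arithmetic, not a figure the court breaks out itself): the three heads of damages sum exactly to the procedural cap, and the balance of the total reflects pre-judgement interest and costs.

$$\$50{,}000 + \$25{,}000 + \$25{,}000 = \$100{,}000$$
$$\$141{,}708.03 - \$100{,}000 = \$41{,}708.03 \text{ in pre-judgement interest and costs}$$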

Is it enough to make up for the violation?  No, but I can’t imagine any amount would be.  I hope it’s enough to make the next malicious ex think twice before engaging in this type of behaviour.

On top of that, she gets validation.

She gets recognition that NOTHING she did was inappropriate or offensive.

The judge commends her for earning her undergraduate degree despite these events, as well as for her courage and resolve in pursuing the remedies to which she is entitled. Further, he lets her know that through that courage, she has set a precedent that will allow others who are similarly victimized to seek recourse.


Peeple: the Commodification of Social Control?

From www.forthepeeple.com

Meet Peeple

We are a concept that has never been done before in a digital space that will allow you to really see how you show up in this world as seen through the eyes of your network.

Peeple is an app that allows you to rate and comment about the people you interact with in your daily lives on the following three categories: personal, professional, and dating.

Peeple will enhance your online reputation for access to better quality networks, top job opportunities, and promote more informed decision making about people.

My first interest in reputation in online spaces came from a particular kind of knowledge – knowledge that any girl who went to high school has – that “reputation” and “dating” are never a good combination. Such evaluations are never as objective or truthful as they purport to be, and never without a cost to those who are being assessed/rated.  Maybe everyone knows this, but I’m inclined to think that some of us—those who by virtue of our Otherness are inevitably the object of critical review—internalize that knowledge at a much deeper level.

Given this, I confess that I smiled ruefully when I saw a photo of the two founders of Peeple—the self-described “positivity app launching in November 2015” that purports to enable users to rank people the way other apps (think Yelp) rank restaurants and, say, public restrooms. Peeple’s founders are blondish, youngish, and conventionally attractive.

[Photo: Nicole McCullough and Julia Cordray, founders of Peeple]

I’m not noting their appearance to be dismissive…but I am suggesting (fairly or not) that those who are least likely to have been socially marginalized and ostracized are also perhaps most likely to believe that an app designed to rate and comment on other people could “spread love and positivity.”

Frenzied media coverage has raised many of the most obvious problems with this business idea, including:

  • Users can set up profiles for others without the consent of the person being rated
  • Ratings are inherently subjective
  • There aren’t credible safeguards for accuracy or protections from bias
  • It will be up to a combination of automated software and human site administrators to determine if feedback is “positive” or “negative”, whether to publish it or remove it, etc.
  • It presumes, without evidence, that crowd-sourced opinions are reliable
  • The fundamental concept is an invasion of privacy and a threat to reputation
  • The approach objectifies human beings and commoditizes interpersonal relationships

These are all important concerns, but I’d like to take a step back and look at the larger potential impact of Peeple: creating a state of perpetual surveillance that itself enforces and reinforces particular (mainstream) expectations of behaviour.

This project brings to mind the Panopticon—an architectural concept for institutional buildings, designed so that inmates/inhabitants can be observed from a central point without knowing whether they are being watched at any given moment. It’s based on philosopher Jeremy Bentham’s assertion that “the more constantly the persons to be inspected are under the eyes of the persons who should inspect them, the more perfectly will the purpose of the establishment have been attained.  Ideal perfection, if that were the object, would require that each person should actually be in that predicament, during every instant of time.” (Jeremy Bentham, The Panopticon Writings, ed. Miran Božovič, Letter I).

Philosopher Michel Foucault later elaborated upon Bentham’s notion of the Panopticon, seeing in it a metaphor for the exercise of power in modern societies.  He explains that “…it arranges things in such a way that the exercise of power is not added on from the outside, like a rigid, heavy constraint, to the functions it invests, but is so subtly present in them as to increase their efficiency by itself increasing its own points of contact.” (See Michel Foucault, Discipline and Punish: The Birth of the Prison).

What does this have to do with Peeple?  What overarching control could there be when the app itself clearly states that it is simply sharing feedback?  These reports aren’t happening in a vacuum – inevitably, ratings are made with reference to a shared community standard – setting and reinforcing community norms and reviewing whether or not individuals have appropriately met or performed those standards.

Traditionally, surveillance within the panopticon was intended to impose and enforce chosen norms/rules/behaviours.  Its goal was the production of “docile bodies” – to remove the need to police behaviour by force, replacing it with a state of vulnerability, induced by the perception of perpetual visibility, in which individuals police their own behaviours towards the desired outcome.

With Peeple, we run that same risk of creating docile bodies and enforcing desired behaviours – knowing that information is collected and shared will (perhaps inevitably) influence the behaviour of an individual who is subject to those reviews. Anyone who wants to continue active and productive participation in a community must be aware of this information repository and the standards that it maintains and enforces.  The collection and sharing of reputation becomes in essence a form of social control. 

Worryingly, in the case of Peeple, it’s a form of social control that is both privately administered and inherently commodified.

a dismayed yelp -- shouldn't we have some rights to our own reputation?

A case against Yelp got dismissed this week.  It’s an interesting one too – a group of businesses claimed that Yelp manipulates ratings against businesses that do not purchase advertising on Yelp.

Yelp bills itself as an “online urban guide” – a crowdsourced local business review site.  Consumers rate their experience(s) with a business, and the accumulated ratings and experiences are available to anyone (though you’ll need an account to actually submit a review).  The company itself isn’t particularly local, though – with over 130 million unique visitors per month in over 20 languages, Yelp’s Alexa rank for May 2014 was a more than respectable 28.  This is a company that may speak local but has definite range and scope for the exercise of power.

Yelp has long been dogged by allegations that it manipulates the rankings of businesses – either that it will remove negative reviews for businesses that purchase advertising, or that a refusal to buy advertising could result in the disappearance of positive reviews.  Finally, a group of small businesses filed suit against Yelp claiming that it was extorting small businesses into buying advertising.

Extortion, they say.  When I think of extortion I think of blackmail.  Organized crime.  That sort of thing.  A battle between a crowd-recommendation site and a variety of entrepreneurs seems a little…bloodless.  (maybe my parents *did* let me read inappropriate materials – turns out the woman from the town library who called my mom to report me was right after all!)

Anyone reading the headlines after the case was dismissed might be excused for thinking that Yelp had been vindicated.

Yelp Extortion Case Dismissed by Federal Court

Court Sides With Yelp

Appeals Court Rules for Yelp in Suit Alleging the Online Review Site Manipulated Reviews

Well, the court didn’t exonerate Yelp.  There was no finding here that the manipulation didn’t or couldn’t happen.  Nope, the lawsuit was dismissed because… drum roll please… businesses don’t have a right to positive reviews online.

“The business owners may deem the posting or order of user reviews as a threat of economic harm, but it is not unlawful for Yelp to post and sequence the reviews,” Judge Marsha Berzon wrote for the three-judge panel. “As Yelp has the right to charge for legitimate advertising services, the threat of economic harm that Yelp leveraged is, at most, hard bargaining.”

Does it matter?  Isn’t this just a battle between businesses?  Well… no.  Not necessarily.  In a world of crowdsourcing and reputation, granting a business carte blanche to manipulate reviews is a scary prospect.  An even scarier one is the idea that you might not have rights over your reviews/reputation.

(fear not RTBF-foes -- i'm not suggesting we should have the right to change, erase or otherwise manipulate such reviews....i'm just suggesting maybe nobody else should be able to do so either, especially with a view to harming me reputationally)

How to Profit From the Right to be Forgotten (Operators are Standing By!)

Search Engines are on Board

After setting up a request form, receiving tens of thousands of requests in the first day(s), and sending those requests through its review process, Google has now begun to remove information from its search results.  The court said Google had to do it; Google set up a process to do it; and that process is free and even relatively quick.

Not the end of the issue, unfortunately.  Google isn’t the only search engine out there, which means that information may still appear in the results of other search engines.  Other search engines are said to be developing similar processes in order to comply with the court’s interpretation, so that may help. Ultimately, however, even if all commercial search engines adopt this protocol, there are still other sites that are themselves searchable. 

Searching within an individual site

This matters because having a link removed from search results doesn’t get rid of the information, it just makes it harder to find.  No denying that this is helpful – a court decision or news article from a decade ago is difficult to discover unless you know what you’re looking for, and without a helpful central search overview such things are more likely to remain buried in the past.  A partial return to the days of privacy through obscurity, one might say.

The Google decision was based on the precept that, in constantly trawling the web, a search engine does indeed collect, retrieve, record, organize, disclose and store information, and accordingly falls into the category of a data controller.  When an individual site allows users to search the content on that site, this same categorization does not apply.  Accordingly, individual sites will not be subject to the obligation (when warranted) to remove information from search results on request.

If we take it as written that everything on the Internet is ultimately about either sex or money (and of course cats), then the big question is how this can be commodified.  And some sites have already figured that out.

Here’s What We Can Offer You

Enter Globe 24h, a self-described global database of public records: case law, notices and clinical trials.  According to the site, this data is collected and made available because:

[w]e believe information should be free and open. Our goal is to make law accessible for free on the Internet. Our website provides access to court judgements, tribunal decisions, statutes and regulations from many jurisdictions. The information we provide access to is public and we believe everyone should have a right to access it, easily and without having to hire a lawyer. The public has a legitimate interest in this information — for example, information about financial scams, professional malpractice, criminal convictions, or public conduct of government officials. We do not charge for this service so we pay for the operating costs of the website with advertising.

 

A laudable goal, right? 

The public records that are held by and searchable on the site contain personal information, and the site is careful to explain to users what rights they have over their information and how to exercise them.  Most notably, the site offers a clear and detailed explanation of how a user may request that their personal information be removed from the records – mailing a letter that includes personal information, the pinpoint location of the document(s) at issue, documentary proof of identity, a signature card, and an explanation of what information is requested to be removed and why.  The letter must be sent to a designated address (and will be returned if any component of the requirements is not met), with a processing time of up to 15 days.  Such a request, it is noted, may also involve forwarding of the letter and its personal information to data protection authorities.

But Wait, There’s More!!

Despite its claims that it bears the operating costs itself (the implication being that it does so out of a deep commitment to access to information), the site does have a revenue stream.

Yes, for the low low price of €19 per document, the site will waive all these formalities and get your information off those documents (and out of commercial search engine caches as well) within 48 hours.  Without providing a name, address, phone number, signature, or identification document.  No need to be authenticated, no embarrassing explanation of why you want the information gone, no risk of being reported to State authorities, the ease of using email, and no risk of having your request ignored or left unactioned.  It’s even done through PayPal, so it can theoretically be done completely anonymously.

If the only way to get this information removed were to pay the fee, the site would fall foul of data protection laws, but that’s not the case here.  You don’t have to pay the money.  That said, the options are set up so that one choice seems FAR preferable to the other… and it just happens to be the one from which the site profits.

There you have it – the commodification of the desire to be forgotten.   Expect to see more approaches like this one. 

My takeaway?  It’s not really possible to effectively manage what information is or isn’t available.  Whether it is removed entirely, removed from search engine results, redacted from documents or annotated in hopes of mitigating its effect – in the long run the information is out there and, if it is accurate, is likely to be found and incorporated into a data picture/reputation of a given individual.

 


LinkedIn, Spam and Reputation

 

A 12 June decision in California regarding LinkedIn illustrates an increasingly nuanced understanding of reputation in the context of online interactions.

When a user is setting up a LinkedIn account, they are led through a variety of screens that solicit personal information. Although most of the information is not mandatory, LinkedIn’s use of a “meter” to indicate the “completeness” of a profile actively encourages the sharing of information.  Among other things, these steps enable LinkedIn to gain access to the address book contact information of the new user, and prompt that user to provide permission to use that contact information to invite those contacts to establish a relationship on LinkedIn. 

The lawsuit alleges that LinkedIn is inappropriately collecting and using this contact information. LinkedIn, pointing to the consent for this use provided by customers, had sought to have the case dismissed.  Judge Koh looked at the whole process and found that while consent was given for the initial email to contacts, LinkedIn also sent two follow-up emails to those contacts who did not respond to the original – and that there was no user consent provided for these follow-ups.

What is interesting about the decision to allow this part of the claim to go forward is Koh’s analysis of harm.  That analysis doesn’t stop with whether LinkedIn has consent for the follow-up emails, rather she examines what the effect of this practice might be and concludes that it "could injure users' reputations by allowing contacts to think that the users are the types of people who spam their contacts or are unable to take the hint that their contacts do not want to join their LinkedIn network."  Given this, she suggests that users could pursue claims that LinkedIn violated their right of publicity, which protects them from unauthorized use of their names and likenesses for commercial purposes, and violated a California unfair competition law.

 

Keeping it Real: Reputation versus the Right to be Forgotten

In the wake of the recent legal ruling in Spain, described popularly—albeit erroneously, in my opinion—as creating a “right to be forgotten”, Google has created a web form that allows people to request that certain information be excluded from search results.  More than 12,000 requests to remove personal data were submitted within the first 24 hours after Google posted the forms, according to the company.  At one point Friday, Google was getting 20 requests per minute.

If it’s not really about being “forgotten”, what is at the heart of this decision?  In an article dated 30 May 2014, ABC News asserted that it’s about the right to remove “unflattering” information, and characterized the process as one of censorship, used by people to polish their reputations.  Framing the issue in this way, however, is dismissive and an oversimplification.


BALANCING THE PUBLIC’S RIGHT TO KNOW

The decision does NOT provide carte blanche for anyone to force the removal of any information for any reason.  In fact, the Google form is clear that in order to make a request, the following information is required:

(a) Provide the URL for each link appearing in a Google search for your name that you request to be removed. (The URL can be taken from your browser bar after clicking on the search result in question).

(b) Explain, if not clear, why the linked page is about you (or, if you are submitting this form on behalf of someone else, the person named above).

(c)  Explain how this URL in search results is irrelevant, outdated or otherwise inappropriate. [Emphasis is mine.]

Even after this information is provided, there is no guarantee that the removal request will be approved.  Google has indicated that requests will be assessed to determine whether there is a public interest in the information at issue, such as facts about financial scams, professional malpractice, criminal convictions, or public conduct of government officials.  Although other search engines that function in the EU have not yet announced their own plans for complying with the decision, similar factors can be expected to be considered.

REPUTATION

Without a doubt, reputation is increasingly important in business, academia, politics, and in our culture as a whole. The availability of a wide range of data and information via search engines is an integral part of reviewing reputations and of making educated risk and trust assessments.  That said, those reputation judgments are most effective when the available information is reliable and relevant.

In other words, it’s not necessary to make absolutely any and all information available, but rather it’s important to ensure the accuracy of the information that is available.  To conflate informational self-determination with censorship is problematic, but to use such a characterization in order to defeat the basic precepts of data protection – which include accuracy and limiting information to that which is necessary – can actually be destructive both of individual rights and of the proper power of reputation.

This is a new area of policy and law touching on powerful information tools and very important personal rights. It’s imperative to get this right.