Police Bodycams: Crossing the Line from Accountability to Shaming

 

Police bodycams are an emerging, high-profile law enforcement tool upon which many hopes for improved oversight, accountability, and even justice are pinned.

When it comes to police bodycams, there are many perspectives:

  • Some celebrate them as an accountability measure, almost an institutionalized sousveillance.
  • For others, they’re an important new contribution to the public record.
  • And where they are not included in the public record, they can at least serve as internal documents, subject to Access to Information legislation.

These are all variations on a theme – the idea that the use of police bodycams and their resulting footage is about public trust and police accountability.

But what happens when they’re used in other ways?

In Spokane, Washington, a decision was recently made to use bodycam footage for the purpose of shaming and punishment.  In the obviously edited footage, Sgt. Eric Kannberg deals calmly with a belligerent drunk, using de-escalation techniques even after the confrontation turns physical.  Ultimately, rather than meting out the typical visit to the drunk tank, the officer opts to proceed via a misdemeanor charge and the ignominy of having the footage posted to the Spokane P.D.'s Facebook page. The implications of this approach for privacy, dignity, and basic humanity are far-reaching.

The Office of the Privacy Commissioner of Canada has issued Guidance for the Use of Body-Worn Cameras by Law Enforcement, guidance that strives to balance privacy and accountability. The Guidelines include:

Use and disclosure of recordings

  • The circumstances under which recordings can be viewed. Viewing should only occur on a need-to-know basis: if there is no suspicion of illegal activity and no allegation of misconduct, recordings should not be viewed.
  • The purposes for which recordings can be used and any limiting circumstances or criteria, for example, excluding sensitive content from recordings being used for training purposes. 
  • Defined limits on the use of video and audio analytics.
  • The circumstances under which recordings can be disclosed to the public, if any, and parameters for any such disclosure. For example, faces and identifying marks of third parties should be blurred and voices distorted wherever possible (a minimal blurring sketch follows this list).
  • The circumstances under which recordings can be disclosed outside the organization, for example, to other government agencies in an active investigation, or to legal representatives as part of the court discovery process.
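
To make that last safeguard concrete, here is a minimal face-blurring sketch using OpenCV’s stock Haar-cascade detector. The tooling choice is my assumption for illustration – not a description of any police service’s actual redaction pipeline, which would also need face tracking across frames and audio distortion:

```python
# Minimal face-blurring sketch using OpenCV's bundled Haar cascade.
# Illustrative only: real redaction pipelines also track faces across
# frames and distort audio, as the OPC guidance contemplates.
import cv2

# Load the stock frontal-face detector shipped with opencv-python.
FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def blur_faces(frame):
    """Detect faces in a single video frame and blur each region found."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = FACE_CASCADE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        roi = frame[y:y+h, x:x+w]
        # A heavy Gaussian blur renders the face unidentifiable.
        frame[y:y+h, x:x+w] = cv2.GaussianBlur(roi, (51, 51), 0)
    return frame
```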

Clearly, releasing footage in order to shame an individual would not fall within these parameters. 

After the posted video garnered hundreds of thousands of views, its subject began threatening to sue.  He is supported by the ACLU, which expressed concerns about both the editing and the release of the footage.

New technologies offer increasingly powerful new tools for policing.  They may also intersect with old strategies of social control such as gossip and community shaming.  The challenge – or at least an important one – is whether those intersections should be encouraged or disrupted.

As always, a fresh examination of the privacy implications of newly deployed technology is an important step as we navigate toward new technosocial norms.

ScoreAssured’s unsettling assurances

Hearing a lot of talk about Tenant Assured – an offering from new UK company ScoreAssured.  Pitched as an assessment tool for “basic information” as well as “tenant worthiness,” Tenant Assured scrapes content from social media sites (the named sites so far are Facebook, Twitter, LinkedIn, and Instagram) – including conversations and private messages – and then runs the data through natural language processing and other analytic software to produce a report.

The report rates the selected individual on five “traits” – extraversion, neuroticism, openness, agreeableness, and conscientiousness.  The landlord never directly views the potential tenant’s posts, but the report will include detailed information such as activity times, particular phrases, pet ownership, etc.
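
To make the mechanics concrete, here is a purely hypothetical sketch of what such a pipeline might look like. The five traits are the ones named in the report, but the lexicon, weights, and function names are invented for illustration – ScoreAssured’s actual models are not public:

```python
# Purely hypothetical sketch of a Tenant Assured-style trait scorer.
# The traits are the "Big Five" named in the report, but the lexicon,
# weights, and function names are invented; the real models are not public.
from collections import Counter

BIG_FIVE = ["extraversion", "neuroticism", "openness",
            "agreeableness", "conscientiousness"]

# Toy word-to-trait lexicon. A real system would use trained language
# models, but the opacity problem is the same either way.
TRAIT_LEXICON = {
    "party":    ("extraversion",      +1),
    "worried":  ("neuroticism",       +1),
    "museum":   ("openness",          +1),
    "thanks":   ("agreeableness",     +1),
    "deadline": ("conscientiousness", +1),
}

def score_posts(posts):
    """Count lexicon hits across scraped posts and return trait scores."""
    scores = Counter({trait: 0 for trait in BIG_FIVE})
    for post in posts:
        for word in post.lower().split():
            word = word.strip(".,!?")
            if word in TRAIT_LEXICON:
                trait, weight = TRAIT_LEXICON[word]
                scores[trait] += weight
    return dict(scores)

# The subject's words go in; a numeric verdict comes out. Neither the
# lexicon nor the weights are ever visible to the person being scored.
print(score_posts(["Great party last night!", "Worried about my deadline."]))
# {'extraversion': 1, 'neuroticism': 1, 'openness': 0,
#  'agreeableness': 0, 'conscientiousness': 1}
```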

Is this really anything new?  We know that employers, college admissions offices, and even prospective landlords have long used social media reviews as part of their background checks.

Tenant Assured would say that at least with its service the individual is asked for and provides consent.  And that is, at least nominally, true.  But let’s face it – consent requested as part of a tenancy application is comparable to consent for a background check on an employment application: “voluntary” only if you’re willing to go no further in the process.  Saying “no” is perceived as a warning flag that will likely result in not being hired or not getting housing.  Declining jobs and/or accommodations is not a luxury everyone can afford.

Asked about the possibility of a backlash from users, co-founder Steve Thornhill confidently asserted that “people will give up their privacy to get something they want.”  That may be the case…but personally I’m concerned that people may be forced to give up their privacy to get something they urgently need (or quite reasonably want).

But let’s presume for a second that the consent is “freely” given. Problems with this model remain: 

  • Reports may include information such as pregnancy, political opinions, and age – information that is protected by human rights codes.  (Thornhill says, “all we can do is give them the information, it’s up to landlords to do the right thing”.)
  • Performed identity – our self-presentation on social media sites is constructed for particular (imagined) audiences.  Removing it from that context does not render it presumptively true or reliable – quite the opposite.
  • Invisibility of standards – how are these traits being assessed?  What values are being associated with particular behaviours, phrases, and activities, and are they justified?  An individual currently working in a bar or nightclub might show activity and language that earns them a negative rating as an excessive partier or as unstable, for instance.  In fact, the Telegraph demonstrated this by running reports on its financial journalists (people who, for obvious reasons, use words like “fraud” and “loan” rather frequently), and sure enough the algorithm rated them negatively on “financial stability” (a toy reproduction of this failure follows this list).
  • Unlike credit bureaus, which are covered by consumer protection laws, this sector is unregulated.  Among other things, that means there is not necessarily any way for an individual to know what is included in their report, let alone challenge its accuracy or completeness.
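
The Telegraph’s experiment is easy to reproduce in miniature against a scorer in the naive style sketched above. The RISK_WORDS lexicon and scoring scale here are invented; the context-blindness is the point – a “financial stability” check that penalizes words like “fraud” and “loan” will inevitably flag people who write about finance for a living:

```python
# Hypothetical "financial stability" check in the same naive style:
# penalize risk-related words regardless of context. The lexicon and
# scale are invented for illustration.
RISK_WORDS = {"fraud", "loan", "debt", "bankruptcy"}

def financial_stability(posts):
    """Lower score = 'riskier' tenant -- context-blind by construction."""
    hits = sum(word.strip(".,:") in RISK_WORDS
               for post in posts for word in post.lower().split())
    return max(0, 100 - 10 * hits)

# A financial journalist's feed trips the filter immediately:
feed = ["New column: how loan fraud schemes target first-time buyers."]
print(financial_stability(feed))  # 80 -- docked 20 points for doing their job
```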

The Washington Post quite correctly identifies this as a dramatic escalation in social media monitoring, writing that “…Score Assured, with its reliance on algorithmic models and its demand that users share complete account access, is something decidedly different from the sort of social media audits we’re used to seeing.  Those are like a cursory quality-control check; this is more analogous to data strip-mining.”

Would it fly here?

Again, we know that background checks and credit checks for prospective tenants aren’t new.  We also know that, in Canada at least, our Information and Privacy Commissioners have had occasion to weigh in on these issues.

In 2004, tenant screening in Ontario suffered a setback when the Privacy Commissioner of Ontario instructed the (then) Ontario Rental Housing Tribunal to stop releasing so much personal information in its final orders. As a result, names are now routinely removed from the orders, making bulk scraping of the records significantly more difficult.  As for individual queries, unless you already know the names of the parties, the rental address, and the file number, you will probably not find anything about a person’s history in such matters.

Now, with the release of PIPEDA Report of Findings #2016-002, Feb 19, 2016 (posted 20 May 2016), that line of business is even more firmly shuttered.  There, the OPC investigated a “bad tenant” list maintained by a landlord association. The investigation raised numerous concerns about the list:

  • Lack of consent by individuals to the collection and use of their information for such a purpose.
  • Lack of accountability – there was no way for individuals to ascertain whether any information about them was on the bad tenant list, who had placed it there, or what the information was.
  • Simultaneously, the landlord association was not assessing the accuracy or credibility of any of the personal information it collected, placed on the list, and regularly disclosed to other landlords, who then made decisions based upon it.
  • Further, there was no way to ensure the accuracy of the information on the list, and no way for individuals to challenge its accuracy or completeness.

It was the finding of the Privacy Commissioner of Canada that by maintaining and sharing this information, the association was acting as a credit reporting agency, albeit without the requisite license from the province.  Accordingly, the Commissioner found that the purpose for which the tenant personal information was collected, used or disclosed was not appropriate under s.5(3) of PIPEDA.  The association, despite disagreeing with its characterization as a credit bureau, implemented the recommendations: destroy the “bad tenant” list, cease collecting information for such a list, and no longer share personal information about prospective tenants without explicit consent.

This is good news, but the temptation to monetize violations of privacy continues. ScoreAssured has expansive plans.  It anticipates launching (by the end of July 2016) similar “report” products targeted at human resources officers and employers, as well as at parents seeking nannies.

“If you’re living a normal life,” Thornhill asserts, “then, frankly, you have nothing to worry about.”  We all need to ask – who defines “normal”?  And since when is a corporation’s definition of “normal” the standard for access to basic human needs like employment and housing?

 

 

Charter Challenge to PIPEDA

The Canadian Civil Liberties Association, along with Chris Parsons of the Citizen Lab, has filed a challenge to provisions of PIPEDA – specifically, those that allow private corporations to disclose users’ personal information to a government institution without a warrant, for a range of reasons including national security and the enforcement of any law of Canada, a province, or a foreign jurisdiction.

The fact that the information is being obtained from the private sector further complicates things.  As CCLA's General Counsel stated:  "Non-state actors are playing an increasingly large role in providing law enforcement and government agencies with information they request.  The current scheme is completely lacking in transparency and is inadequate in terms of accountability mechanisms."     

CCLA's legal challenge asks that provisions of PIPEDA be struck as an unconstitutional violation of the right to life, liberty and security of the person (s.7) and the right to be free from unreasonable search and seizure (s.8) under the Charter. 

Feeling Safe Doesn’t Mean You Are: Conflating Alarm/Notification with Prevention


Recently, various news stories have trumpeted Kitestring as “a safety app for women” and “an app that makes sure you get home safe.”  In an April 2014 story, service creator Stephan Boyer explains that he founded Kitestring to “keep my girlfriend safe.”  Even feminist blog site Jezebel’s headline invoked the claim that Kitestring makes people safer, though the story itself acknowledges that the value of the service is in making women feel safer.

Kitestring is a web-based service that takes on the role of a safety call – when enabled, it notifies pre-designated persons if the user does not check in within a pre-set time period.  Where other “safety” apps require some positive action in order to sound an alert – bSafe creates a safety alarm button that must be pushed to alert others, while Nirbhaya sends out the alarm message when the phone is shaken – Kitestring sends the alert *unless* the positive action of checking in is undertaken.
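
Mechanically, this is a dead man’s switch. Here is a minimal sketch of the pattern – the class and callback names are invented, and Kitestring itself is a hosted web service, not a local script:

```python
# Minimal sketch of the dead man's switch pattern behind Kitestring:
# the alert fires *unless* the user affirmatively checks in. The names
# here are invented for illustration.
import threading

class SafetyTimer:
    def __init__(self, minutes, notify_contacts):
        self.notify_contacts = notify_contacts  # callback that alerts contacts
        self.timer = threading.Timer(minutes * 60, self._expired)

    def start(self):
        self.timer.start()    # the countdown begins when the trip begins

    def check_in(self):
        self.timer.cancel()   # the positive action suppresses the alert

    def _expired(self):
        # No check-in arrived in time: notify the pre-designated contacts.
        self.notify_contacts("No check-in received -- please follow up.")

# Usage: alert my contacts unless I check in within 45 minutes.
timer = SafetyTimer(45, notify_contacts=print)
timer.start()
# ...arrive home safely...
timer.check_in()
```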


What we’ve got here is another iteration of the belief that the more information is collected, the more we can know, predict, and protect.  While it’s easier to critique this position when looking at issues like invasive NSA monitoring, even voluntary services like this one carry the same logical flaw.  Without disregarding the importance of a safety call (by telephone or through any of these services) and of access to services like this one, equating sounding an alarm with keeping the individual (virtually always identified as female) safe is a dangerous overstatement.

Public surveillance cameras have long been touted as making public spaces (and those within them) safer.  The evidence doesn’t exactly support these claims, though – studies of CCTV in London, England have consistently found little or no correlation between the presence and/or prevalence of CCTV cameras and crime prevention or reduction.  To put it harshly, public video surveillance (whether recorded or live-monitored) won’t prevent me being raped.  The video record may be of assistance in identifying the rapist, but even that is uncertain, depending as it does on the quality of the camera and recording, camera positioning, and so on.

I’m not against services like Kitestring – I want people to know if I don’t get home or somehow fall off the grid.  That said, letting people know I’ve gone missing isn’t the same as preventing the problem in the first place.  Headlines that claim “This New App could’ve Prevented My Friends’ Rape” are optimistic at best, misleading at worst. 


Culture notebook: The day web promos got personal

Sometimes something in our culture pulls back the thin curtain between Internet users and those reaping and repurposing those users’ personal information.

The new NBC television show The Blacklist has released a deliciously creepy promotional gimmick that ties in with one’s Facebook account to generate a very clever, very unsettling interactive experience.

After watching a promotional trailer and a couple of canned videos in which the show’s stars seem to be interrogating the viewer, one logs in with one’s Facebook account (accepting the NBC terms and conditions, of course) and then watches as friends’ profile pictures seamlessly pop into the video on the screen.

The context is answering questions about friends (“Which of these friends is most paranoid about privacy?” is one of the questions).
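
Under the hood this is ordinary Facebook-platform plumbing. Here is a rough sketch of the kind of call involved, in the style of the 2013-era Graph API – the endpoint and field details are my assumptions, as Secret Location’s actual implementation is not public:

```python
# Rough sketch: after a Facebook login, fetch friends' names and profile
# pictures with the access token, 2013-era Graph API style. Endpoint and
# field details are assumptions; the promo's real code is not public.
import requests

def fetch_friend_pictures(access_token):
    """Return (name, picture_url) pairs for the logged-in user's friends."""
    resp = requests.get(
        "https://graph.facebook.com/me/friends",
        params={"access_token": access_token, "fields": "name,picture"},
    )
    resp.raise_for_status()
    return [(f["name"], f["picture"]["data"]["url"])
            for f in resp.json().get("data", [])]
```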

The campaign is the brainchild of Toronto digital interactive agency, Secret Location.

Why this is significant

This certainly isn’t the first time a television show has had a web tie-in—be they webisodes <shudder> or other variations.   And it’s not the fact that the underlying paranoid tension has been moved online, nor even that the tension is personalized and exploited by this campaign.

Rather this promo exposes another significant transformation in our relationship with surveillance and data mining. 

There’s a weird kind of arc that seems to come with the introduction of new technologies.  They’re created for one purpose and rolled out in furtherance of that purpose.  But once they’re out – in the wild, so to speak – they become fair game and people start to play with them. It’s reminiscent of Rubin’s line from the William Gibson short story The Winter Market: “Anything people build, any kind of technology, it's going to have some specific purpose. It's for doing something that somebody already understands. But if it's new technology, it'll open areas nobody's ever thought of before. You read the manual, man, and you won't play around with it, not the same way.”  Eventually, this play itself starts to become commodified.

We seem to be arriving at this point with surveillance culture.  Surveillance and data mining have been widely rolled out, purportedly for security purposes.  But once the tech is out there, people start to play with it. There are sites that amalgamate public camera feeds, and feeds like Puppycam, Pandacam, and Condorcam draw significant crowds. Then there are artists like the Surveillance Camera Players, and works by Banksy that variously involve, invoke and incorporate surveillance cameras.  Artist Hasan Elahi has turned his experience of being tracked and erroneously added to the “no-fly” list into an ongoing project of self-surveillance, documented on his website and described in his startling and hilarious TED talk, FBI, Here I Am.


Along that continuum we see a change – from the pure fun and subversion implicit in the Surveillance Camera Players, to the educational intentions of the San Diego Zoo’s cams, and on to the commodification of a feed like Puppycam, which is advertising-supported based on viewer hits.

With this latest jump, our relationship to surveillance and data mining has again changed.  Now the model is not simply commodified – somehow the omnipresence of these things in our lives has become so normal that the advertising hook here isn’t based on the *fact* of the surveillance, but rather on the “fun” of seeing what results from omnipresent surveillance and data mining.

Online Privacy Rights: making it up as we go?

In the September 2013 Bland v. Roberts decision, the US Court of Appeals for the Fourth Circuit ruled that “liking” something on Facebook is speech and as such should be afforded legal protection. This is good news, and while there has been extensive coverage of the decision, it has important implications for employers and employees that have not yet been fully explored.

The question is how far can an employer go in using information gleaned from social media sites against present and future employees?

Bland v. Roberts: about the case

The case was brought by employees of a Virginia sheriff’s office whose jobs had been terminated.  The former employees claimed that their terminations were retaliation for “liking” the campaign page of the sheriff’s (defeated) opponent during the election.  Even though the action was a single click, the Court determined that it was sufficiently substantive speech to warrant constitutional protection.

Social media checks v. rights of employees

This decision has major implications for the current practice of social media checks of potential and current employees.

More and more employers are conducting online social media background checks in addition to criminal record and credit bureau checks (where permitted).  A 2007 survey of 250 US employers found that 44% used social media to examine the profiles of job candidates.  Survey data from ExecuNet in 2006 shows a similar pattern: 77% of executive recruiters used web search engines to research candidates, and 35% said they had ruled candidates out based on the results of those searches.

Legal and ethical implications of social media checks

Federal and provincial human rights legislation in Canada stipulates that decisions about employment (among other things) must not be made on the basis of protected grounds. Employers and potential employers are required to guard against making decisions on discriminatory grounds.  These grounds have been refined through legislation and expanded by court decisions to include age, sex, gender presentation, national or ethnic origin, sexual orientation, race, and family status.


Social media checks can glean information actually shared by a user (accurate or not), but also can fuel inferences (potentially unfair, gendered, classed or sexualized) drawn from online activities. 

For example, review of a given Facebook page may show (depending on the individual privacy settings applied):  statuses, comments from friends and other users, photographs (uploaded by the subject and by others), as well as collected “likes” and group memberships.  These can be used to draw inferences (accurate or not) about political views, sexual orientation, lifestyle and various other factors that could play into decisions about hiring, discipline or a variety of other issues concerning the individual. 

Online space is still private space

The issue of social media profile reviews is becoming an increasingly contentious one. An employer should have no more right to rifle through someone’s private online profile than through one’s purse or wallet. With the Bland v. Roberts ruling and its recognition of Facebook speech as deserving of constitutional protection, important progress has been made in establishing that online privacy is a right and its protection is a responsibility.

I'd Tap That (and I have): #creepyNSA

Like it wasn’t creepy enough when it was about “security”

Since June, when the first Snowden revelations about NSA wiretaps of Verizon customers were published in the Guardian, there have been online jokes about #NSAlovepoems.  However, with the August 23, 2013 release of initial details about “LOVEINT” – jargon for the use of NSA surveillance tools on romantic interests – it’s looking like the jokes were (or could be) true.

Let’s be clear – this isn’t romantic.  Putting it in terms of “love interests” obscures what’s really going on.  This is STALKING.  It has nothing to do with security or counter-terrorism.

Never ones to let a fertile area lie fallow, the twitterati are all over it.  #LoveInt, #NSAlovepoems and #NSApickuplines have been providing (nervous) laughter all weekend.  Let’s hope that once the laughter fades, the discomfort remains and leaves behind the understanding that this kind of invasion is never OK – not when it comes to stalkers, and not “for our own good” by government agencies either.