Is IP address personal information (in Europe)?

Are IP addresses personal information?  On 28 October, the German Federal Court of Justice referred the question to the European Court of Justice (the court that gave us the contentious Google Spain decision).

The case stems from the fact that when users visit German government sites, those sites collect their IP addresses along with other information.  This information is logged and stored “in order to track down and prosecute unlawful hacking”.

For once, Canada can consider itself well ahead of the curve.  The Office of the Privacy Commissioner of Canada is clear that “An Internet Protocol (IP) address can be considered personal information if it can be associated with an identifiable individual.”  A 2013 report from that Office, “What an IP Address Can Reveal About You”, goes further into the subject, ultimately concluding that

…knowledge of subscriber information, such as phone numbers and IP addresses, can provide a starting point to compile a picture of an individual's online activities, including:

- Online services for which an individual has registered;
- Personal interests, based on websites visited; and
- Organizational affiliations.

It can also provide a sense of where the individual has been physically (e.g., mapping IP addresses to hotel locations, as in the Petraeus case). 

This information can be sensitive in nature in that it can be used to determine a person’s leanings, with whom they associate, and where they travel, among other things.  What’s more, each of these pieces of information can be used to uncover further information about an individual. 
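That identifiability is why many logging pipelines truncate IP addresses before storage rather than keeping them raw. As a rough illustration only (the /24 and /48 prefix lengths below are common conventions, not anything mandated by the OPC or by European law), a minimal Python sketch:

```python
import ipaddress

def anonymize_ip(raw: str) -> str:
    """Zero out the host portion of an address so the stored value
    points to a network neighbourhood rather than a single subscriber:
    the last octet for IPv4, everything past the /48 prefix for IPv6."""
    addr = ipaddress.ip_address(raw)
    prefix = 24 if addr.version == 4 else 48
    network = ipaddress.ip_network(f"{raw}/{prefix}", strict=False)
    return str(network.network_address)

print(anonymize_ip("198.51.100.73"))      # -> 198.51.100.0
print(anonymize_ip("2001:db8:1234::42"))  # -> 2001:db8:1234::
```

Truncation reduces, but does not eliminate, identifiability – which is precisely why the “can be associated with an identifiable individual” framing turns on context rather than on the format of the address itself.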

The Federation of German Consumer Organizations has raised concerns that classifying IP addresses as personal information could create delays and onerous administrative and consent requirements for internet use in Europe, or alternatively could necessitate a reconsideration of some of the provisions of the EU Data Protection Directive.  It would be interesting to hear from a similar Canadian body about their experiences….or perhaps the CJEU should consider some kind of case study in order to incorporate practical experience into its deliberations.

Does Knowledge Create a Duty to Warn?

A young model who was solicited by a “modelling company” via the site Model Mayhem, and who was subsequently drugged, sexually assaulted and filmed, has received permission to pursue her case.

“Jane Doe” was raped in 2011.  Months later, she was contacted by the FBI and learned that other women had been victimized the same way by the same people in the past.  In fact, not only had this happened to other women, but the owners of Model Mayhem were aware of this and the ongoing criminal investigation.

Not, of course, that she found out about the criminal investigation or the prior knowledge through advisory warnings or any attempt to safeguard users.  The information only became available in the course of a court battle between Internet Brands (who acquired Model Mayhem in 2008) and the original owners.  Defending themselves against a claim for money owed on the acquisition, Internet Brands complained that the owners had failed to disclose the fact of the ongoing investigation, something that could expose Internet Brands to civil liability.  Protecting themselves, sure.  Protecting site users?  Not so much. 

So Jane Doe began an action against Internet Brands for $10 million for failure to warn.  In response, Internet Brands invoked s.230 of the Communications Decency Act which provides immunity from liability for providers and users of an "interactive computer service" who publish information provided by others. 

In a September 2014 decision, Justice Clifton distinguished between the intent of the clause – to protect sites from being held responsible for user-generated content – and the substance of Jane Doe’s claim. That substance focuses on the special relationship between the site and its users, and on the fact that Internet Brands had knowledge of the criminal investigation and decided not to disseminate any warning to users.  Accordingly, Clifton decided that the case should go forward, leaving the lower court to determine whether the site had a duty to warn that it failed to meet.

It is expected that this decision will be appealed.  Nevertheless, there is a moment here, an opportunity to further discuss some very important issues.  I have said before that we must ensure we don’t get sidetracked by technology and instead focus on the fundamental issue(s) in making law and policy....#nowisthetime

 

 

 

When privacy and dignity collide

Recently I was going through a series of training modules based on the Accessibility for Ontarians with Disabilities Act…a requirement for a major project I’m working on.  The principles and ideas aren’t new to me, but in the process I was struck by the assumptions seemingly built into the examples.

For instance – a university registrar’s office is contacted via the Bell Relay Operator (a service that enables calls between hearing people and those who are hearing- and/or speech-impaired). The operator conveys a request for the personal information contained in a student file. The correct response, according to the module, is to provide the information requested.  Why? Because Bell Relay operators are intermediaries who are governed by a strict confidentiality agreement, and thus the phone call should be understood as a request from the student directly.  To question the use of an intermediary violates the dignity of the student.

I’ve gotta say, while I understand concerns about asking people to provide (excessive) information, I also have concerns about simply assuming that personal information can or should be provided to a third party without explicit consent to do so. Is it really damaging in such a case to request confirmation that it is actually the student making the request (albeit via the relay operator) before handing it over?  It rather feels to me that performing the same confirmation of identity and entitlement that would accompany any direct call for that information is dignity-affirming rather than dignity-reducing.

It seems as though there are competing ideas of dignity here.  I mean, I understand and respect that people shouldn’t be asked “what’s wrong with you” or have to otherwise “justify” their entitlement to accommodation.  At the same time though, I wonder if the emphasis on not asking but instead relying on protections built in (i.e., the confidentiality requirements for relay operators) doesn’t, at least in some ways, imply that there is something somehow shameful about using assistive devices and hence we shouldn’t draw attention to them.  Isn’t it possible to ascertain that personal information is appropriately protected?  Isn’t doing so another part of protecting/respecting dignity and autonomy?  Surely there is a way to honour the expectations of both accessibility and privacy? 

just wondering….

 

a dismayed yelp -- shouldn't we have some rights to our own reputation?

A case against Yelp was dismissed this week.  It’s an interesting one too – a suit by businesses claiming that Yelp manipulates the ratings of businesses that do not purchase advertising on Yelp.

Yelp bills itself as an “online urban guide” – a crowdsourced local business review site.  Consumers rate their experience(s) with a business, and the accumulated ratings and experiences are available to anyone (though you’ll need an account to actually submit a review).  The company itself isn’t particularly local though – with over 130 million unique visitors per month and a presence in over 20 languages, Yelp’s Alexa rank for May 2014 was a more than respectable 28.  This is a company that may speak local but has definite range and scope for the exercise of power.

Yelp has long been dogged by allegations that they manipulate the rankings of businesses – either that they will remove negative reviews for businesses who purchase advertising or alternatively that a refusal to buy advertising could result in the disappearance of positive reviews.  Finally, a group of small businesses filed suit against Yelp claiming that it was extorting small businesses into buying advertising. 

Extortion, they say.  When I think of extortion I think of blackmail.  Organized crime.  That sort of thing.  A battle between a crowd-recommendation site and a variety of entrepreneurs seems a little…bloodless.  (maybe my parents *did* let me read inappropriate materials – turns out the woman from the town library who called my mom to report me was right after all!)

Anyone reading the headlines after the case was dismissed might be excused for thinking that Yelp had been vindicated.

Yelp Extortion Case Dismissed by Federal Court

Court Sides With Yelp

Appeals Court Rules for Yelp in Suit Alleging the Online Review Site Manipulated Reviews

Well, the court didn’t exonerate Yelp.   There was no finding here that the manipulation didn’t or couldn’t happen.  Nope, the lawsuit was dismissed because….drum roll please….businesses don’t have a right to positive reviews online. 

“The business owners may deem the posting or order of user reviews as a threat of economic harm, but it is not unlawful for Yelp to post and sequence the reviews,” Judge Marsha Berzon wrote for the three-judge panel. “As Yelp has the right to charge for legitimate advertising services, the threat of economic harm that Yelp leveraged is, at most, hard bargaining.”

Does it matter?  Isn’t this just a battle between businesses?  Well….no.  Not necessarily.  In a world of crowdsourcing and reputation, granting a business carte blanche to manipulate reviews is a scary prospect.  An even scarier one is the idea that you might not have rights over your reviews/reputation.

(fear not RTBF-foes -- i'm not suggesting we should have the right to change, erase or otherwise manipulate such reviews....i'm just suggesting maybe nobody else should be able to do so either, especially with a view to harming me reputationally)

It’s obscurity, not apocalypse: All that the “right to be forgotten” decision has created is the right to ask to have information removed from search engine results.

The National Post recently carried a week of op-eds, all focused on responding to the “right to be forgotten” that was (allegedly) created by the Google Spain decision.   Some extremely well-known and widely-respected experts have weighed in on the subject: 

Ann Cavoukian and Christopher Wolf:

…while personal control is essential to privacy, empowering individuals to demand the removal of links to unflattering, but accurate, information arguably goes far beyond protecting privacy… The recent extreme application of privacy rights in such a vague, shotgun manner threatens free expression on the Internet. We cannot allow the right to privacy to be converted into the right to censor.

Brian Lee Crowley:

This ruling threatens to change the Internet from a neutral platform, on which all the knowledge of humanity might eventually be made available, to a highly censored network, in which, every seven seconds, another person may unilaterally decide that they have a right to be forgotten and to have the record of their past suppressed…We have a duty to remember, not to forget; a duty not to let the past go, simply because it is inconvenient or embarrassing.

Paula Todd:

Should there be exceptions to the principle of “let the public record stand”? Most countries already have laws that do just that — prohibiting and ordering the deletion of online criminal defamation, cyberabuse and images of child sexual abuse, for example. Google, and other search engines, do invent algorithms that position certain results more prominently. Surely a discussion about tweaking those algorithms would have been less draconian than this cyber censorship.

With all due respect to these experts, I cannot help but feel that each of them has missed the central point – pushed by the rhetoric about a “right to be forgotten” into responding to a hypothetical idea rather than the concrete reality of the decision.

It is not about mere reputation grooming.

It is not about suppressing or rewriting history.

It is not about silencing critics.

It is not about scrubbing clean the public record.

It is not about protecting people from the consequences of their actions.

Frankly, this ruling isn’t the creation of a new form of censorship or suppression – rather, it’s a return to what used to be.  The decision sets the stage for aligning new communications media with more traditional lifespans of information and a restoration of the eventual drawing of a curtain of obscurity over information as its timeliness fades. 

Facing the facts:

It is important to be clear that all the “right to be forgotten” decision has created is the right to ask to have information removed from search engine results.   

There is no guarantee that the information will be removed – in its decision the court recognized that while indeed there are situations where removal would be appropriate, each request requires a careful balancing of individual rights of informational self-determination against the public interest.

It is also worth pointing out that this is hardly the only recourse users have to protect and shape online privacy, identity, and reputation. In an Atlantic article about the introduction of Facebook’s Graph Search, the authors comment that:

Online, obscurity is created through a combination of factors. Being invisible to search engines increases obscurity. So does using privacy settings and pseudonyms. Disclosing information in coded ways that only a limited audience will grasp enhances obscurity, too. Since few online disclosures are truly confidential or highly publicized, the lion's share of communication on the social web falls along the expansive continuum of obscurity: a range that runs from completely hidden to totally obvious.

Other ubiquitous privacy protective techniques don’t tend to engender the same concerns as the “right to be forgotten”.  Nobody is ringing alarm bells equating privacy settings with censorship – indeed, we encourage the use of privacy settings as responsible online behaviour.  And while there are certainly concerns about the use of pseudonyms, those concerns are focused on accountability, not freedom of speech or access to information.  In fact, the use of pseudonyms is widely considered as facilitating freedom of speech, not preventing it.

Bottom line:

I’m all for freedom of expression.  Not a fan of censorship either.  So I would like to make a heartfelt plea to the community of thinkers who focus on this area of emerging law, policy, and culture: exaggerations and overreactions don’t help clarify these issues and are potentially damaging in the long run.  A clear understanding of what this decision is —and what it is not— will be the best first step towards effective, balanced implementation. 

 

How to Profit From the Right to be Forgotten (Operators are Standing By!)

Search Engines are on Board

After setting up a request form, receiving tens of thousands of requests in the first day(s), and sending those requests through its review process, Google has now begun to remove information from its search results.   The court said Google had to do it, Google set up a process to do it, that process is free and even relatively quick. 

Not the end of the issue, unfortunately.  Google isn’t the only search engine out there, which means that information may still appear in the results of other search engines.  Other search engines are said to be developing similar processes in order to comply with the court’s interpretation, so that may help. Ultimately, however, even if all commercial search engines adopt this protocol, there are still other sites that are themselves searchable. 

Searching within an individual site

This matters because having a link removed from search results doesn’t get rid of the information; it just makes it harder to find.  No denying that this is helpful – a court decision or news article from a decade ago is difficult to discover unless you know what you’re looking for, and without a helpful central search overview such things will be more likely to remain buried in the past.  Some partial return to the days of privacy through obscurity, one might say.

The Google decision was based on the precept that, in constantly trawling the web, a search engine does indeed collect, retrieve, record, organize, disclose and store information, and accordingly falls into the category of data controller.  When an individual site allows users to search the content on that site, this same categorization does not apply.  Accordingly, individual sites will not be subject to the obligation (when warranted) to remove information from search results on request.

If we take it as written that everything on the Internet is ultimately about either sex or money (and of course cats), then the big question is, of course, how this can be commodified.  And some sites have already figured that out.

Here’s What We Can Offer You

Enter Globe 24h, a self-described global database of public records: case law, notices and clinical trials.  According to the site, this data is collected and made available because:

[w]e believe information should be free and open. Our goal is to make law accessible for free on the Internet. Our website provides access to court judgements, tribunal decisions, statutes and regulations from many jurisdictions. The information we provide access to is public and we believe everyone should have a right to access it, easily and without having to hire a lawyer. The public has a legitimate interest in this information — for example, information about financial scams, professional malpractice, criminal convictions, or public conduct of government officials. We do not charge for this service so we pay for the operating costs of the website with advertising.

 

A laudable goal, right? 

The public records that are held by and searchable on the site contain personal information, and the site is careful to explain to users what rights they have over their information and how to exercise them.  Most notably, the site offers a clear and detailed explanation of how a user may request that their personal information be removed from the records – mailing a letter that includes personal information, the pinpoint location of the document(s) at issue, documentary proof of identity, a signature card, and an explanation of what information is requested to be removed and why.  This letter is sent to a designated address (and will be returned if any component of the requirements is not met), with a processing time of up to 15 days.  Such a request, it is noted, may also involve forwarding of the letter and its personal information to data protection authorities.

But Wait, There’s More!!

Despite the claim that it bears the operating costs itself (the implication being that it does so out of a deep commitment to access to information), the site does have a revenue stream.

Yes, for the low low price of €19 per document, the site will waive all these formalities and get your information off those documents (and out of commercial search engine caches as well) within 48 hours.  Without providing a name, address, phone number, signature, or identification document.  No need to be authenticated, no embarrassing explanation of why you want the information gone, no risk of being reported to State authorities, the ease of using email and no risk of having your request ignored or fail to be acted upon.  It’s even done through PayPal, so it can theoretically be done completely anonymously. 

If the only way to get this information removed were to pay the fee, the site would fall foul of data protection laws, but that’s not the case here.  You don’t have to pay the money.  That said, the options are set up so that one choice seems FAR preferable to the other….and it just happens to be the one from which the site profits. 

There you have it – the commodification of the desire to be forgotten.   Expect to see more approaches like this one. 

My takeaway?  It’s not really possible to effectively manage what information is or isn’t available.  Removing information entirely, removing it from search engine results, redacting it from documents or annotating it in hopes of mitigating its effect – in the long run information is out there and, if it is accurate, is likely going to be found and become incorporated into a data picture/reputation of a given individual. 

 


LinkedIn, Spam and Reputation

 

A 12 June decision in California regarding LinkedIn illustrates an increasingly nuanced understanding of reputation in the context of online interactions.

When a user is setting up a LinkedIn account, they are led through a variety of screens that solicit personal information. Although most of the information is not mandatory, LinkedIn’s use of a “meter” to indicate the “completeness” of a profile actively encourages the sharing of information.  Among other things, these steps enable LinkedIn to gain access to the address book contact information of the new user, and prompt that user to provide permission to use that contact information to invite those contacts to establish a relationship on LinkedIn. 

The lawsuit alleges that LinkedIn is inappropriately collecting and using this contact information. LinkedIn, pointing to the consent for this use provided by customers, had sought to have the case dismissed.  Judge Koh looked at the whole process and found that while consent was given for the initial email to contacts, LinkedIn also sent two follow up emails to those contacts who did not respond to the original – and that there was no user consent provided for these follow-ups. 

What is interesting about the decision to allow this part of the claim to go forward is Koh’s analysis of harm.  That analysis doesn’t stop with whether LinkedIn has consent for the follow-up emails, rather she examines what the effect of this practice might be and concludes that it "could injure users' reputations by allowing contacts to think that the users are the types of people who spam their contacts or are unable to take the hint that their contacts do not want to join their LinkedIn network."  Given this, she suggests that users could pursue claims that LinkedIn violated their right of publicity, which protects them from unauthorized use of their names and likenesses for commercial purposes, and violated a California unfair competition law.

 

R v Spencer: a new era of privacy jurisprudence for Canada


The newspapers are trumpeting the Supreme Court of Canada decision in R v Spencer, as well they should.  It was a good, thoughtful decision, one that conveys a strong understanding of privacy. 

The case, on appeal from Saskatchewan, dealt with a situation where police requested (and received) subscriber information from an ISP based on an IP address.  The information was revealed by the ISP in response to a request with no warrant.  The SCC was asked to determine whether this was an unreasonable search and seizure in contravention of s.8 of the Charter and they determined that it was.  

Until this decision, (some) Canadian ISPs were of the opinion that the exceptions set out in s. 7 of PIPEDA – for information revealed to certain bodies for law enforcement, security or related purposes – authorized the provision of personal information without the necessity of a warrant.  Today’s decision puts an end to that practice.


In examining the subject matter of the search, the court rejected a limited approach that saw the information as merely the name and address of an ISP subscriber, holding that to do so was to miss the fact that the information at issue was the subscriber information as linked to particular Internet activity as well as the inferences that might be drawn from that profile (para 32, emphasis mine). 

The court also employed a new and nuanced tripartite understanding of information privacy, looking at privacy variously as secrecy; as control over information; and as anonymity.  (para 38)

It is this final category of privacy as anonymity where the decision perhaps makes its greatest contribution.  In relation to user online activity, the Court focused extensively on the idea of privacy as anonymity, writing at para 46 that:

Moreover, the Internet has exponentially increased both the quality and quantity of information that is stored about Internet users. Browsing logs, for example, may provide detailed information about users’ interests. Search engines may gather records of users’ search terms. Advertisers may track their users across networks of websites, gathering an overview of their interests and concerns. “Cookies” may be used to track consumer habits and may provide information about the options selected within a website, which web pages were visited before and after the visit to the host website and any other personal information provided...[t]he user cannot fully control or even necessarily be aware of who may observe a pattern of online activity, but by remaining anonymous — by guarding the link between the information and the identity of the person to whom it relates — the user can in large measure be assured that the activity remains private…


Ultimately, the Court concluded that internet users have (or can have) a reasonable expectation of privacy in the anonymity of their online activities.  Given this reasonable expectation of privacy, the police obtaining the subscriber information from the ISP without a warrant was a violation of s.8 of the Charter and thus an unconstitutional search.

This finding is an important one, and not just for the privacy of individual internet users – consider current concerns about security and cyberbullying, as expressed in bills C-13 and S-4.  C-13, the newest iteration of the government’s lawful access legislation combined with cyberbullying provisions, contains provisions for voluntary warrantless disclosure.  The Court’s strong recognition of a constitutional reasonable expectation of privacy in such information stands in direct opposition to the presumptions underlying those provisions.  S-4, which would update PIPEDA, has been criticized as expanding the scope of voluntary disclosure, an approach which must also be reconsidered in light of today’s decision.

Government has long sought to justify lawful access-type legislation as creating the new powers necessary to address new technologies.  In its decision, the Court addresses concerns expressed by law enforcement that requiring a warrant could impede or even prevent the investigation of online crime, countering at para 49 that:

In light of the grave nature of the criminal wrongs that can be committed online, this concern cannot be taken lightly. However, in my view, recognizing that there may be a privacy interest in anonymity depending on the circumstances falls short of recognizing any “right” to anonymity and does not threaten the effectiveness of law enforcement in relation to offences committed on the Internet. In this case, for example, it seems clear that the police had ample information to obtain a production order requiring Shaw to release the subscriber information corresponding to the IP address they had obtained.

This case dealt with child pornography – that the SCC is clear a warrant was necessary even in this case indicates that the privacy interest is an important one, not to be overridden easily.  This too should be read as a warning to the Government that expansion of intrusive powers into personal privacy must be grounded in demonstrable issues rather than mere unsupported assertions of necessity.




Keeping it Real: Reputation versus the Right to be Forgotten

In the wake of the recent legal ruling in Spain, described popularly—albeit erroneously, in my opinion—as creating a “right to be forgotten”, Google has created a web form that allows people to request that certain information be excluded from search results.  More than 12,000 requests to remove personal data were submitted within the first 24 hours after Google posted the form, according to the company. At one point Friday, Google was getting 20 requests per minute.

If it’s not really about being “forgotten”, what is at the heart of this decision? In an article dated 30 May 2014, ABC News asserted that it’s about the right to remove “unflattering” information, and characterizes the process as one of censorship, used by people to polish their reputations.  Framing the issue in this way, however, is dismissive and an oversimplification. 


BALANCING THE PUBLIC’S RIGHT TO KNOW

The decision does NOT provide carte blanche for anyone to force the removal of any information for any reason.  In fact, the Google form is clear that in order to make a request, the following information is required:

(a) Provide the URL for each link appearing in a Google search for your name that you request to be removed. (The URL can be taken from your browser bar after clicking on the search result in question).

(b) Explain, if not clear, why the linked page is about you (or, if you are submitting this form on behalf of someone else, the person named above).

(c)  Explain how this URL in search results is irrelevant, outdated or otherwise inappropriate. [Emphasis is mine.]

Even after this information is provided, there is no guarantee that the removal request will be approved.  Google has indicated that requests will be assessed to determine whether there is a public interest in the information at issue, such as facts about financial scams, professional malpractice, criminal convictions, or public conduct of government officials.  Although other search engines that function in the EU have not yet announced their own plans for complying with the decision, similar factors can be expected to be considered.

REPUTATION

Without a doubt, reputation is increasingly important in business, academia, politics, and in our culture as a whole. The availability of a wide range of data and information via search engines is an integral part of reviewing reputations and of making educated risk and trust assessments.  That said, those reputation judgments are most effective when the available information is reliable and relevant.

In other words, it’s not necessary to make absolutely any and all information available, but rather it’s important to ensure the accuracy of the information that is available.  To conflate informational self-determination with censorship is problematic, but to use such a characterization in order to defeat the basic precepts of data protection – which include accuracy and limiting information to that which is necessary – can actually be destructive both of individual rights and of the proper power of reputation.

This is a new area of policy and law touching on powerful information tools and very important personal rights. It’s imperative to get this right.

 


Re-Viewing Reputation: Italy Investigates Trip Advisor


We rely on reputation more and more to help us make decisions about trust and relationships in online spaces.  It is not surprising then that the very system(s) from which reputation is derived need to be re-viewed and assessed in order to ensure that reputation is and remains reliable and trustworthy. 


An iteration of this concern is beginning in Italy, whose anti-trust body has just announced an investigation into Trip Advisor.  The site attaches aggregate ratings to hotels, restaurants and other services based on individual reviews and rankings (also viewable by users) submitted by users of those services.  The investigation is a response to allegations (by consumers and by some service providers) that information is not clearly and evidently the result of actual use of the services – that some reviews come from users who may never have visited or used the service in question, and that some content is in fact commercial placement that is not easily distinguished from user-provided reviews.


Interestingly, the investigation looks not just at the validity/veracity of the information but also at whether Trip Advisor sufficiently guards against gaming of its system.  It will be interesting to see how broadly the authority defines Trip Advisor’s responsibilities.  What, if anything, must a site do to ensure the quality of the content posted on its site and included in its aggregation?  What standards should its due diligence be judged against, and what penalties applied?  As more complex information relationships become the norm, such a question would likely fall under the rubric of innocent dissemination, under which an intermediary like Trip Advisor does not necessarily have such a duty unless or until notified of a concern or problem.
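To make the mechanics concrete, here is a minimal sketch of the kind of weighted aggregation a review site *could* use to discount unverifiable reviews and exclude flagged commercial placements. The field names, weights, and flagging logic are illustrative assumptions for discussion, not TripAdvisor's actual (and non-public) system:

```python
def aggregate_rating(reviews):
    """Aggregate star ratings, discounting reviews that cannot be
    tied to an actual visit and skipping flagged commercial content.
    Fields ("stars", "verified", "flagged") are hypothetical."""
    total = weight_sum = 0.0
    for r in reviews:
        if r.get("flagged"):
            continue  # suspected paid placement: excluded entirely
        # Verified visitors count fully; unverified reviewers are discounted.
        w = 1.0 if r.get("verified") else 0.3
        total += w * r["stars"]
        weight_sum += w
    return round(total / weight_sum, 2) if weight_sum else None
```

The point of the sketch is that the aggregate is only as trustworthy as the verification and flagging steps feeding it – exactly the due-diligence question the Italian authority is asking.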


Protecting Intimacy, Preventing Revenge and Balancing Fundamental Rights

Recent court decisions in Germany and Israel seem to indicate a growing recognition of the importance of personal privacy as well as the potential(s) for damage to privacy as a result of disclosure of intimate images and/or details.

In Israel, the Supreme Court just upheld the 2011 decision in Plonit vs. Ploni and Almonit.  The case involved a challenge to the publication of a book, filed against both the author of the book and its publishers.  The book detailed a relationship with a female student and was written by the man with whom she had previously had a relationship.  In requesting that the publication be recalled, she claimed that her private and public world were described in graphic detail, including her body, emotions, weaknesses, conscience, activities and preferences for sexual stimulation.  The judge agreed that whether or not the book was classified as fiction, the plaintiff was sufficiently identifiable that the book was an invasion of privacy.  Naming both privacy and free speech as fundamental rights, the judge found that the appropriate balance between literary freedom and privacy, in this case, justified preventing the book from being published, as well as an award of damages to the plaintiff.

While private ownership of images rather than publication of text was at issue in Germany, the court struck a similar balance, at least in intimate contexts.  At the end of a relationship with a professional photographer, the plaintiff requested that he delete photographs and videos of her taken during their relationship.  When he refused, she went to court asking that her request be enforced.  A variety of images were at issue, both erotic and non-erotic, and it was unquestioned that they had all been taken with consent.  Nevertheless, the court determined that any consent given to possession of the images was withdrawn at the end of the relationship.  On that basis – grounded in a right to one’s own image, and the recognition that intimate images go to the heart of the personality right – the court found that her ex had an obligation to delete, upon request, all nude or otherwise erotic images.  Images characterized as everyday, or otherwise unlikely to compromise her privacy, were excluded from the order and did not have to be deleted on request.

With the EU recognizing a “right to be forgotten,” the media trumpeted the German decision as Germany upholding a right to one’s own image, and claimed a victory for those victimized by revenge porn.  Viktor Mayer-Schönberger notes that the German case is a particular manifestation of European doctrine rather than an iteration of the right to be forgotten: "But what can be said is that these two rulings may make more and more people aware of their personal rights in the digital sphere. At the very least, it should embolden future claimants who pro-actively want to prevent revenge porn."

 

 


#YesAllWomen: a digital chorus with the power to transform

In response to the recent Isla Vista shootings, and the misogynistic manifesto of the shooter, Twitter was inundated by #YesAllWomen – a hashtag appended to tweets detailing the many ways in which women’s experiences are shaped by misogyny, sexism and fear.  More than 150,000 tweets had already used the hashtag by 3am Sunday.

Social scientist Steph Herold, in a 22 May 2014 article, reflected on her experience of online activism, creating a viral hashtag, and what she learned from it.  Ms. Herold used Twitter to appeal to women who had chosen abortion, asking them to speak out as an act of challenge to anti-choice organizations and agendas – to tell their stories and use the hashtag #ihadanabortion.  The thread exploded, with over 10,000 uses of the hashtag in the first day.  This included people sharing their stories, as well as anti-choice activists using the hashtag to shame, while various media and advocacy groups weighed in.  The experience was a transformative one for Ms. Herold, causing her to think extensively about the power of social media and the question of how to measure its effectiveness.  She outlines four ways to create real cultural change around abortion, all of which she insists must be grounded in hard work and activism, not just social media.

I wonder, however, if the very strategies she identifies can’t actually be served by social media rather than being distinct from it. 

I remember reading about the role of consciousness-raising groups in second wave feminism—how the realization that what had seemed like individual inadequacies or inabilities were in fact common to many was instrumental in politicizing and empowering women. Frank exchanges of stories provided important context, perspective, and solidarity. 

As I witnessed the growth of #YesAllWomen and the responses to its messages I felt as though there was the potential for something beyond a viral twitter moment to result from this. 

The Atlantic says simply that “the vast majority of men who explore it with an open mind will come away having gained insights and empathy without much time wasted on declarations that are thoughtless” – an insight eloquently articulated in Neil Gaiman’s tweet: “the #yesallwomen hashtag is filled with hard, true, sad and angry things. I can empathise & try to understand & know I never entirely will.”

The feminist website Jezebel elevates the conversation swirling around the hashtag to an even loftier level stating “…now with trends like the #YesAllWomen hashtag, we are uprooting everyday sexism, the ideas that perpetuate systematic marginalization, outright violence towards women, rape culture, and demonization of women who deign to stand up for themselves, forcing it out and showing just how pervasive and destructive it is.”

Herold’s four strategies for creating meaningful cultural/policy change are:

  • Address silence, shame and fear
  • Increase visibility
  • Transform negative attitudes, beliefs and stereotypes; and
  • Deconstruct myths and misperceptions

When I look at the powerful acts of speech, visibility and witness that populate #YesAllWomen, it seems to me that we’re seeing precisely these strategies in action.  I don’t want to oversimplify, and I’m certainly not claiming that a weekend’s worth of Twitter posts will in and of itself lead to social and cultural transformation.  That said, there is a big segment of the population who rarely, if ever, imagine (because they don't have to) what it's actually like to live as another gender (and/or race, ethnicity, sexual orientation…).

It is my hope that the raising of individual voices in the digital chorus of #YesAllWomen, and the larger recognitions they inspire can help remedy that failure of imagination and facilitate the development of empathy.


Charter Challenge to PIPEDA

The Canadian Civil Liberties Association, along with Chris Parsons of the Citizen Lab, has filed a challenge to certain provisions of PIPEDA – specifically, those that allow private corporations to disclose users' personal information, without a warrant, to a government institution for a number of reasons, including national security and the enforcement of any law of Canada, a province or a foreign jurisdiction.

The fact that the information is being obtained from the private sector further complicates things.  As CCLA's General Counsel stated:  "Non-state actors are playing an increasingly large role in providing law enforcement and government agencies with information they request.  The current scheme is completely lacking in transparency and is inadequate in terms of accountability mechanisms."     

CCLA's legal challenge asks that provisions of PIPEDA be struck as an unconstitutional violation of the right to life, liberty and security of the person (s.7) and the right to be free from unreasonable search and seizure (s.8) under the Charter. 

European Court of Justice and the Right to be Forgotten

On 13 May 2014, the European Court of Justice issued its decision in the Google Spain “Right to be Forgotten” case.

The case was initiated in 2010 when Spanish citizen Mario Costeja Gonzalez found that the results of a Google search of his name included a 1998 newspaper article detailing his debt status and the forced sale of his property.  Given that more than 10 years had passed and he had resolved the financial issues, he felt that the link should no longer be available.  After both the newspaper itself and Google refused to remove the information, he complained to the Spanish data protection authority (AEPD).  After investigation, the AEPD ordered Google to remove the links, but Google challenged the ruling, eventually bringing the case to the European Court of Justice.

This week’s decision is in stark contrast to the preliminary ruling of June 2013, in which the Court’s Advocate General opined that Google was not a data controller and thus had no obligation to delete the links.  The full court found that the operator of a search engine – by scanning and indexing – does in fact collect, retrieve, record, organize, disclose and store information, and accordingly does fall into the category of data controller.  As a data controller, in this case it did have an obligation to remove the data.

It is worth noting, however, that the obligation to remove data was not put forward as always applicable or absolute, with the court instead recognizing (a) that there were certain situations in which such removal would be appropriate, and (b) that the individual’s rights of informational self-determination must be balanced against the interest(s) of the public in knowing or having access to the information.  How or whether that works in practice remains to be seen.

Identifying the Catch-22 implicit in the decision, Zittrain has commented that: 

In fact, I can’t tell if the Spanish citizen actually won anything.  The Court’s own press release names him, and the fact that he at one point owed so much money that he had a property foreclosed.  Not only does that illustrate the Streisand Effect, giving attention to exactly the thing he wanted to keep private, but more important, it appears to show that the Court doesn’t see a problem with publishing the very data it thinks sensitive enough to be worthy of an entirely new category of protection.

The decision has garnered all sorts of media coverage and expert opinions, forecasting everything from business as usual to the end of the internet as we know it.  Perhaps the most balanced (and honest) response came from the Information Commissioner’s Office in the UK, who essentially congratulated the court for including Google under the rubric of EU data protection law and promised discussion on the practical implications of the decision once there had been an opportunity to review and consider the decision. 

Discussions about the “right to be forgotten” are not new.  Perhaps this decision will facilitate that right.  I think, however, that one of the things we must consider before speaking out about this decision is the larger question of why anything need be hidden or forgotten.  The right to be forgotten is, at least in part, predicated on concern(s) about the permanence of data online and the effect of long-ago moments, actions, or otherwise less than flattering information upon individuals as they go through their lives.  Is the right to be forgotten the only way to deal with this?  Couldn’t we instead take our current understandings (that youthful indiscretions needn’t define someone’s whole life) and apply them to information available online?  Couldn’t we learn and strengthen critical reading skills rather than placidly accepting any and all information presented online as both relevant and accurate?

15 May 2014:  And so it begins:  Reuters reports that Google has already received multiple takedown requests.  Let's wait and see how this plays out, shall we…

Feeling Safe Doesn’t Mean You Are: Conflating Alarm/Notification with Prevention


Recently, various news stories have trumpeted Kitestring as “a safety app for women” and “an app that makes sure you get home safe.”  In an April 2014 story, service creator Stephan Boyer explains that he founded Kitestring “to keep my girlfriend safe.”  Even feminist blog site Jezebel’s headline invoked the claim that Kitestring makes people safer, though the story itself acknowledges that the value of the service is in making women feel safer.

Kitestring is a web-based service that takes on the role of a safety call – when enabled, it notifies pre-designated persons if the user does not check in within a pre-set time period.  Where other “safety” apps require some positive action in order to sound an alert – bSafe creates a safety alarm button that must be pushed in order to alert others, while Nirbhaya sends out the alarm message when the phone is shaken – Kitestring will send out the alert *unless* the positive action of checking in is undertaken.
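That inversion – the alert fires unless the user acts – is essentially a dead man's switch. A minimal sketch of the logic, assuming nothing about Kitestring's actual (server-side, non-public) implementation; the class name, timings, and alert callback are illustrative:

```python
import threading
import time

class DeadMansSwitch:
    """Fires an alert callback UNLESS the user checks in before
    the deadline - the opposite of a push-button alarm like bSafe."""

    def __init__(self, timeout_seconds, alert_fn):
        self.timeout = timeout_seconds
        self.alert_fn = alert_fn  # e.g. notify pre-designated contacts
        self.timer = None

    def start_trip(self):
        # Arm the timer; if check_in() is never called, the alert fires.
        self.timer = threading.Timer(self.timeout, self.alert_fn)
        self.timer.start()

    def check_in(self):
        # The positive action: checking in cancels the pending alert.
        if self.timer is not None:
            self.timer.cancel()
            self.timer = None
```

Note what the sketch does and doesn't do: it reliably *reports* a missed check-in after the fact; nothing in it prevents whatever caused the check-in to be missed.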


What we’ve got here is another iteration of the belief that the more information is collected, the more we can know, predict and protect.  It’s easier to critique this position when looking at issues like invasive NSA monitoring, but even voluntary services like this one carry the same logical flaw.  In this case, without disregarding the importance of a safety call (via telephone or through any of these services) and of access to services such as this one, equating sounding an alarm with keeping the individual (virtually always identified as female) safe is a dangerous overstatement.

Public surveillance cameras have long been touted as making public spaces (as well as those within them) safer.  Evidence doesn’t exactly support these claims though – studies looking at CCTV in London, England have consistently found little or no correlation between the presence and/or prevalence of CCTV cameras and crime prevention or reduction.  To put it harshly, public video surveillance (whether recorded or live monitored) won’t prevent me being raped.  The video record of it may be of assistance in identifying the rapist, but even that is uncertain, depending as it does on quality of camera and recording, camera positioning, etc.

I’m not against services like Kitestring – I want people to know if I don’t get home or somehow fall off the grid.  That said, letting people know I’ve gone missing isn’t the same as preventing the problem in the first place.  Headlines that claim “This New App could’ve Prevented My Friends’ Rape” are optimistic at best, misleading at worst. 


Arrest in Lieu of Warrant?

Astounding. 

On 29 April, the Supreme Court of the US, in a 6-3 decision, concluded that someone who has been arrested is "absent" from the premises and therefore cannot refuse to allow a search of their home.

In a previous case -- one dealing with domestic violence -- the situation had the husband refusing to allow law enforcement to enter the premises, while the wife consented.  There, the Supreme Court ruled that law enforcement should NOT have entered the premises because though one partner was consenting, the other was on the premises and refusing.  That refusal, said the court, was sufficient to deny entrance in the absence of a search warrant.

The recent decision keys on the idea of absence, with the majority ruling that someone cannot refuse a search when they are not at home.  Thus, there is no need to get a search warrant where two occupants disagree about whether to allow entrance to the home and the one refusing consent is arrested, since they are no longer considered "present" and are thus unable to refuse entrance.

“We therefore hold that an occupant who is absent due to a lawful detention or arrest stands in the same shoes as an occupant who is absent for any other reason,” Justice Alito wrote for the majority.
 

While I don't disagree that there are situations where an emergency should trump the need to get authorization to enter a private home, in the absence of an emergency law enforcement should NOT be able to subvert the obligation to get a warrant merely by arresting those who stand in their way. 

 

How Many People Need to Know What Muppet You Are?

I can’t be the only one whose Facebook feed has been flooded with quiz results.  Which Star Wars character are you?  Which sandwich?  Which country?  Which author is your soul mate?  Which random piece of stuff best represents you?


I won’t pretend to be above the fray – I couldn’t resist finding out what Muppet I was (Miss Piggy, for the record) – but it seems as though the floodgates have been opened of late.  Every day I learn more and more about the alter egos of friends and acquaintances.  Only, the thing is, I’m not the only one learning these things.  Nor is the audience restricted to those the individual user chooses to share the results with. 

Back in 2012, the WSJ analyzed the top 100 Facebook apps (at the time) to see what personal information they collected.  The results are sobering – despite the fact that Facebook’s Terms for Developers state that apps can collect only the information they need, the study showed that the norm is for apps to collect all sorts of personal information that cannot conceivably be necessary for the app to function, and to collect it not only from the individual but sometimes from their friends as well.  A similar study performed by the University of Virginia found that of the top 150 applications on Facebook (at that time), 90 percent were demanding (and being given access to) information they didn't need in order to function.
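The comparison at the heart of those studies reduces to a simple set difference: what an app requests versus what its function actually requires. A hypothetical sketch, with illustrative permission names rather than Facebook's actual permission scopes:

```python
def over_permissioned(requested, needed):
    """Return the permissions an app asks for beyond what it needs.
    Permission names here are illustrative, not a real platform's scopes."""
    return sorted(set(requested) - set(needed))

# A quiz app plausibly needs only basic identity to show your result,
# yet (per the WSJ/UVa findings) typically requests far more:
excess = over_permissioned(
    requested=["email", "friends_list", "photos", "birthday"],
    needed=["email"],
)
```

By the studies' measure, everything in `excess` is collection that cannot conceivably be necessary for the app to function.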

It doesn’t stop with the app developers, either.  Know why app developers want to collect as much information as possible?  They say it’s for personalization and customization, in order to make your experience of the app as positive as possible, and who am I to say that’s not part of it?  It is not, however, the whole of it.  Not even close.  All this information – self-reported personal and behavioural information – is the lifeblood of the targeted advertising market.  A market that, according to a Direct Marketing Association study, brought in $156 billion in 2012.

Please know that I’m not complaining – it’s fun to see the results, to learn new things about friends or even just to have silly things to laugh about.  I have to wonder, though, how many people are aware of just how wide the audience is with whom they’re sharing this information, let alone how that information is likely collected and commodified – and whether they would be as eager to participate in these quizzes if they were aware.

Pitfalls of Personalized Advertising

Imagine someone sends you a promotional calendar.  Do you pay any attention to it?

What if it has your name on it?

What if it has your picture on it?

Perks have long been a sales tactic.  At one end of the spectrum are the luxury items -- free tickets, expense account steak dinners and single malt, “training” sessions in exotic locations.  At the other end, there’s still a drive to differentiate, to promote, and to build relationships but instead of luxuries, they turn to personalization. 

When it comes to privacy, personalization can go tragically wrong. 

In the past week, we’ve seen a couple of egregious examples of personalization gone wrong – Office Max sending one of its customers promotional mail that included the address line “Daughter Killed in Car Crash,” and Bank of America offering a credit card to “Lisa Is A Slut McIntire.”

It’s reminiscent of the revelations last year about how Target (and others) collect and analyze customer information, leading to situations where they are marketing to a profiled pregnant teenager before her father even knew she was pregnant!

Or how about when Wired UK sent out uber-personalized covers to selected subscribers and opinion makers back in 2011?  One recipient’s personalized cover apparently included the following information: name, age and birthdate, address, previous address, parents’ address and (apparently mined from his Twitter account) the fact that he had met up with his ex-boyfriend earlier in the month.

Thing is, we read these news stories or hear about the incidents, and they are intrusive and frightening – but they are also distant.  Far removed from us.  Companies elsewhere profiling people we don’t know.  I’ve talked before about the “stupid user,” the way that line of thinking offloads responsibility onto the individual user rather than onto the organization(s) exploiting the information.  One of its other effects is the insidious way it encourages individuals to buy into it: to presume that a user whose privacy is invaded in this way has brought it upon themselves, has somehow “allowed” this to happen to them – a mindset that implicitly promises that the rest of us are still safe.  Simultaneously highlighting risks and reinforcing the stupid user mindset.

Of course, whether the companies are near or far, whether their victims are known to us or strangers shouldn’t matter.  Doesn’t matter, really.  Though that doesn’t change the fact that when the distance is bridged, when it’s someone or somewhere we know, it hits closer to home. 

This week, I talked to someone who received a calendar in the mail from a printing company with whom his organization had dealt in the past.   A simple promotion, but an opportunity to show off the company’s product and bring the company name to the forefront of a customer’s mind.    To raise their offering out of the ordinary, the company had personalized the calendar.  Again, a fairly simple idea – we’ve all seen the hats, the logo t-shirts and golf shirts, the monogrammed pens.  So this time, the company went one step further – they personalized the calendar not only with his name, but with a picture of him.  A picture that he says they must have gotten from his Facebook even though he’s not Facebook friends with anyone at the company. 

It’s not telling your parents that you’re pregnant.  Or mistakenly name-calling or revealing agonizing personal details in a mailing label.  Nor is it splashing your personal information all over a magazine cover.  Indeed, he says it’s not that bad.  That he probably didn’t have strong enough privacy settings (or any privacy settings) on his photos.

You see how insidious that stupid user thinking is?    An invasion of privacy and he’s already taking responsibility for it, bringing up the issue of privacy settings.  He doesn’t want to shame the company, make a complaint, or look for compensation.  Despite his discomfort with the invasion, he still holds himself accountable. 

Is that fair?  ‘Cause that’s what happens when we buy into stupid user – we blame each other.  We blame ourselves.  And the companies that mine our personal information, that crawl our online presence(s), that pull our personal photos off Facebook and use them for marketing purposes (in contravention of Facebook’s own Terms of Use) – they get to keep doing it.