Social Media at the Border: Call for Comments: Until 22 August 2016

The US Customs and Border Protection agency is proposing to add new fields to the form people fill out when entering/leaving the country—fields where travelers would voluntarily enter their social media contact information.  The forms would even list social media platforms of interest in order to make it easier to provide the information.

This proposal raises serious concerns. Some might ask: how can it be controversial if it’s voluntary?  If someone doesn’t want to share the info, the thinking goes, they can simply decline. Case closed.

Unfortunately, it isn’t that simple. The initiative prompts several serious questions:

Is it really voluntary?

Are individuals likely to understand that provision of this information is, in fact, voluntary?  If this becomes part of the standard customs declaration form, how many people will just fill it out, assuming that, like the rest of the information on the form, it is mandatory?

Is the consent informed?

Fair information principles require that, before people can meaningfully consent to the collection of personal data, they understand the answers to the following questions: Why is the information being collected?  To what uses will it be put?  With whom will it be shared?  We have to ask: will the answers to these questions be known? Will they be shared and visible?  Will they be drawn to the attention of travelers?

Can such consent be freely given?

Consider a best-case scenario, where the field is clearly marked as voluntary and the necessary information about purposes is provided. Even then, can such indicia really override our instinctive fear/understanding that failing to “volunteer” such information can be an invitation for increased scrutiny?

Is it relevant?

Even if the problem of mandatory “volunteering” of information is addressed, what exactly is the point?  Is this information in some way relevant?  It has been suggested that this initiative is the result of

… increasing pressure to scrutinize social media profiles after the San Bernardino shooting in December of last year. One of the attackers had posted a public announcement on Facebook during the shooting, and had previously sent private Facebook messages to friends discussing violent attacks. Crucially, the private messages were sent before receiving her visa. That news provoked some criticism, although investigators would have needed significantly more than a screen name to see the messages.

If this is meant to be a security or surveillance tool, is it likely to be effective as such? Will random trawling of social network participation—profiling based on profiles—truly yield actionable intelligence?

Here’s the problem: every individual’s social media presence is inherently performative.  To interpret interactions within online social media spaces accurately, it is imperative to recognize that these utterances, performances, and risks are undertaken within a particular community, and with a view to acquiring social capital within that community.

Many will ask: if information is public, why worry about protections?  Because too often, questions about the accuracy, reliability or truthfulness of information in these various “publics” go unasked when presumptive publicness is invoked as justification. All such information needs to be understood in context.

Context is even more crucial when such information is being consulted and used in various profiling enterprises, and especially so when it is part of law enforcement or border security. There is a serious risk of sarcasm, artistic expression, mere frustration or hyperbole resulting in the criminalization of individuals who are thoughtless (or indeed simply not thinking along the lines preferred by law enforcement agencies) rather than dangerous. 

The call for comments contains extensive background, but the summary they provide is simple:

U.S. Customs and Border Protection (CBP) of the Department of Homeland Security will be submitting the following information collection request to the Office of Management and Budget (OMB) for review and approval in accordance with the Paperwork Reduction Act: CBP Form I-94 (Arrival/Departure Record), CBP Form I-94W (Nonimmigrant Visa Waiver Arrival/Departure), and the Electronic System for Travel Authorization (ESTA). This is a proposed extension and revision of an information collection that was previously approved. CBP is proposing that this information collection be extended with a revision to the information collected. This document is published to obtain comments from the public and affected agencies.

They are calling for comments. You have until 22 August 2016 to let them hear yours.  https://federalregister.gov/a/2016-14848

Should corporations *really* be the arbiters of free speech?

Facebook, Twitter, YouTube and Microsoft – in partnership with the European Commission – have unveiled a new code of conduct regarding hate speech.  This commitment is part of the response to the Brussels terrorist attacks, and is explicitly targeted at countering what can best be described as “terrorist propaganda”.

Hate speech, for these purposes, is set out in Framework Decision 2008/913/JHA of 28 November 2008, and is focused on “racism and xenophobia”, which are recognized as “direct violations of the principles of liberty, democracy, respect for human rights and fundamental freedoms and the rule of law, principles upon which the European Union is founded and which are common to the Member States.”

Article 1 of the Framework Decision sets out the offences:

(a)  publicly inciting to violence or hatred directed against a group of persons or a member of such a group defined by reference to race, colour, religion, descent or national or ethnic origin;

(b)  the commission of an act referred to in point (a) by public dissemination or distribution of tracts, pictures or other material;

(c)  publicly condoning, denying or grossly trivialising crimes of genocide, crimes against humanity and war crimes as defined in Articles 6, 7, and 8 of the Statute of the International Criminal Court, directed against a group of persons or a member of such a group defined by reference to race, colour, religion, descent or national or ethnic origin when the conduct is carried out in a manner likely to incite to violence or hatred against such a group or a member of such a group;

(d)  publicly condoning, denying or grossly trivialising the crimes defined in Article 6 of the Charter of the International Military Tribunal appended to the London Agreement of 8 August 1945, directed against a group of persons or a member of such a group defined by reference to race, colour, religion, descent or national or ethnic origin when the conduct is carried out in a manner likely to incite to violence or hatred against such a group or a member of such a group;

Under the Code of Conduct, these technology and social media companies commit to reviewing and acting upon notifications for removal of hate speech—removing or disabling access to such content within 24 hours. 

They also commit to educating and raising awareness with their users about the types of content not permitted under these rules and community guidelines.

Call me a cynic, but while I applaud the idea, I don’t have a lot of faith in its implementation.  We’ve witnessed years of vitriol and hatred based on sex, gender, gender identity and expression, and sexual orientation play out online, without much progress in disrupting or addressing it. Certainly the various platforms and companies haven’t been particularly effective in either educating or protecting users. Even when reporting tools and standards are in place, their application has tended to be fairly arbitrary and unreliable.

Apparently I’m not the only one with concerns about this – European Digital Rights (EDRi) and Access Now released a contemporaneous statement announcing the “…decision not to take part in future discussions and confirming that we do not have confidence in the ill-considered ‘code of conduct’ that was agreed.”

EDRi and Access Now’s concerns are not with effectiveness – they cut deeper, questioning both the process by which the code was developed, and the effect of the code:

  • creation of the code of conduct took place in a non-democratic, non-representative way; 
  • the “code of conduct” downgrades the law to a second-class status, behind the “leading role” of private companies that are being asked to arbitrarily implement their terms of service; 
  • the project as set out seems to exploit unclear liability rules for companies; and 
  • there are serious risks for freedom of expression since legal but controversial content could be deleted as a result of this “voluntary” and unaccountable take-down mechanism.

The two organizations emphasize that their separation from the project (and process) should not be construed as indicating a lack of commitment to the underlying aims – as they state:

[c]ountering hate speech online is an important issue that requires open and transparent discussions to ensure compliance with human rights obligations. This issue remains a priority for our organisations and we will continue working for the development of transparent, democratic frameworks

How do we do this going forward?

Must we keep technology companies on the boundaries of the protection battles?  Or keep their efforts separate from those of the law?

Could tech companies work with (and within) the law?  Can they do so without effectively becoming agents of the state? 

And no matter how we organize ourselves, how can we make such codes and commitments more than lip service – how can we create and encourage efficacy in their application?  Perhaps more to the point, how do we inculcate a real understanding of the price and prevalence of hate speech (of all sorts) in these spaces such that strategies and solutions are developed with an accurate understanding of the issue? 

It IS time to address issues of hate speech and risk/danger online.  It is also time, however, to do so appropriately – to go back to the drawing board and, via broad consultation, develop and implement an authoritative, transparent and enforceable process.  One that, at a minimum:

  • Recognizes the key role(s) of privacy 
  • Identifies civil rights and civil liberties as key issues 
  • Builds transparency and accountability into both the development process and into whatever strategy is ultimately arrived at 
  • Ensures a balanced process in which public sector, private sector, and civil society voices are heard. 

So…let’s start by establishing a multi-stakeholder engagement process tasked with defining the parameters and needs of hate speech protection, and with developing attendant best practices for privacy, accountability and transparency within that process.   Once that design framework is agreed to, it will be much clearer how best to implement the process, and how to ensure that it is appropriately balanced against concerns around freedom of expression.

Oh, and while we're at it?  If we're going to finally come up with an effective way to address these issues, then let's include sex, gender, gender identity and expression, and sexual orientation in our definition of hate speech.  We know we need to!