Governments Must Do More To Tackle Threatening Behaviour Online
02 August 2013
By Lucy Purdon, Policy Officer, Privacy International; Research Fellow, IHRB
The question of how to regulate certain forms of harmful speech in the digital realm without restricting legitimate speech and debate is an on-going controversy that affects every country in the world differently.
What appears to be a global problem, however, is threats of violence directed towards women online, in particular the rise in visibility of threats to rape and kill. Female journalists and campaigners have also received bomb threats via social media.
This week, UK-based journalist and campaigner Caroline Criado-Perez publicised her experience of receiving death and rape threats on Twitter, seemingly triggered by the successful campaign to feature the author Jane Austen on the new £10 note. When MP Stella Creasy defended Criado-Perez, she was subjected to similar abuse. Several tweets specified the time and place she would be raped.
International human rights law offers strong protections for freedom of expression, but at the same time recognises that restrictions can be placed on this right in order to protect the rights of others. The International Covenant on Civil and Political Rights (ICCPR) obliges signatories to prohibit by law “any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence.”
This demonstrates the high threshold of what is actually deemed illegal speech. Some of the tweets to Caroline Criado-Perez and Stella Creasy arguably fall within this definition (although specific references to gender are an unfortunate omission in the ICCPR) but not all of them. Some opinions may be archaic and offensive, but they are not illegal. However, the sheer volume of tweets, enabled by technology that has made this kind of speech more easily accessible and circulated widely and quickly, is undoubtedly distressing.
The knee-jerk reaction is to blame the private companies that run the platforms on which the content appears. Twitter is facing criticism for not acting fast enough to tackle the abuse directed at these two women. The UK government was quick to join in the criticism; shadow home secretary Yvette Cooper called Twitter’s reactions “inadequate” and it has been reported that Twitter executives will face a government inquiry.
What is the responsibility of a company whose platform is used to make such a threat? It does seem that Twitter needs more accessible methods of reporting threatening behaviour and a clearer escalation path. There are plans to improve the ‘Report Tweet’ function, which is currently only available on the iPhone mobile app and involves completing an online form. It is unclear what action Twitter will take aside from ‘hiding’ the reported tweet and eventually removing the offending account. Twitter’s own policies prohibit publishing “direct, specific threats of violence against others” but little else.
It seems Twitter may need to improve its internal policies and processes to spot when a user is being systematically targeted or singled out for abuse; communicate more effectively with victims to support them in tackling the problem; and contact the abusers to warn them about their behaviour (Facebook recently introduced a similar method aimed at reducing cyber-bullying which they call an “educational checkpoint”).
But missing from this debate is criticism of inadequate enforcement of laws that are supposed to deal with harassment. Merely blocking users or removing offending content, as both women in this case have been overwhelmingly told to do, is the technical equivalent of sweeping a problem under the carpet. Nor does it make the women feel safe in the real world.
A senior police chief has called on social media companies to do more, ultimately because police forces do not have the resources to “deal with what could amount to thousands of allegations… of web-based hate crimes.”
That may well be true, but it can hardly be the solution. Do we really want companies to police the Internet? What is the basis of the assumption that law enforcement will be flooded with allegations? Can law enforcement officials ignore potential crimes because there are too many complaints? Tackling the problem now will surely save resources in the future. Sound judgment will be required. Paul Chambers, who lost two jobs because of the prosecution he faced, finally won in the High Court, which overturned his conviction for a tweet he had obviously sent as a joke, threatening to blow up an airport because his flight was delayed. Law enforcement officials will need to work out which threats are credible and which are frivolous.
Positive steps have already been taken. The Crown Prosecution Service recently published Guidelines on Prosecuting Cases Involving Communications Sent Via Social Media, which bring offences in line with international human rights law, for example by prosecuting “credible threats of violence” that “target specific individuals” and taking into account whether the speech in question is “more than offensive, disturbing and shocking”.
Blaming the companies is easy, but it distracts from the shortcomings in laws and attitudes towards protecting people from online abuse, which is a much longer-term and more difficult debate and requires greater effort than merely creating and pushing a button. As with any kind of ‘hate speech’, there are deep-seated societal problems and biases to contend with. Until attitudes change offline, they will continue to be reflected online. Laws that apply offline need to be extended online. No more, no less.