Who You Gonna Call? Troll Busters?
22 July 2016
By Salil Tripathi, Senior Advisor, Global Issues, IHRB
Milo Yiannopoulos is the kind of man who becomes well-known only because of the Internet. An editor at the right-wing website Breitbart, he has often been suspended from Twitter, the online micro-blogging site, because of his abusive behaviour towards other Twitter users.
Earlier this week he went a bridge too far when he took on Leslie Jones, an actor in the major Hollywood summer release, Ghostbusters. Jones is an African-American woman, and not only did Yiannopoulos heap scorn on her, but his vitriol encouraged his rabid followers to subject Jones to racist, misogynist abuse, which led to her quitting Twitter. Within a day, Twitter acted and banned Yiannopoulos from the platform permanently.
On the face of it, it looks as if justice has been done. The victim has received some redress; the abuser can’t abuse anymore; and the space has been cleaned up. But the episode raises troubling questions for which there are no easy answers. Amidst this, companies have tried to find solutions to make the Internet more hospitable to the kind of diversity that exists in the real world. These solutions include tools to spot abuse early, crowd-sourced filtering, users acting as monitors, and self-help assistance for those who receive abuse to fight back, but each has its limitations.
The Hyde Park Speakers’ Corner in London is a public space; a shopping mall isn’t one
Twitter’s reaction in the Jones case, while well-intentioned, was opportunistic and arbitrary. It has not acted with similar alacrity with other complaints from less famous people. Twitter can of course do what it wants, because despite its outward appearance as a ‘public forum’, Twitter is private space. A private corporation owns it and it sets its own rules about who can say what. This has clear implications for the right of free expression, the right to seek, receive, and impart information, and the right to privacy.
In this, Twitter isn’t alone – Facebook, the leading social media website, also has its own community standards through which it decides what can be said on its platform.
Companies do benefit from being on the right side of the free speech debate. They want to protect their own privacy, their users’ privacy, and promote traffic to attract advertising, boost revenues, and make profit. They resent state intrusions and guard their space zealously.
When ‘do no harm’ requires proactive protection
This is the central dilemma of freedom of expression in the age of the Internet – the state has historically posed grave danger to free speech. States have jailed, exiled, tortured, or executed writers, cartoonists, poets, and other dissidents.
The private sector has been seen as a countervailing power that would champion freedom of expression, and often does. Twitter, Facebook, Google, and Microsoft to their credit, routinely stand their ground against government requests to hand over data or to take down specific content, unless there is a reasonable and just cause. Companies also publish reports of such requests.
But the private sector’s relationship with human rights is complicated. When Google decided to leave China, when Twitter, Microsoft, Facebook, and Google decided to publish transparency reports and statistics, when Apple stared back at the US Justice Department, when Microsoft refused to hand over data to the US Government, and when Blackberry refused to give codes to countries that demanded access (though it later reached settlements), the companies’ actions – even if taken in their own interest – advanced human rights. But when companies comply with arbitrary demands by governments, accede to questionable judicial pronouncements, or acquiesce to a large number of vociferous protesters, their actions undermine human rights.
Companies insist that their standards are applied properly and evenly to create a wholesome experience for all users. But that word, ‘wholesome’, is problematic. It conjures images of family-friendly companies like Disney, which make life a G-rated experience. Life is more complicated and less pleasant than that.
Uniform application in a variable world
This raises the second major dilemma for companies like Twitter and Facebook – how do they apply their policies or community standards in a consistent, uniform way? A woman abused on the Internet in Africa has the same rights as a woman abused on the Internet in the United States. Would Twitter act with similar swiftness if the complainant wasn’t a movie star like Jones, but an African woman without Jones’s fame? Twitter may like to say otherwise, but the answer is no. Twitter’s immediate response in the Jones case was as much about containing damage during an extremely polarised American election year as it was about protecting Jones. The simpler option for Twitter, in such an environment, is to ban a user like Yiannopoulos.
But what would Twitter do, for example, in the case of a gay activist using Twitter in a Middle Eastern country if a large number of individuals were to complain that the activist is making obscene comments which have no place in their civilization? How would it deal with cases of feminists or activists from minority groups in countries that hide behind the moral relativist argument and insist that their standards are different, and therefore, such outlying individuals should be prohibited?
Companies apply their standards using algorithms or through human monitoring, and mistakes occur often. Facebook has faced criticism in the past for banning images of breastfeeding women, forcing it to explain and clarify its policy, and for removing works of art showing nudity, leading to an outcry, reinstatement of the art, and an explanation of its policy. People have also raised complaints against edgy political commentary, satire, and political campaigning.
How does a company then decide which communication to allow and which to prevent? Unfortunately, it is not a straightforward or transparent process. I have talked to Facebook representatives about cases of rationalist bloggers from Bangladesh who live abroad, who have found their pages “vanish” because either Facebook’s algorithms have detected something awkward, or because enough complaints have been received to trigger an automatic action to stop the post from appearing. Facebook representatives told me that these decisions were taken by people, not by machines.
The case of Twitter is similar – there has been justified outrage over misogyny on Twitter. The forum’s free-for-all culture has meant that ill-tempered, uncivil Twitter users, known as trolls, have made rude, crude, obscene, and threatening comments to women on Twitter. I know of women who ‘protect’ their accounts (so that only those they allow can read their tweets), and I know of women who have given up using the medium altogether. Such attacks have indeed silenced some voices – Leslie Jones’s example is only the latest. In 2013 in the UK, Caroline Criado-Perez, a journalist who campaigned for novelist Jane Austen’s face to be used on new £10 notes, received not only abuse but also rape threats. It led to a lot of soul-searching in the UK and calls on Twitter to fix the problem.
The issue is no longer confined to a few countries. In India, the new minister for women and child development, Maneka Gandhi, has set up a cyber-cell against trolling. Many feminist organisations have applauded this move. India’s laws regarding speech are far more restrictive than even the UK’s, which in turn are considerably stricter than those of the US. Victorian-era laws in India enable the state to act more quickly and decisively, and individuals too can sue others they accuse of disrupting harmony.
In an imperfect world, there are no perfect solutions
The ‘big data’ revolution is real; we are spending more of our lives online than ever before. Understanding how to safeguard human rights in this brave new digital world is a major priority.
But companies are not duty-bearers for protecting human rights – they have the responsibility to respect rights, i.e. do no harm. They do not have the mandate, capacity, expertise, or ability to make complex decisions about protecting human rights. Yet the ICT sector’s unique nature often flips the roles of protect and respect on their head. ICT companies try their best, but as our Digital Dangers series of reports shows, they have to grapple with competing demands dealing with hate speech, dual use of technology, and network shutdowns.
So if the government is not to be trusted, and if companies can’t always get it right, what is the solution?
There is no single silver bullet. Instead, companies need to develop their own capability to assess situations better. They need to undertake risk and impact assessments that constantly and proactively reach out to human rights lawyers, free speech experts, academics, and civil liberties groups to make informed decisions that are applied fairly, consistently, without bias, and not in response to a temporary Internet uproar, or when a celebrity complains.
Governments, at the same time, should ensure that Internet users who are expressing their views without inciting violence have the full freedom to do so. There are indeed innovative ways to counter dangerous speech, as the Panzagar project in Myanmar shows, along with similar initiatives in Canada, Kenya, Sri Lanka, and the US. There are also ways to confront dangerous speech by following specific guidelines.
Will that be sufficient to deal with individuals like Yiannopoulos and his army of trolls? (In a candid moment, ironically, he once wrote that the Internet is turning all of us into ‘sociopaths’.)
The best response lies in ignoring the trolls. Banning them only increases the mystique around them, making them more attractive to the kind of people who believe that the world is a massive stitch-up and any voice that’s banned must be inherently good. It is actually the other way around – some voices, inherently good, are sometimes banned, and to prevent that, the Internet should remain open to all voices, so that the outliers are exposed, ignored, ridiculed, and denied martyrdom.
Disclosure: IHRB Senior Advisor Tripathi is also Chair of PEN International’s Writers in Prison Committee. PEN International campaigns for the freedom to write and the freedom to read.