When tech companies must stand firm against state power
16 May 2024
Global tech giants are engaged in a battle over human rights in India, the country described as the world’s largest democracy, which is halfway through its marathon six-week elections.
At stake are fundamental concerns of privacy, individual liberty, and multinational companies’ responsibility to respect these human rights. To mitigate risks and reduce adverse impacts, multistakeholder initiatives and corporate bodies have developed normative standards. These include the Rabat Plan of Action on incitement to hatred, the Christchurch Call to eliminate the dissemination of violent content online, the Global Network Initiative’s statement on government efforts to ban digital content, and the civil society-led principles on surveillance.
The Apple case
The challenges ahead for tech companies can be seen in a number of recent developments. First is Apple’s role in an ongoing case involving the chief minister of the opposition-ruled Delhi state, Arvind Kejriwal, who had been in jail without bail for over a month after his arrest on alleged money-laundering charges.
The formal corruption charges against Kejriwal, relating to a since-reversed taxation policy, have yet to be proved, but he has been granted bail until 1 June so that he can campaign in the ongoing elections. In any case, Indian courts give authorities considerable leeway to hold the accused in jail for extended periods while they investigate charges, particularly politically sensitive ones. Investigators want access to data on his cellphone. Kejriwal has not complied, saying the phone he now uses was bought recently and contains no data relevant to the investigation. The authorities then asked Apple for its help in accessing his data.
Citing its policy to protect user privacy, Apple has refused to comply.
The WhatsApp case
In another case, WhatsApp, the Meta-owned instant messaging app that is wildly popular in India, has refused to accede to government demands that it allow authorities to ‘trace’ the origin of specific messages. India’s digital rules require such traceability, presumably to track down potentially criminal acts including, but not limited to, terrorism, the spread of hate speech, and money-laundering. WhatsApp has argued that breaking its encryption would not only violate its agreement with its users but would also be impractical. More importantly, it would set a precedent: other countries would make similar demands, undermining WhatsApp’s business model. The messaging service’s users have the right to expect that their privacy will be protected. If the platform were to accede to every governmental request, it would undermine that right and increase users’ vulnerability.
In late April, WhatsApp said it would stop offering its services in India if it were forced to break encryption. That would presumably hurt many supporters of the Indian government, since the ruling party’s information technology cell has been adept at using social media platforms, including WhatsApp, to disseminate its own messages. Finally, in early May this year, prime minister Narendra Modi spoke at an election rally in Rajasthan, in western India, where he alleged that his predecessor, Manmohan Singh, had promised that marginalised groups, particularly Muslims, would have first claim on India’s resources. Singh’s speech was made in 2006, and Modi’s characterisation distorted its message. Thousands of Indian voters wrote to the election commission to protest Modi’s speech, and the commission belatedly asked the party leadership to respond. Meanwhile, Instagram, also owned by Meta, removed Modi’s speech, saying his remarks violated the company’s hate speech policies.
The X case
In 2022, Twitter challenged the Indian government’s orders to block certain accounts and tweets, citing India’s own free speech laws. In 2024, X, as the company is now known (and with a new owner), reluctantly complied with the law. India had the last laugh. Civil society groups have long complained that India’s ‘take-down’ requests (to remove content) are over-broad and potentially unconstitutional, and international experts are alarmed.
All three examples show that these tech companies understand their responsibility to respect human rights: in the first two instances by protecting the right to privacy, and in the third by protecting the rights of affected groups.
However, this is not to suggest that all social media platforms have always responded with alacrity and in a rights-affirming manner. Facebook, for example, apologised for its role in the 2018 anti-Muslim riots in Sri Lanka. It was similarly accused, in 2022, of having promoted violence against the Rohingya in Myanmar; earlier, in 2018, the company had commissioned an independent assessment of its role there. The report concluded that the company had not done enough to prevent its platform from being used to foment online and offline violence; Meta agreed with the broad conclusions and promised to do more, including recruiting more staff and monitoring content. UN experts have called on social media companies to do more to stand up to the Myanmar junta. There have also been concerns over the use of WhatsApp to spread fake news and incite violence in India. In the run-up to the current elections, Indian civil society organisations raised concerns about the spread of election disinformation on Meta’s platforms, and Meta said it took prompt action to remove such pages.
The challenge and complexities of monitoring content
Tech companies are increasingly aware of how their products are being used. They recognise, for example, that the vast amounts of video and broadcasts shared on tech platforms during armed conflict, in Gaza, Ukraine, and elsewhere, have crucial archival value and historical significance, and may serve as evidence in future trials. At the same time, companies have to make agonising decisions about taking down content that might pose an imminent danger of violence, such as hate speech by prominent leaders. While news media organisations have a responsibility to report accurately, both as a matter of record and in view of newsworthiness, social media platforms do not call themselves news organisations. Since they have their own community standards and policies, they have an obligation to implement those consistently, transparently, and fairly.
According to widely cited estimates, around 500 hours of video are uploaded to just one popular platform, YouTube, every minute. While tech companies have vast resources, it is beyond the capacity of any organisation to monitor every video unless other users flag problems. Tech companies have sought technological solutions, such as algorithms and artificial intelligence, to identify material requiring action, but these are not fail-safe mechanisms.
Resisting state pressure to violate rights
Governments are often violators of human rights, or complicit in violations: they permit hate speech against groups they oppose, impose surveillance, and target human rights defenders, political opponents, journalists, and dissidents who challenge them. That places tech companies in the peculiar position of complying with the law when it is consistent with international standards, and upholding those standards when governments are in the wrong.
It is wrong to assume that democracies, or countries that regularly hold elections, would not violate human rights. While India has a long record of regular elections, organisations that monitor democracies, such as Sweden’s V-Dem, the media watchdog RSF (Reporters Without Borders), and Freedom House, have all noted the rapid erosion of liberties in India and substantially downgraded its rankings in recent years. Despite the United States’ professed commitment to the First Amendment, which prevents the state from passing laws that restrict freedom of speech and of the press, many students, most of them non-violent, have been arrested on US campuses in recent weeks for protesting the conflict in Gaza. The UK is preparing to remove asylum seekers to Rwanda after passing a controversial law that may violate its obligations under international refugee law.
A few years ago, at the Stockholm Internet Forum, the former Swedish foreign minister Carl Bildt attempted to reassure a skeptical audience of human rights activists and practitioners that if a society is democratic, nobody needs to worry about surveillance. His assumption was that such surveillance would be lawful, time-bound, necessary, and proportionate. When I recounted Bildt’s statement to a leading human rights activist from the global south, she laughed and said that if a society needs surveillance, it is not a democracy. She knew from experience that you cannot take any state’s commitments and assurances for granted.
There is no global mechanism to effectively restrain authorities who act in ways that violate human rights. That is why businesses like the tech companies now taking stands in India will continually have to rise to the occasion and learn to say no, firmly, to abuses of state power, as some have begun to do.