Our Mission: The Institute for Human Rights and Business (IHRB) is dedicated to being a global centre of excellence and expertise on the relationship between business and internationally proclaimed human rights standards.

Case Study: Safaricom

Download Report PDF: Corporate Responses to Hate Speech in the 2013 Kenyan Presidential Elections. Case Study: Safaricom

Digital Dangers - Identifying and Mitigating Threats to Human Rights in the Digital Realm


What is the Digital Dangers Project?

“Digital Dangers: Identifying and Mitigating Threats in the Digital Realm” is a new initiative being developed by the Institute for Human Rights and Business (IHRB) in collaboration with the School of Law at the University of Washington in Seattle. The Digital Dangers Project aims to identify and analyse threats to human rights, in particular freedom of expression and privacy, arising from the use of Information and Communication Technologies (ICTs) and to recommend actions companies, civil society and governments can take to protect human rights in the digital realm, which in turn affects human rights in the real world.

Over the next two years, the Digital Dangers Project will develop an online global database of cases from around the world where human rights are infringed in the digital world, or in the real world as a result of using digital technology. A series of ICT company case studies will also be undertaken. At the end of the process, we will publish a longer paper drawing lessons from the database and case studies and presenting recommendations for governments, companies, and civil society groups monitoring corporate and government conduct in this realm. The project will be guided by international human rights law and the UN Guiding Principles on Business and Human Rights, and will work with multistakeholder initiatives where appropriate.

What Digital Dangers do companies face?

Digital Dangers are the risks ICT companies face when, intentionally or not, they act in ways that lead to human rights abuses, sometimes through their own actions and sometimes by complying with government requests or orders. These include, but are not limited to:

  1. Disconnecting or disrupting network access
  2. Monitoring, evaluating and blocking user content at the request of third parties (state or non-state actors)
  3. Selling dual use technology where there is a high probability of its misuse
  4. Complying with government orders to impose surveillance
  5. Monitoring user content under a company’s own policies
  6. Handing over stored user content and data to the state

Rolling Out the Digital Dangers Project

Part I: The Digital Dangers Database

The Digital Dangers Project will be in two parts. At the University of Washington Law School, researchers will conduct a comprehensive study compiling reported digital danger cases and incidents from around the world, establishing a database and typology that identifies patterns and trends to inform the case studies and recommendations.

Part II: Digital Dangers Case Studies

The second part of the project will involve ICT company case studies addressing recent events and other emerging issues based on the digital dangers identified. Researchers from the Institute for Human Rights and Business will be embedded in companies that face digital dangers, to observe the challenges at close quarters and to document the steps each company takes to prevent or mitigate risks to human rights.

The first such case study, on the efforts of the Kenyan mobile operator Safaricom to mitigate the spread of hate speech on its SMS platform during the Kenyan presidential elections of March 2013, was published in November 2013. Other case study research projects are underway. Several leading companies have shown interest in participating in the process. All the case studies will be published.

Objectives of the Digital Dangers Project

  • To embed the UN Guiding Principles on Business and Human Rights more firmly in the ICT sector by:
    • Reminding governments of their obligation to protect human rights in the online world and to establish specific procedures to prevent the use of technology to harm human rights.
    • Encouraging businesses to resist pressure from state and non-state actors so that their actions do not result in harm to human rights.
    • Exploring remedies for protecting human rights in the digital realm.
  • To publish a paper at the end of the project drawing lessons from the database and case studies and recommending actions that protect human rights, based on exploring the legally permissible boundaries of freedom of expression in the context of privacy, security, and hate speech; examining government power and its misuse; and researching and analysing company actions.

The final outputs of the project will be used to encourage:

  1. Advocacy with governments to implement their duties under the UN Guiding Principles and integrate relevant international corporate responsibility standards into legislation regarding freedom of expression and privacy;
  2. Business-to-business learning so that companies come together to explore common challenges and good practice based on the UN Guiding Principles;
  3. Civil society capacity building around the application of the UN Guiding Principles within the information and communication technology sector.

For more information, please contact IHRB staff:

Case studies: Lucy Purdon – Programme Support Manager, ICT

Research and policy: Salil Tripathi – Director of Policy

Digital Dangers Database and legal issues: Anita Ramasastry – Professor at University of Washington Law School and Senior Research Fellow, IHRB

Description of the Project

Access to ICTs, including the internet and telecommunications, is critical for the realisation of a whole range of rights – civil and political, as well as economic, social and cultural rights. The corporate sector has a unique role in this regard, as it manufactures the technology, develops the software, invests in and often owns the infrastructure, collects, disseminates, and stores data, sets prices for access, permits the flow of data through its networks, and connects far-flung regions of the world. In many instances, companies provide platforms for speech and communications that, while not state-operated, amount to a large quasi-public space, where ideas can be exchanged. Governments, in response to the growth of the internet and telecommunications, set rules, including what can be said or shown or not, what can be sold or not, and who can access the Internet.

While international human rights law includes freedom of expression, it also allows the state to place reasonable restrictions on this right. States may place restrictions on this right to protect people from violence, crime, terrorism, and intrusion of their privacy. States often regulate online speech, for example, to protect children from pornography or sexual exploitation or to protect consumers from fraud. The right to privacy can also be derogated from in times of an officially proclaimed public emergency.

For companies, it can be difficult or problematic to apply these restrictions in practice when they are asked to potentially curtail their users’ rights. Many governments, formally and informally, ask ICT companies to impose surveillance on individuals or groups, or to let them intercept communications; to block specific websites; to provide access to data for intelligence gathering; and, on occasion, to suspend access to the Internet and mobile phone networks, citing national security or public order, with or without judicial oversight. Sometimes they use these powers not for legitimate reasons but for political ends or to suppress dissent. In many countries, people under digital surveillance have been jailed and tortured. Many of them are human rights defenders, journalists, bloggers, critics of the government, members of minority groups, trade union activists, religious activists, or opposition politicians. Governments have also asked companies to alter technology to permit easier surveillance, or have asked other companies to make alterations to the technology after it was purchased. Companies respond to such requests, orders, and instructions in various ways: some spot business opportunities; some refuse to co-operate; some publish the number of requests for user information or content removal they receive and whether or not they complied; some obey their home governments but refuse similar requests from other governments.
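The transparency reporting mentioned above, publishing counts of government requests and compliance rates, can be illustrated with a minimal sketch. The record structure, field names, and country labels here are hypothetical assumptions for illustration only, not drawn from any company's actual reporting format:

```python
from collections import Counter

# Hypothetical government-request records (illustrative fields only).
requests = [
    {"country": "A", "type": "user_data", "complied": True},
    {"country": "A", "type": "content_removal", "complied": False},
    {"country": "B", "type": "user_data", "complied": True},
    {"country": "B", "type": "user_data", "complied": False},
]

def tally(records):
    """Count requests received and complied with, per country."""
    received = Counter(r["country"] for r in records)
    complied = Counter(r["country"] for r in records if r["complied"])
    # Counter returns 0 for missing keys, so countries with no
    # compliance still appear with a zero count.
    return {c: {"received": received[c], "complied": complied[c]}
            for c in received}

report = tally(requests)
```

Aggregated figures like these are what several companies now publish periodically; the human rights questions lie in what the underlying requests were and whether compliance was lawful, which no aggregate table can show.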

These Digital Dangers adversely affect the rights to free expression, privacy, security, safety, and free association and assembly. There are often subsequent consequences affecting many other rights, including the right to a fair trial and equality before the law, and raising the likelihood of unlawful arrest and physical harm, including torture. It is critical that companies apply the human rights framework, drawn from the UN Guiding Principles on Business and Human Rights, in reaching their decisions, so that their actions are consistent with international standards and respect human rights. Companies have legitimate concerns about the consequences of government requests, when they are asked to comply with laws that may make them complicit in the abuses that follow. But non-compliance makes their own staff vulnerable if they have a physical presence in the country, and it can place their business operations in jeopardy.

Digital Dangers are the warning signs that companies and governments must heed when governments interact with the ICT sector or use ICT products and services furnished by companies, to avoid human rights abuses. Governments must put in place vigorous measures to prevent the abuse of power. And companies must resist complying with orders that do not conform to judicial procedures. In order to develop a culture where human rights are protected in the digital realm and therefore in the real world, it is important for ICT companies and governments to be aware of these Digital Dangers, and take steps to mitigate adverse impacts.

The Digital Dangers described below offer a preliminary list of instances of company action that can lead to human rights consequences. The list will be refined and the warning signs clarified during the course of this project. They include (but are not limited to):

  • Disconnecting or Disrupting Network Access: In times of civil strife, crisis, or emergency, governments have sometimes required temporary suspension of, or disruption of, access to the Internet or mobile phone services, citing harms that might follow if a bomb is detonated by mobile phone, if there is civil unrest, or if unsubstantiated rumours alleged to incite violence are allowed to spread. As more and more people become connected, network disruption or disconnection will have a greater and increasingly dangerous impact on human rights; it is therefore critical that companies are prepared to respond to these orders in a way that does not leave them complicit in human rights violations.
  • Monitoring, Evaluating and Blocking User Content at the request of third parties (state or non-state actors): Governments often ask companies to monitor and evaluate user content to identify hate speech, the exchange of child pornography, or other activities that might incite crime, including violence. Sometimes governments may ask for direct access to such data. In some countries, non-state actors such as religious groups, political groups, or other dominant groups demand that specific content be blocked or barred. Companies need clear frameworks to determine the extent to which they can cooperate with such requests in a manner consistent with international human rights standards.
  • Selling dual-use technology when there is a high probability of its misuse: Technology often develops faster than regulators can react to its potential for negative impacts, and as a result export controls may lag behind technological developments. Companies building and operating networks must abide by legal requirements to provide systems allowing “lawful interception”. If technology developed to improve connectivity and security is misused, for example for censorship or privacy infringements, it poses a risk to human rights. Companies must conduct due diligence on sales to governments and third parties to minimise the risk of misuse that could have adverse consequences for human rights, so that the technology is used for its intended purpose: providing communications that help to realise many human rights.

    Companies can also modify, through software, the features and functionality of hardware products such as network infrastructure, so that they act in ways not originally intended, making it possible for the buyer to use the technology to harm human rights. Governments can bring this about by asking the company to modify the product, or by getting other companies to build additional features into it, enabling activities that are illegal and/or cause harm, such as arbitrary surveillance.
  • Complying with government orders to impose surveillance: Governments sometimes ask companies to enable real-time surveillance of individuals or groups, including providing the technology to intercept communications and/or to record them outside of the systems provided for lawful interception. Though often justified as necessary to prevent crime, in some countries governments use the technology to spy on people whose activities they oppose, even when those activities are otherwise legitimate and legal.
  • Monitoring user content under a company’s own policies: Most companies have terms of agreement with their users, and those terms often contain clauses that restrict the enjoyment of human rights. Companies’ “community standards” are often poorly defined and left to the interpretation of company officials, who are not necessarily experts on human rights. This can lead to companies acceding to demands from non-state actors: in some countries, non-state actors such as armed groups, militant organisations, and religious groups demand that content that offends them be taken down. Doing so without a court order, or without a requirement from a legitimate authority, undermines the human rights of the content generators.
  • Handing over stored user content and data to the state: Governments have often made direct requests or passed orders, sometimes backed by courts, asking companies to hand over the user data (or metadata) of individuals or groups. There are allegations that governments have also sought access to such data directly or through specific tools. It is crucial that proper checks and balances are in place so that companies avoid being complicit in violations of privacy. Even if the government action is in good faith, companies risk losing the trust and the business of their customers if they are perceived to have acted beyond what the law requires.
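To make the content-monitoring danger above concrete, the fragment below is a deliberately naive sketch of the kind of automated keyword screen a platform might run before escalating messages for human review. The blocklist terms and function name are invented placeholders, not any company's actual policy; real moderation systems are far more complex and raise exactly the definitional and accountability questions discussed in this list:

```python
# Illustrative only: a naive keyword screen of the sort a platform
# might run before escalating messages for human review.
BLOCKLIST = {"incite", "attack"}  # placeholder terms, not a real policy

def flag_for_review(message: str) -> bool:
    """Return True if the message contains a blocklisted term."""
    # Lower-case and strip trailing punctuation so "Attack!" matches.
    words = {w.strip(".,!?").lower() for w in message.split()}
    return not BLOCKLIST.isdisjoint(words)
```

Note that the sketch only flags content for human judgement rather than blocking it automatically: even this trivial filter would flag "do not attack anyone", which illustrates why purely automated takedowns by officials who are not human rights experts can undermine the rights of content generators.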