Fierce debates on the impact of Artificial Intelligence (AI) on human rights were underway long before Generative AI (Gen AI) stormed the stage in 2023. Civil society groups regularly exposed racist predictive policing algorithms, welfare fraud detection systems targeting the poor and minorities, gig workers fired by algorithmic decisions, and the use of facial recognition at protests. When Gen AI tools captured public and media attention, they turbo-charged the AI debate and the sense of urgency for lawmakers to “do something”.

Today, many countries are facing the enormous and complex task of setting rules to ensure that AI development and use are safe, secure and trustworthy. Where are we now, and could existing standards for corporate conduct, like the United Nations Guiding Principles on Business and Human Rights (UNGPs), assist States in developing AI regulations?


What is the “best” way to regulate AI?

We are some time away from a unified approach to regulating AI for the protection of human rights. Intergovernmental organisations, regional groups and States are all passing laws or adopting guidance for public and private actors. It remains to be seen how these different legal frameworks, proposed technical standards and principles will interact with each other, and what will emerge as best practice.

The newly adopted AI Act from the European Union (EU) is considered the world’s first comprehensive legal framework for developing and deploying AI, and also its strictest.

The Act takes a risk-based approach to what kinds of AI may be developed and used, sorting systems into four risk categories — unacceptable, high, limited and minimal — and prescribing guidelines for each.

For example, an unacceptable system, which would be prohibited under the Act, would be emotion recognition in schools. A high-risk AI system would be one used to influence the outcome of elections and voter behaviour. Deepfakes are considered limited risk, somewhat contentiously. Most AI applications, such as spam filters, are considered minimal risk and remain unregulated.

The AI Act could become a benchmark for States wanting to pass their own AI regulations, much as the GDPR acted as a blueprint for data protection laws around the world. The Act requires EU member states to assign responsibility for enforcement to one of their own regulators, backed by stiff penalties: a maximum of 7% of annual turnover for prohibited uses of AI and 3% for other breaches. This has some regulators worried, particularly under-resourced data protection authorities who fear the burden will mainly fall to them.

The Council of Europe Committee on Artificial Intelligence recently finalised the text of a Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law, which also sets out a risk-based approach built on respect for human rights, as well as principles of design and use such as transparency and oversight; accountability and responsibility; equality and non-discrimination; privacy and personal data protection; safety, security and robustness; and safe innovation. As with the Convention on Cybercrime (the Budapest Convention), non-member States could be invited to accede, and the Convention could potentially form the basis of AI legislation beyond Europe’s borders. As part of ongoing co-operation commitments between the EU and the Council of Europe, recognising the impact of AI on human rights is affirmed as a common value, signalling support for each institution’s efforts to regulate AI.

While most States have not yet passed national AI legislation, several emerging approaches are worth highlighting:

Risk-based legislation: Following the model of the EU AI Act, similar laws are under consideration in Canada and Brazil.

Principles-based approach: The UK, for example, opted not to put forward specific AI legislation or create a new AI regulator, and instead proposed a principles-based approach enforced by existing regulators, e.g. in data protection, finance, competition and healthcare. Privacy and data protection regulators across the Asia-Pacific region, including in Australia, Singapore, South Korea, New Zealand and Hong Kong, have also assumed responsibility for shaping AI policy.

Voluntary/self-regulation: The US, for example, is taking an arguably even more hands-off approach, focusing on corporate self-regulation and promoting innovation, largely based on voluntary guidelines like the National Institute of Standards and Technology (NIST) AI Risk Management Framework. The Department of Homeland Security (DHS) recently announced the AI Safety and Security Board, featuring 16 CEOs among its 22 expert members, to advise on the deployment of AI in US critical infrastructure.

Many are closely watching developments in China, including if and when a draft AI law will be presented sometime this year. In the meantime, the Chinese government has adopted three regulatory measures covering mandatory registration systems for some algorithms, the labelling of deepfakes, and control over generative AI.

The Asia Society Policy Institute believes these regulations seek to balance innovation and control. For example, the 2023 "Interim Measures for the Management of Generative Artificial Intelligence Services" adopts a risk-based approach framed around risks to socialist values and the economy, indicating the government’s intention to maintain a hold over what AI can be developed and for what purpose.


How can the UNGPs inform AI regulation?

Amid this multiplicity of pending rules, companies have largely been left to their own devices to decide best practice among themselves, with varied results and inconsistent application.

Existing human rights standards and principles still apply to technical innovations and underpin many of the proposals and efforts outlined above. The UN Human Rights Office’s B-Tech project has already recommended, as part of a larger effort, that human rights due diligence (HRDD) as set out in the UNGPs should form the basis for risk management across the AI value chain. It goes without saying that HRDD should be undertaken before products and tools are released to the public. In practice, this could include involving impacted communities in design and development to enable broad and diverse input; creating safe spaces for whistleblowers within a company; and committing to algorithmic transparency by working with independent researchers and allowing access to training datasets and algorithmic audits.

What is unclear from the current crop of proposals is how people harmed by AI can seek redress. The third pillar of the UNGPs, on access to remedy, has been a struggle to realise in the tech sector, but it takes on a particular urgency with the application of AI. For example, victims of pornographic deepfakes must have avenues to identify the perpetrator and remove the images. As more decisions about people are made with AI, such as who is arrested, fired, hired, granted a visa, or qualifies for welfare benefits, people must have avenues to find out when and how AI has been used in decisions about them, and to contest the outcome when things go wrong.

Current routes to contest or seek redress for AI-related harms set a high bar for engagement; consider, for example, the “robo-firing” case against ride-sharing apps Uber and Ola brought under the GDPR, and the landmark case in the Netherlands contesting algorithmic welfare risk assessment, brought under human rights law.

To close this gap, as part of the State duty to protect, government authorities could look to updating consumer or product safety legislation to provide routes to redress, including liability, thereby enacting principles of responsibility and accountability as a complement to the data protection and technical requirements in specific AI legislation. Future-proofing any tech regulation must include effective grievance mechanisms, because identifying wrongdoing and providing effective remedies prevents the same mistakes being made over and over again. Ensuring a strong focus on this issue in developing new international efforts, such as the proposed UN Global Digital Compact initiative, may provide the catalyst for the provision of access to remedy in AI regulation.

When the UNGPs were first adopted in 2011, there was little appetite for regulating tech companies or the internet, and little talk of risk or liability. This is no longer the case with the sector, and certainly not with AI, where early conversations around legislation and regulation routinely include considerations of risk, liability and responsible development, and the vision to create AI that is safe, secure and trustworthy. These are encouraging early signs, but stronger efforts are still needed to ensure human rights-based approaches are applied to regulations at all levels, accounting for the full AI life cycle, from identifying risks to ensuring sufficient access to remedy.


Lucy Purdon is a freelance consultant and recently founded Courage Everywhere, a space focused on gender justice, policy & innovation to advance technology for the benefit of all women. Lucy has worked at the intersection of human rights and technology in senior and leadership roles for over a decade, including at Mozilla Foundation, Privacy International and IHRB.
