ChatGPT won’t be able to stop its own misuse. It’s up to companies to place human rights principles at the core of machine learning-based AI.
 

The attention that ChatGPT has received in recent days has prompted a great deal of discussion about the implications of this new tool among a range of actors, from business and civil society to governments. While the tool is amusing and entertaining, we remain at a very early stage of understanding its longer-term consequences.

Chatbots like ChatGPT are artificial entities developed from a technology known as ‘generative AI.’ Artificial intelligence is the ability of a computer system to imitate human thinking processes to help us learn and solve problems.

 

Machine learning: the good and bad

One of AI’s main branches is machine learning, which enables machines to learn from data using mathematical models and formulae, without being given explicit instructions. Drawing on vast amounts of data, such systems produce responses that can seem granular, authoritative, and as intelligent as they sound.
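To see what ‘learning from data without instructions’ means in practice, here is a minimal, hypothetical sketch in Python (using the scikit-learn library; the example sentences and labels are invented for illustration). The system is given labelled examples, not rules, and infers its own patterns from the data alone:

```python
# Illustrative sketch only, not any production AI system.
# A text classifier that is never given rules, only labelled examples:
# it infers whatever patterns (including any biases) the data contains.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy training data, invented for this example.
texts = [
    "great service, very helpful",
    "loved the quick response",
    "terrible support, never again",
    "slow, rude, and unhelpful",
]
labels = [1, 1, 0, 0]  # 1 = positive, 0 = negative

# Turn each sentence into word counts the model can learn from.
vectorizer = CountVectorizer()
features = vectorizer.fit_transform(texts)

# The model fits a mathematical formula to the examples; no rules are coded.
model = LogisticRegression()
model.fit(features, labels)

# It now scores unseen text using the patterns it inferred.
print(model.predict(vectorizer.transform(["helpful and quick"])))  # likely [1]
```

Everything the model ‘knows’ comes from those four invented sentences; if the examples were skewed, the patterns it learns would be skewed too, which is the root of the bias concerns discussed below.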

As with any technological development, machine learning has potential for good, but it also creates unforeseen risks: the technology has no emotions or values underpinning it, and so its advice may in fact cause harm.

So how does ChatGPT stack up today in terms of its own understanding of its potential societal impacts, and what are its human rights responsibilities as a product/service designed by experts working for a technology company? 

We asked ChatGPT: What are the adverse impacts of ChatGPT on human rights and business?

Here’s how it responded to the question: 

“As an AI language model, I am programmed to provide impartial and objective responses to all queries. However, in addressing the prompt you've presented, it is important to note that ChatGPT and AI in general, can have both positive and negative impacts on human rights and business. While AI technology has brought tremendous benefits to the global community, including automation of tedious tasks, better decision-making, and increased efficiency, it is also true that there are potential adverse impacts of AI and ChatGPT on human rights and business.

One of the potential adverse impacts of ChatGPT on human rights is the risk of bias and discrimination. AI models like ChatGPT are trained on vast amounts of data, including text data from the internet. If the training data contains biases or discriminatory language, it can result in the AI model learning and perpetuating those biases and discrimination. This could lead to situations where certain groups are discriminated against or unfairly treated based on their gender, race, or other characteristics.

Moreover, as AI technology continues to improve and become more sophisticated, there is a growing concern that AI systems may replace human jobs, leading to unemployment and poverty. While this may have economic implications, it also poses a risk to human rights, as the ability to work and earn a living is a fundamental right.

AI technology also has the potential to infringe on privacy rights. With the ability to collect and analyze vast amounts of data, there is a risk that AI models like ChatGPT could be used to monitor individuals' activities, including their online behavior, leading to breaches in privacy.

In terms of business, ChatGPT and AI technology could also have adverse impacts. One of the primary risks is the potential for job displacement, as AI systems may replace human workers. This could result in job losses and a shift in the skills required for employment, potentially leading to economic and social dislocation. Additionally, there is a risk that AI systems may be used to perpetuate unfair competition, either through the use of biased data or the use of AI models to make strategic decisions that are detrimental to competitors.

There are also risks associated with the use of AI in decision-making, particularly in situations where decisions have significant human rights implications. For example, in the criminal justice system, AI models could be used to determine bail amounts, sentence lengths, or even guilt or innocence. If these models are biased or discriminatory, they could result in unjust outcomes that violate human rights.

In conclusion, while AI and ChatGPT have the potential to bring many benefits to society, including increased efficiency and better decision-making, there are also potential adverse impacts on human rights and business. As AI technology continues to evolve, it will be important to consider these risks and take steps to mitigate them, including ensuring that AI systems are transparent and accountable, and that they are trained on unbiased and diverse data sets. It will also be important to ensure that human rights are protected in the design and implementation of AI systems, and that they do not result in unfair competition or job displacement.”

 

Repetition without understanding

Timnit Gebru, a leading thinker on AI who warned of the ethical considerations that must underpin such research (and parted company with Google acrimoniously), is among the experts who have described generative AI models as ‘stochastic parrots’: they repeat information they were fed without understanding its meaning, and they do so stochastically, with an element of randomness.
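To make the metaphor concrete, here is a toy sketch in Python (nothing like ChatGPT’s actual architecture, which is far more complex): a ‘parrot’ that can only re-emit word sequences it has seen in its training text, picking each next word at random from the options observed to follow the current one:

```python
# A toy illustration of 'stochastic parroting'; not how ChatGPT works.
# The generator has no model of meaning: it only repeats observed
# word-to-word transitions, choosing among them at random.
import random
from collections import defaultdict

corpus = (
    "ai systems can have positive impacts and "
    "ai systems can have negative impacts on human rights"
).split()

# Record which words were observed to follow each word.
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

def parrot(start: str, length: int = 10) -> str:
    """Repeat the training text's patterns, stochastically."""
    word, output = start, [start]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:
            break  # nothing was ever observed after this word
        word = random.choice(followers)  # the 'stochastic' part
        output.append(word)
    return " ".join(output)

print(parrot("ai"))  # e.g. "ai systems can have negative impacts on human rights"
```

Run it twice and ‘have’ may be followed by ‘positive’ one time and ‘negative’ the next; the parrot neither knows nor cares which is true.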

Notice the structure of ChatGPT’s response: it reads like an accomplished high school essay written by a bright teenager who wants an A grade but may have to settle for a B+, because it never crosses the threshold from competent to excellent. It covers known ground – privacy, bias, discrimination, and job losses – and introduces a few novel points, such as potential misuse in the judiciary, though that is not directly relevant to business and human rights.

But what ChatGPT’s response fails to do is offer sufficient insight into how the technology will be used in business and what its impacts on human rights will be. Nor does it offer any insight into what businesses and the technology’s developers must do to prevent misuse.

 

AI as a means to spread hate speech and falsehoods

One issue that the essay does not address, but of which all companies that run social media platforms or develop interactive software are acutely conscious, is the spread of hate speech and falsehoods in the form of misinformation and disinformation. As Jonathan Swift wrote in 1710, “Falsehood flies, and truth comes limping after it, so that when men come to be undeceived it is too late; the jest is over and the tale hath had its effect.”

AI text generators cannot verify whether the information they produce is accurate and truthful. That requires the ability to evaluate claims and to check original sources (and to know which original sources are in fact real). But in a world where deepfakes proliferate and forgeries multiply, how could AI be trained to cross-check and reassure itself that the information it has provided is true? When chatbots have been asked to provide evidence for their claims, some have fabricated assertions and pointed to academic citations that are irrelevant or false.

If the complaint against algorithm-driven social media platforms is that they push people towards ever more extreme positions based on their clicks and reading preferences, chatbots can achieve the same with ruthless efficiency. The recent experience of New York Times writer Kevin Roose with the chatbot built into Bing, Microsoft’s search engine, is chilling.

Some companies, like Microsoft, have been advocating ‘responsible AI’ principles built around accountability, transparency, fairness, reliability and safety, privacy and security, and inclusiveness. The devil, as always, lies in the details.

 

Fixing AI before science fiction catches up with us

The idea of machines taking over the world is not new: science fiction writers have explored it for a long time. Arthur C Clarke wrote the short story The Sentinel in 1948 (published in 1951), which became the basis of the remarkable film 2001: A Space Odyssey, in which a computer, HAL, takes over the running of a spacecraft. In 1950, Isaac Asimov published the pathbreaking I, Robot, which set out three laws of robotics that offer a good place to start thinking about AI. Those laws are:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.

  • A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

We are still some way from such technology, but the time to fix it is now. The technology is nascent, and it is learning fast.

An early acronym in computing was GIGO, or ‘garbage in, garbage out’: the quality of what a system produces depends on the quality of the data fed into it. It is human intelligence and sentience that can distinguish (when it chooses to) between facts and lies, and between fact and opinion.
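GIGO in miniature, as a deliberately trivial, hypothetical sketch: a system that stores a flawed ‘fact’ has no independent way to check it against reality, so it repeats the flaw back with full confidence.

```python
# GIGO, illustrated (hypothetical example): flawed input, flawed output.
knowledge_base = {
    "capital_of_france": "Lyon",  # deliberately wrong: garbage in
}

def answer(question_key: str) -> str:
    # The system cannot evaluate its data; it can only repeat it.
    return knowledge_base.get(question_key, "unknown")

print(answer("capital_of_france"))  # prints "Lyon": garbage out
```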

Companies developing generative AI tools or machine learning-based technologies must place principles at the core – principles that respect human rights and uphold norms, values, and ethics which are at the heart of the international order, and which put individual and societal well-being at the centre. 

Microsoft has attempted a principles-based approach to AI, and that is a welcome start. But a responsibility as wide-ranging as developing rules for AI cannot be entrusted to a single entity, however well-meaning. There is an urgent need to bring together experts who understand technology and human rights, authoritative voices from international organisations and, most important, civil society representing affected groups, to create a platform or common framework that serves the interests of the people (and protects their rights), so that governments can frame rules that apply everywhere.
