It is the cardinal rule of the Internet that anything you say in cyberspace lives in perpetuity, and there is no such thing as complete privacy.

Users can deploy cryptographic tools to protect their communications, change their passwords regularly, and interact only with a trusted few – but the digital footprint we leave behind enables any organisation with resources at hand to build a fairly detailed picture of who we are, and then to sell us products and services we may – or may not – need. Or, as the Cambridge Analytica/Facebook case makes clear, to play on our hopes and fears to influence our voting choices.


The hammering Facebook has received from the market this week shows that investors have taken a dim view of the company’s activities and of its explanations for its involvement in the unfolding case concerning Cambridge Analytica’s role in the 2016 US Presidential election. Investors see plenty of legal trouble ahead for Facebook and other companies in similar businesses.

Facebook’s response appears to have been remarkably tepid when it learned that researchers, who claimed to be doing academic work involving data on 50 million of its users, had an arrangement to share that data with Cambridge Analytica. The firm is no ordinary consultancy, and Facebook’s business relationship with it seems to have cried out for greater checks and balances.

How Did We Get Here?

Cambridge Analytica has played major strategic, tactical, and operational roles in elections around the world, most notably in Donald Trump’s 2016 US Presidential campaign.

Its Chief Executive has given many presentations about how the firm helped the campaign identify themes to highlight and specific voters to target. A major Trump campaign donor, the Mercer family, held a financial stake in Cambridge Analytica through shell companies. Its vice-president was Steve Bannon, who would later become chief strategist at the White House during the first seven months of the Trump administration. Furthermore, the firm actively sought business with Russian companies – at a time when the cloud of Russian involvement in the 2016 US Presidential election hangs so thick it can be seen from outer space.


And yet, no alarm bells appear to have sounded at the social media giant.

Facebook does not appear to have informed the affected users of the data breach. All it asked the researchers and Cambridge Analytica to do was delete the information they still possessed, seemingly believing that they had complied. It took further action only after receiving reports that Cambridge Analytica had not deleted all the information, and it decided to suspend Cambridge Analytica from the network only last week.

Facebook’s prime concern appears to be that its ‘platform policies’ have been violated.

Facebook’s real problem begins there: it seems to think the only problem is the breach of its own policies. The exceptional delay in acting on the information – nearly two years – shows that Facebook knew, or should have known, what the consequences of the breach could be. The lackadaisical manner in which Facebook appears to have treated the matter indicates a failure to undertake effective due diligence.

The road ahead for the company will be rocky.

The UK Information Commissioner has today sought a court warrant to enter Cambridge Analytica’s premises and has asked Facebook to halt its own audit of Cambridge Analytica, so as not to prejudice her investigation. In the US, Senator Ron Wyden has put pointed questions to Facebook: what it knew in 2015; what it did after that; how many similar breaches have occurred or could have occurred; whether it has informed the nearly 50 million people whose privacy was breached; if not, why not, in this instance or in other cases; and whether its actions are consistent with relevant privacy laws.

(In 2011, Facebook entered into a consent agreement with the Federal Trade Commission which required the company to maintain a comprehensive privacy programme to address privacy risks and protect confidentiality. The Senator has asked why, only three years later, such a massive breach could occur. He has also sought the biennial privacy assessment reports from independent third-party professionals that Facebook is required to obtain and provide to the FTC.)

What’s at Stake

To be sure, conducting opposition research, tailoring messages that appeal to the electorate – messages that play on their hopes and fears – and targeting voters by segmenting them into different groups is neither illegal nor necessarily wrong. Marketing companies do it all the time when they attempt to sway consumers towards one brand over another. But advertising standards exist, with rules against certain practices – in particular subliminal advertising, which aims to manipulate the viewer’s thinking.

It is no longer an academic question whether Cambridge Analytica was ‘persuading’ voters or ‘manipulating’ them.

The documentary on Channel 4 clearly shows the firm’s executives boasting about the abilities and resources the company can deploy to affect electoral outcomes. Cambridge Analytica disputes this, saying its executives were trying to get a better sense of the prospective client’s true intentions and had significant concerns following the secretly filmed conversations.


Whether they did it or not is beside the point.

At its simplest, Cambridge Analytica – a firm with an exceptional amount to answer for over its seemingly unethical and legally questionable tactics aimed at manipulating voters’ decisions – used information that researchers obtained from Facebook ostensibly for academic purposes. And Facebook appears to have been lax in exercising any control over how the vast amount of data in its custody, belonging to millions of users, is used.

The impact of this data breach is colossal. It may have affected the outcome of a US Presidential election, and all that has followed. True, neither Cambridge Analytica nor Facebook ‘elected’ Donald Trump to the White House; but the sophisticated messaging may have contributed to influencing voters to stay at home, or to change their votes, in critical, marginal precincts, affecting the overall outcome. This is not to absolve the voters of their responsibility – they exercised their choice, and they have to live with the consequences.

What’s likely, however, is that both Cambridge Analytica and Facebook may face extensive litigation from people who believe their privacy has been compromised, and the compensation bill for that could potentially overwhelm even a tech giant like Facebook. The first such lawsuit is already underway: David Carroll, who teaches at the Parsons School of Design in New York, has sued Cambridge Analytica under British data protection law, asking the firm to tell him what it knows about him.

What’s to Come

The genie of social networks or political consultancy firms cannot be put back in the bottle.

The challenge for governments now is to enforce existing laws (and prosecute those who are in breach) and to enact new regulations to ensure that data is not misused and that such breaches do not occur again.

Governments and the industry have moved rapidly on data breaches in the past – as the cases of Equifax, LinkedIn, and Yahoo! have shown. Companies have paid stiff compensation and penalties to affected parties. That is the minimum expected in this instance.

A deeper understanding of, and commitment to implementing, the UN Guiding Principles on Business and Human Rights provides a logical framework for further discussion of these companies’ responsibilities. A robust due diligence process helps identify risks and would have provided warning signs.


The deed is now done.

Governments will have to investigate the procedural failures of the companies involved, as well as breaches of laws and regulations. That may include assessing Facebook’s power and reach, and whether existing monopoly laws can be used to curb that power. In many countries, Facebook has become the equivalent of the Internet – indeed, its controversial Free Basics programme was intended to do just that, by making it the sole gateway to the Internet, and the removal of net neutrality may only hasten such a process.

Inevitable regulatory changes will follow, with stricter rules about data gathering and data retention, and far greater clarity in the language used to explain consent. Perhaps the business model needs to be turned around: companies like Facebook should pay users for the data they hold if they are to monetise that data by offering it to advertisers. Users need to know that if they are not paying for a service or product, they are not the company’s customers; they – and their data – are the product being sold.

Righting the Ship

It is clear that more is required for Facebook to know and show that those given access to its users’ data will not go on to misuse it, and that the risks its business relationships pose to users’ rights are prevented or mitigated and, where breaches occur, remedied.


To concretely demonstrate that it respects human rights, Facebook should carefully consider these necessary steps:  

  • Make its privacy policies simple to understand.
  • Make more explicit the opt-out provisions that allow users to stop third parties from accessing their data.
  • Retain only the bare minimum data necessary to identify the person opening the account (to prevent impersonation and identity theft), and to facilitate financial transactions.
  • Give an undertaking that it will neither gather nor profile any other forms of data – such as an individual’s preferences, likes, dislikes, political opinions, religious beliefs, or sexual orientation. If any third party, for academic or commercial purposes, wishes to examine such data, Facebook must seek the consent of the users each time, before making the data available.
  • Establish procedures so that if a government agency wishes to examine user data on grounds of national security, its request must be strictly under the law, in writing, made by an appropriate authority, with court approval, and be time-bound and focused on the specific circumstances.
  • Develop technology to ensure that any third party drawing on the data can do so only for a limited time and for specific purposes. The company should be able to prevent the third party from duplicating the data, and should retain the ability to erase the data remotely once the contracted period is over. Monitoring embedded in the software should alert the company if the third party breaches any condition. (A minimal sketch of what such an access-control scheme might look like follows this list.)
  • Agree to provide for or participate in remedy, which may include an apology, restitution, rehabilitation, financial or non-financial compensation, and punitive sanctions, as appropriate.
  • Provide an undertaking that it has taken steps to prevent harm by guaranteeing non-repetition.
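
To make the data-access step above concrete, here is a minimal sketch, in Python, of how time-limited, purpose-bound data grants with remote revocation and breach alerting might work. Everything in it is hypothetical – the names (DataGrant, GrantRegistry, and so on) are illustrative assumptions and do not reflect any actual Facebook API:

```python
import time
import uuid
from dataclasses import dataclass, field

@dataclass
class DataGrant:
    """A hypothetical time-limited, purpose-bound grant of access to user data."""
    third_party: str
    purpose: str
    expires_at: float  # Unix timestamp after which access is void
    grant_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    revoked: bool = False

class GrantRegistry:
    """Platform-side registry: issues grants, checks every access, logs violations."""

    def __init__(self):
        self._grants: dict[str, DataGrant] = {}
        self.violations: list[str] = []  # audit trail used to alert the company

    def issue(self, third_party: str, purpose: str, ttl_seconds: int) -> DataGrant:
        grant = DataGrant(third_party, purpose, time.time() + ttl_seconds)
        self._grants[grant.grant_id] = grant
        return grant

    def revoke(self, grant_id: str) -> None:
        # Remote-erasure analogue: the platform can invalidate access at any time.
        self._grants[grant_id].revoked = True

    def check_access(self, grant_id: str, purpose: str) -> bool:
        # Every read must pass through here; failures are logged as violations.
        grant = self._grants.get(grant_id)
        if grant is None or grant.revoked or time.time() > grant.expires_at:
            self.violations.append(f"expired/revoked access attempt: {grant_id}")
            return False
        if purpose != grant.purpose:  # purpose-bound: no silent repurposing
            self.violations.append(f"purpose mismatch on {grant_id}: {purpose}")
            return False
        return True

# Usage: issue a 30-day grant for one stated purpose, then enforce it.
registry = GrantRegistry()
grant = registry.issue("research-app", purpose="academic-study", ttl_seconds=30 * 86400)
assert registry.check_access(grant.grant_id, "academic-study")      # allowed
assert not registry.check_access(grant.grant_id, "ad-targeting")    # flagged
registry.revoke(grant.grant_id)
assert not registry.check_access(grant.grant_id, "academic-study")  # now void
```

Of course, a scheme like this only binds third parties who access data through the platform’s own interfaces; once raw data has been copied out, as appears to have happened in the Cambridge Analytica case, technical enforcement alone cannot recall it – which is why the contractual, audit, and remedy provisions above still matter.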

These are necessary steps, but they may still not be sufficient.

Technology moves rapidly, and developers are often several steps ahead of regulators. And yet these and other steps are essential to restore user trust in the tools and services on which people have come to rely for education, information, and entertainment. More important still, they are essential to respect and protect users’ right to express themselves freely, and to retain their dignity and privacy.


 

Photo by Isriya Paireepairit
