AI backlash is brewing - communities must have a real say over how it is developed and deployed

27 April 2026

In a growing number of countries around the world, communities are mobilising against the large-scale data centre developments needed to power artificial intelligence (AI) systems. At the same time, the restricted release of advanced frontier AI systems to select commercial actors is intensifying concerns about the concentration of power over potentially transformative technologies, with implications for safety, security and accountability.

What is emerging is a double legitimacy crisis in AI governance: a crisis of procedural legitimacy, where those affected by decisions lack meaningful influence over them, and a crisis of structural legitimacy, where control over fast-moving AI capabilities is concentrated in the hands of a few actors with limited oversight.   

The backlash against data centres and growing unease surrounding frontier AI systems are not separate debates. They are parallel manifestations of the same underlying governance failure - one playing out at the level of communities and infrastructure, the other at the level of nations and capabilities.

Consulted but not heard

As data centre development accelerates, communities facing higher utility costs, water stress, and environmental degradation are demanding greater involvement in decision-making. In Europe, the pushback has followed successful lobbying efforts by tech companies to include provisions in EU law that shield individual data centres' energy use and water consumption data from affected communities, researchers, and journalists. In the US, local tensions around similar issues have reached the national stage, with lawmakers proposing a federal moratorium on certain new data centre developments until environmental and community impacts can be assessed. 

What is striking about these developments is not only the intensity of the backlash, but how often communities report being consulted yet not heard. Even where residents raise concrete concerns, the persistent experience is that business-led engagement with affected stakeholders is extractive: information is gathered, but power is not shared.

These dynamics are reinforced by structural incentives. The scale and urgency of AI infrastructure development, combined with competitive pressures and regulatory uncertainty, encourage companies to prioritise speed and control over deliberation. Engagement becomes something to manage, rather than a mechanism to shape outcomes.

Other industries have learned, often belatedly, that excluding affected communities leads to conflict, delay, and long-term reputational damage. Recent research highlights the pressing need for clear benchmarks and strategic frameworks that make engagement more effective and accountable.

The parallels with earlier economic transformations are instructive. During the trade liberalisation of the 1990s, diffuse national gains often obscured concentrated local losses in the communities hit hardest by economic globalisation. The long-term social and economic fallout - from job losses to public health strain - eroded trust in governments and private actors alike. Today, a similar pattern is emerging: the aggregate benefits of AI development are masking localised harms and asymmetries in who bears the costs and who exercises control.

Who has the power - and the responsibility? 

At the frontier of AI, the legitimacy challenge takes a different but related form. Advanced AI systems are not just products; they are platforms of far-reaching societal consequence, with capabilities that - as leading experts warn - could reshape cyber-security, economic systems, and geopolitical stability. Yet control over these systems, and decisions about their deployment, remains concentrated within a small number of companies, and an even smaller number of people within them.

This raises a structural legitimacy problem: who has the authority to shape technologies with systemic, transnational consequences?

Addressing this challenge requires moving beyond the procedural language of “stakeholders” towards a normative focus on “rights-holders” - those whose lives and interests are materially affected by AI systems, often without their knowledge or consent. These include not only communities hosting physical infrastructure, but also workers across data supply chains, end users, and populations exposed to broader societal risks.

International frameworks offer partial but instructive guidance here. The United Nations Guiding Principles on Business and Human Rights, which mark their 15th anniversary in 2026, set out the corporate responsibility to conduct human rights due diligence - a standard increasingly applied to AI-related harms. The UNESCO Recommendation on the Ethics of Artificial Intelligence and the OECD AI Principles extend this further, addressing systemic risks and the need for inclusive and democratic governance.

Yet current corporate governance approaches struggle to meet this challenge. ESG (environmental, social and governance) and corporate responsibility frameworks tend to focus on what can be measured and standardised. But many of the most significant AI risks - from loss of agency to systemic bias, concentration of power, and geopolitical instability - are difficult to quantify, and fall outside existing accountability mechanisms.

From engagement to power-sharing

Bridging this gap requires rethinking engagement across two interconnected dimensions. 

At the local level, companies must move towards genuine rights-holder participation in decision-making. Tools such as Community Benefit Agreements (CBAs) offer one pathway, enabling enforceable commitments on issues like environmental protection, infrastructure investment, and local employment. CBAs are not a universal remedy. Their effectiveness depends heavily on community capacity and bargaining power, which are themselves unevenly distributed. But where conditions allow, they shift engagement from consultation towards negotiated, enforceable outcomes. In doing so, they can not only help improve social outcomes, but also strengthen risk management, reduce costly project delays or conflicts, and enhance project legitimacy.  

At the global level, a parallel shift is needed toward coordinated governance of frontier AI systems. The concentration of advanced capabilities in a handful of firms, combined with the transnational nature of associated risks, makes purely national or company-led approaches insufficient. More robust mechanisms are needed, including shared safety standards, pre-deployment risk disclosure, independent auditing, and forms of oversight that extend beyond voluntary self-regulation. 

Multilateral efforts on AI governance - including the AI Safety Summits and emerging UN processes - are gaining momentum. But they remain at an early stage, struggling to keep pace with the speed of technological change and to counterbalance the influence of powerful private actors, even as those most exposed to harm remain underrepresented. The first meeting of the UN Global Dialogue on AI Governance this July will bring together a wide range of perspectives, and will test how far leaders are prepared to move towards shared responsibility and greater accountability on specific challenges.

Making engagement work at every level

Local and global governance are not parallel tracks but mutually reinforcing ones: trust built through meaningful community participation strengthens the broader legitimacy that international standards require, while robust multilateral and multistakeholder frameworks set shared expectations that local engagement cannot establish alone.

As AI continues to evolve, engagement must evolve with it. That means moving toward approaches that are more inclusive and more responsive to legitimate claims. It means linking local experiences with global governance frameworks in ways that reflect the full scope of AI’s societal impact. In other words, it means ensuring that all those affected by AI have a real say over how it is developed and deployed.

The test is not whether companies conduct engagement, but whether that engagement shifts outcomes to address the concerns of communities and the wider public interest. On that measure, current practice - across the AI sector, its regulators, and the governments competing to attract it - falls short. Closing that gap is not just a reputational imperative for an industry whose social licence is still being negotiated; it is a condition of legitimate governance.