India AI Summit: the missing human rights dimensions
23 February 2026 | 6 minute read
Last week, global leaders, developers and engineers, academics, NGOs, and tech billionaires gathered at the India AI Impact Summit, the fourth in a series and the first to be hosted in the Global South.
Artificial Intelligence, or AI, has become a major pressure point for responsible business. Governance is nascent and fragmented, and notable efforts like the EU AI Act are being delayed and weakened under pressure from Big Tech. In the absence of regulation, greater responsibility sits with corporations to develop and deploy AI with robust, ethical guardrails in place.
The Summit’s agenda signalled that AI policy and deployment are no longer peripheral discussions. The shift from experimentation to large-scale AI implementation across governance, education, healthcare, agriculture, and digital public infrastructure was a focal point for discussion, reflecting a serious commitment to embedding AI within India’s national development priorities. India is positioning its approach carefully, describing it as one focusing on collective benefit (although the private sector, including private academia, is playing a huge part).
With the Summit now ended, the question to ask is not what was presented, but what was missing. The agenda demonstrated energy and scale, yet it stopped short of interrogating AI as a system of power that redistributes risks and benefits across society, often unfairly. Indeed, AI poses real and present dangers to human rights, and its impacts are already being felt by workers and communities, as IHRB’s latest Top 10 Business and Human Rights Issues for 2026 highlighted.
Issues such as labour displacement in formal and informal economies, algorithmic bias and exclusion from welfare systems, poor working conditions in data supply chains, and the health and environmental impacts of data centres were either marginal or absent from the 500+ sessions hosted throughout the week.
Even where “responsible AI” was discussed, the framing remained general and aspirational, restating loosely defined standards, without translating into concrete obligations around mandatory human rights due diligence, impact assessment, mitigation, remedy, corporate accountability, and independent oversight. The new OECD Due Diligence Guidance for Responsible AI, issued during the Summit, will be a welcome practical tool for this, particularly if it is continually updated with real examples.
Understanding the most pressing human rights risks is the first step in minimising AI harms. A growing number of such challenges are highlighted below, all of which should be at the centre of future discussions on AI, including this summer as the UN hosts its first Global Dialogue on AI Governance in Geneva.
Poor conditions for ‘ghost workers’ in AI training and moderation
The human – and often hidden – workforce (data labellers, moderators, annotators) behind AI models and other tech-based products can face poor working conditions, low pay, and little legal protection. As research has documented, these workers are often subcontracted by multinationals through third-party outsourcing companies with offices in low-income countries, adding additional layers of complexity in terms of traceability and accountability. Companies that deploy this technology must broaden their human rights due diligence to their AI supply chains to root out abuse happening on the ‘digital factory floor’.
Discrimination in hiring and workplace AI
Marginalised job seekers now face new risks from AI-powered hiring tools (automated employment decision tools, or AEDTs), which are embedding bias and inequality deep into recruitment systems. Evidence is mounting that AI hiring systems are discriminating on the basis of gender, disability, race, and ethnicity, and doing so largely undetected. These systems penalise non-native speakers, people with atypical facial expressions, and those with career interruptions, often women who have taken time out for caring duties. Automated employment decision tools need further interrogation, for example by carrying out bias audits of recruitment software.
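The core of such a bias audit can be surprisingly simple. As a minimal sketch, using fabricated screening outcomes rather than real data, the following computes per-group selection rates and their impact ratios (each group’s rate divided by the highest group’s rate), flagging any group that falls below the four-fifths (0.8) threshold commonly used in adverse-impact analysis and in rules such as New York City’s Local Law 144 on AEDTs. The group names and sample figures here are purely illustrative:

```python
# Hypothetical bias audit of an automated employment decision tool (AEDT).
# Computes per-group selection rates and impact ratios, flagging groups
# below the common four-fifths (0.8) adverse-impact threshold.

from collections import defaultdict

def impact_ratios(decisions):
    """decisions: list of (group, selected) pairs from the tool's output."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    # Selection rate per group, then each rate relative to the best rate.
    rates = {g: selected[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Illustrative, fabricated outcomes: 60% of group_a screened in vs 30% of group_b.
sample = (
    [("group_a", True)] * 60 + [("group_a", False)] * 40
    + [("group_b", True)] * 30 + [("group_b", False)] * 70
)
ratios = impact_ratios(sample)
flagged = [g for g, r in ratios.items() if r < 0.8]
print(ratios)   # group_b's impact ratio is 0.5
print(flagged)  # ['group_b'] falls below the four-fifths threshold
```

A real audit would of course go much further, covering intersectional categories, statistical significance, and the tool’s full decision pipeline, but even this basic check makes disparities visible that would otherwise remain undetected.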
Uneven labour protections and opportunities for workers
The concentration of AI/technological development in wealthy countries has raised concerns of digital colonialism, where access to the benefits of AI is concentrated in the Global North, reinforcing global inequality and widening the digital divide. On the other hand, transnational deployment of AI and its infrastructure also raises questions of jurisdiction, regulatory arbitrage, and uneven rights protection. Without harmonised safeguards, companies may operate in lower-regulation contexts, externalising risks onto vulnerable populations.
Meanwhile, conversations about job losses and reskilling due to automation have focused on so-called white-collar professions. Less attention has been paid to low-income, blue-collar workers or migrant workers, such as clerical staff, call centre employees, or transport workers, not to mention informal workers who are the backbone of labour markets in developing countries, and who have very little access to formal reskilling opportunities.
AI may also deepen precarity among gig workers on digital platforms, whose ‘gigs’ are set by algorithms: the same algorithmic boss that surveils them and may suddenly fire, or ‘deactivate’, them. Algorithmic management systems within platforms, such as rating systems and dynamic pricing, have significant labour rights implications that are yet to be fully grasped. In response, human rights advocates have demanded stronger regulation, including a ban on algorithmic management practices that pose unacceptable risks to workers’ rights, such as exploitative wage-setting algorithms and unsafe work.
Extractive data reinforces bias
AI models depend on large pools of information, but extractive approaches to data gathering risk ‘stealing’ intellectual property from grassroots organisations, Indigenous peoples, and communities. At the same time, Large Language Models (LLMs) are good at absorbing data that is available online and reproducing dominant narratives, meaning minority languages, Indigenous cultures, and other marginalised groups risk being further underrepresented in society.
LLMs create a body of global memory based on existing digitised resources, increasing the risk of perpetuating dominant narratives and knowledge at the cost of marginalised interpretations and alternative approaches. In practice this might mean algorithms excluding marginalised groups from welfare schemes, misreading them, or failing to capture their views, distorting stakeholder analysis and misrepresenting them in publicly held data. More attention must be paid to the intellectual property rights of Indigenous communities, consent frameworks, and benefit-sharing mechanisms.
Accountable AI infrastructure
Governments and companies are investing heavily in data centre infrastructure to support AI’s computational demands. Some 40% of data centres are concentrated in the US, but this infrastructure is spreading around the world, with major Indian conglomerates investing billions, and multinationals committing to invest similar amounts in India, the Middle East, and elsewhere. So far, limited attention has been paid to managing the social, environmental, and governance risks associated with these ‘engines’ of AI. While water use in data centres and sustainable AI engineering are widely discussed, AI’s material footprint needs greater attention: the impacts of land acquisition, water stress in suburban areas, and electronic waste are disproportionately borne by vulnerable communities and populations. Moreover, as reports show, countries including India are extending the use of coal-based plants to power data centres, with lasting adverse impacts on the climate. Facilities are in some cases sited in regions with limited public consultation and uneven governance capacity. This rapid infrastructure expansion raises urgent questions about equity, environmental sustainability, and human rights: what mechanisms exist to assess and mitigate these risks, and who is responsible?
Using AI to identify supply chain risks
AI tools can help companies integrate human rights risk assessments, auditing, and remedy mechanisms across their procurement and deployment chains. But there is a risk that automated systems will simply offload high-risk suppliers rather than engaging them and using leverage to improve their practices. Moreover, flagging harm is useful only if it leads to action, backed by grievance mechanisms and remedies that involve the participation of those affected; such mechanisms are often underinvested in, or absent from, digital approaches.
Military AI and dual-use products
A number of sessions at the Summit focused on AI and defence, with emphasis on drones and other dual-use systems that can serve as weapons or be deployed during armed conflict. However, there was no reference to the emerging body of knowledge on applying international standards to ensure that modern warfare is consistent with international humanitarian law. Calls are growing for responsible guidelines on autonomous weapons systems, and even a binding treaty, to ensure compliance with international human rights and humanitarian law. Some 51 countries have endorsed the Political Declaration on Responsible Military Use of AI and Autonomy, calling for norms around lethal AI systems.
Embedding rights-based thinking in future forums
While India’s edition of the AI Summit series was successful in drawing attention to the technology’s potential in national development and global governance, it missed several crucial areas, particularly in addressing the real-world implications of AI on vulnerable populations such as migrant workers, low-income labourers, and marginalised communities. The event’s focus on economic growth and infrastructure development meant minimal attention was paid to human rights, social justice, and worker protection in AI’s deployment.
If AI is to serve society equitably, future engagements must integrate growth narratives with rights-based safeguards, including existing internationally agreed standards like the United Nations Guiding Principles on Business and Human Rights. Corporate technological leadership without accountability risks deepening inequality rather than addressing it. IHRB welcomes the Swiss Government’s decision to host the next AI Summit in Geneva in 2027, as it will offer the opportunity to align the latest plans for AI development and deployment with initiatives such as the UN OHCHR’s B-Tech project, which can bring human rights, labour rights, and humanitarian expertise into the debate.