The hidden disability bias in AI-powered recruitment

13 May 2026

Guest commentaries reflect the views of the author(s).


A lawsuit in California, Mobley v. Workday, Inc., alleges that Workday's AI-powered screening tools discriminate against job applicants based on disability, age, and race. The complaint claims that these screening tools, including automated assessments, unfairly filtered out disabled individuals with mental health conditions. Plaintiff Derek Mobley says he was rejected from over 100 jobs by AI tools that prioritise or reject applicants based on protected characteristics rather than qualifications. In May 2025, the case was allowed to proceed as a collective action, enabling applicants with disabilities (such as anxiety or depression) to opt in; it is still ongoing.

The case highlights the rise of AI-powered hiring tools and their potential negative consequences for people with disabilities, consequences that remain poorly understood, much less acted on.

The implications of these tools for job seekers with disabilities, health conditions, facial differences, and neurodivergences are numerous. For example, some tools detect facial expressions, while others make assessments based on speech. So just imagine: you lose your dream job because your stutter causes you to go fifteen seconds over the video interview time limit and the algorithm automatically discards you. Or your visual impairment makes eye contact tricky, but you can't request that the video assessment ignore your eyes. Or facial recognition technology can't identify your face because of a disfigurement.

Unless the unintended impacts of AI-powered technologies used in hiring and other human resources (HR) functions are urgently addressed, hundreds of millions worldwide face significant economic and societal harms. 

The emergence of AI recruitment tools

AI recruitment tools have become the first line of defence for recruiters trying to manage high volumes of online job applications. A recruiter's priority is to discard as many applicants as possible, as quickly and as cheaply as possible, narrowing the field to those who meet the criteria for human consideration. And an increasingly controversial multi-billion dollar industry stands ready to help.

The German public broadcaster BR undertook a well-documented experiment showing how a candidate's Behavioural Personality Profile, produced after a one-minute video interview, changed significantly depending on her appearance. The person who took part in the study lost 10 points just by putting on glasses; she gained 20 points by putting on a headscarf, which the company involved claims German recruiters find appealing.

What if that camera spotted a hearing aid or wheelchair, or a birthmark or arthritic hands? Would this highly dubious ‘science’ score such candidates as more or less agreeable, neurotic, or conscientious? No one knows - and that’s a problem.

Neither the HR tech creators nor the employers buying and using these tools understand disability rights 

HR tech creators often claim their products remove human bias through the design of standardised processes that treat everyone exactly the same. But standard employment processes are inevitably discriminatory – recruiters must make reasonable adjustments at every stage if they are to employ disabled people on an equal basis.

This is not just about the data, which is always ‘disability biased’. While biased data is deeply problematic, it is different from the concrete reality of behaviours and procedures that discriminate against people with disabilities, such as: refusing to adapt an automated process so that an autistic candidate can be accurately assessed; insisting that a graduate with sight loss look directly into a video interview camera she cannot see; or using surveillance tools to accuse an employee with ADHD of not working, just because they fidget in front of their computer screen.

Neither the recruiters buying these products nor their tech suppliers understand disability discrimination: neither party seems to know how to ensure that recruitment is both barrier-free for people with similar access needs and adaptable for individuals who require reasonable flexibility if they are to demonstrate their potential on an equal basis. All, it seems, without consulting colleagues responsible for human rights, sustainability and ESG performance.

Regulators are beginning to catch up

AI creators are not legally obliged to prove their products are ‘fair’ for disadvantaged job seekers, but thankfully the recent EU AI Act (Article 6, read with Annex III) classifies AI systems used in recruitment and other HR functions as ‘high risk’ – and not just for disabled people. Hopefully this provides new possibilities for raising the profile of the UN Convention on the Rights of Persons with Disabilities and the need for business to manage disability as both an economic and a human rights imperative.

As regulators move to put this issue more squarely on the agenda of companies in every sector, business leaders must demonstrate their own commitment to action. That includes building broader cross-business consensus that ‘equality’ in employment is not possible without due diligence to identify disability discrimination risks at every stage of the employment life cycle. 

All companies now need to take practical steps to ensure respect for the rights of people with disabilities, while enabling their contribution to business success. They should: 

  • Explicitly position ‘disability rights’ among core responsibilities and help the business remove disability-specific barriers as a commercial, sustainability and human rights imperative.
  • Communicate internally that disability rights due diligence, while mitigating growing legal and reputation risks worldwide, also enhances talent acquisition, productivity, employee engagement and employer brand. 
  • Include evidence of enabling disability rights as an employer and as a service provider in routine ESG reporting and share this with key stakeholders, including suppliers, regulators and civil society.
  • Work with colleagues in HR and Procurement to ensure that HR tech providers are required to prove that they have undertaken robust disability discrimination risk assessments.
  • Ensure that contracts with HR tech providers clarify which party will be held liable in the event that a candidate with a disability claims that the technology caused them to experience unfair discrimination.
  • Share lessons learned and emerging good practices with thought leaders and standard setters in relevant sectors.

Now is the time for Sustainability and HR teams to get ahead of regulation and avoid the legal and reputation risks that will no doubt arise if AI continues to be used irresponsibly in hiring. Ensuring that HR technologies enable people with disabilities to contribute on an equal basis will inevitably help marginalised workers more generally to access decent work.

I am delighted to be contributing to IHRB’s Global Forum for Responsible Recruitment, taking place from 30 June to 1 July 2026, where we will have the opportunity to look at these issues in more detail, with particular reference to their impact on migrant workers.

Systems that work for users with disabilities work better for everyone.


Susan Scott-Parker OBE is the Founder of business disability international and Disability Ethical? AI, Strategic Advisor to the ILO Global Business & Disability Network, and an Advisory Board member of the Zero Project Equitable AI Alliance.