November 22, 2025
As employers increase their reliance on AI and similar tools, it was inevitable that disputes over AI would find their way to the courts. AI-related claims are on the rise, and a recent case highlights the risks. In one of the first published decisions on the issue, the court found that Workday, the company whose AI-driven software employers used to screen applicants, was an agent of those employers and, as a result, may be subject to direct liability for employment discrimination.
The Underlying Facts
In Mobley v. Workday, Inc., a group of rejected applicants brought a class action, alleging that they had received hundreds of rejections after applying for jobs through Workday (a human resources management software platform), without so much as an interview. The applicants assert that Workday's AI-driven applicant filtering system unlawfully disqualified individuals over 40 from employment opportunities, creating a "disparate impact" on a protected group in violation of the Age Discrimination in Employment Act.
Workday opposed certification of the class on a variety of bases, including that the variations in the applicants’ qualifications and the types of jobs applied to made it impossible to consider the group similarly situated for purposes of certifying a class.
The Mobley Decision
Earlier in the case, the court denied Workday's motion to dismiss, finding that Workday acted as an agent of the employers that use its software and, as a result, could face direct liability for employment discrimination. The court also found that Mobley had alleged enough facts to bring a disparate impact claim, which can proceed even without evidence of intentional discrimination.
This isn't the end of the story; the case is only at the certification stage, and the court was clear that Workday can challenge certification of the class down the road, as more evidence is collected.
What this decision means for employers
So far, decisions on the use of AI in employment have favored plaintiffs. As more employers adopt AI in hiring and other employment processes, litigation over bias and discrimination is expected to grow, and these cases are a cautionary tale for employers. Using AI is tempting, but be mindful not to rely on it too heavily . . .
- Ask questions of the AI provider before implementing an AI system. For example, how are candidates screened and rejected? Does the providing entity audit the outcomes of its software to determine whether there are disparate impacts?
- Conduct your own audit of the results you are getting from the AI-generated system: is your company receiving applications from diverse populations? Has the applicant pool shifted (suggesting a disparate impact) since implementing an AI-supported system?
- Consult with outside counsel. Any time you implement a testing system, a recruiting system, or a screening process . . . it is wise to consult with your employment attorney about whether you inadvertently may be creating litigation risk for your company.
- Be sure to review the California Civil Rights Department's new regulations on the use of Automated Decision Systems (including AI) to ensure you are in compliance. Here is a link to our recent blog post summarizing the new regulations.
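For readers who want to operationalize the self-audit bullet above, one widely used starting point is the "four-fifths rule" from the EEOC's Uniform Guidelines on Employee Selection Procedures, which flags potential disparate impact when a protected group's selection rate falls below 80% of the highest group's rate. The sketch below is illustrative only, with hypothetical numbers and function names; it is a screening heuristic, not legal analysis, and results should be reviewed with counsel.

```python
# Illustrative four-fifths-rule audit of an AI screen's outcomes.
# The numbers and function names are hypothetical, not from the Mobley case.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who advanced past the screen."""
    return selected / applicants

def four_fifths_flags(rates: dict[str, float]) -> dict[str, bool]:
    """True for any group whose rate is under 80% of the highest group's rate."""
    top = max(rates.values())
    return {group: (rate / top) < 0.8 for group, rate in rates.items()}

# Hypothetical outcomes by age group after an AI-supported screen
rates = {
    "under_40": selection_rate(120, 400),    # 30% advanced
    "40_and_over": selection_rate(30, 300),  # 10% advanced
}
flags = four_fifths_flags(rates)
# 0.10 / 0.30 is about 0.33, well under 0.8, so the 40-and-over
# group is flagged for further review
```

Running a check like this before and after adopting an AI-supported system (the "has the applicant pool shifted?" question above) gives you a simple, documented baseline for spotting a shift that warrants closer scrutiny.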
AI is expanding quickly, as we all know. If we can help you navigate this minefield (and don't worry, we provide the guidance ourselves and don't delegate it to ChatGPT), let us know.