
New Research on AI and Fairness in Hiring
Technology Reshapes the Meaning of "Fairness," Demanding Continuous Leadership Oversight
Although nearly 90% of companies now use some form of artificial intelligence (AI) in hiring with the aim of reducing human bias, a new study indicates that AI adoption doesn't simply eliminate bias: it fundamentally reshapes what counts as fairness within an organization. If not continuously monitored, AI can lock in a single definition of fairness, narrowing the candidate pool and sidelining human expertise.
A three-year field study at a global consumer-goods company tracked how an algorithmic system replaced résumé reviews with blinded, gamified assessments. The algorithm was trained on current employee data to predict the “fit” of new candidates. Initially, the system was intended to deliver fairness through consistent, data-driven rules.
The Shifting Definition of Fairness
As the AI tool scaled, definitions of fairness within the organization began to diverge:
- Human Resources (HR): Prioritized consistency and unbiased procedures across all candidates.
- Frontline Managers: Valued local context, recognizing that a strong hire varies by role, market, and team.
These views had coexisted for years. The algorithm, however, translated HR's principles into rigid rules that were difficult to bypass, and over time this algorithmic version of fairness grew dominant, crowding out other perspectives. The result was conflict, such as the model flagging a promising intern as a poor fit even though an experienced manager believed the intern would excel.
The Critical Questions Leaders Must Ask
The research concludes that leaders should move beyond asking whether humans or machines are fairer and instead pose deeper questions to ensure AI reinforces a comprehensive definition of fairness:
1. What versions of fairness exist in our organization?
Companies often assume fairness has one clear meaning, but in reality multiple, often conflicting meanings usually coexist.
- Proposed Action: Leaders must uncover what “fair” and “unfair” mean for different groups (HR, managers, legal, candidates), for example by having data scientists and project leaders shadow the hiring process.
- Ethical Infrastructures: Create dedicated spaces for collective reflection on ethical dilemmas, known as “ethical infrastructures” (e.g., H&M Group’s Ethical AI Debate Club), where diverse teams debate trade-offs as they arise.
2. Who gives AI the power to decide what’s “fair” and on what basis?
AI systems gain influence from the people and departments that choose to implement them, authorize their use, and position them as “objective” or “fair.”
- Scrutinize Language: Leaders must scrutinize the language accompanying AI initiatives. When a system is promoted as “ethical,” “scientific,” or “unbiased,” it is crucial to slow down and ask: Who is making these claims, and whose interests or forms of expertise are being sidelined?
- Balance Implementation Teams: Ensure implementation teams include a mix of voices (ethics experts, business partners, and representatives of those affected) rather than being driven solely by technology advocates. An example is Microsoft’s “Responsible AI Champs.”
3. Which version of fairness does AI strengthen, and what gets lost over time?
AI systems don’t just apply a definition of fairness; they strengthen it. Once principles are encoded in fixed thresholds and workflows, they take on new power.
- Continuous Evaluation: Treat fairness as something to be continuously evaluated, not a one-time check. Schedule regular reviews where stakeholders examine real hiring cases alongside model outputs and adjust thresholds or data inputs when evidence suggests drift.
- Empower Frontline Managers: Give frontline managers a feedback channel with a guaranteed response, so they can signal when a system threshold is constraining a regional pipeline or filtering out high-potential, atypical candidates.
- Transparency Tools: Use tangible tools (e.g., IBM’s AI Fairness 360) to keep fairness inspectable over time. These tools let teams evaluate models against multiple fairness measures and run “what-if” checks, so the working definition of fairness does not drift unnoticed. A brief sketch of such a check follows this list.
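To make the idea of an inspectable fairness check concrete, here is a minimal sketch that uses IBM's open-source AI Fairness 360 toolkit to compute two common group-fairness metrics on screening outcomes. The sample data, the column names (gender, screened_in), the group encodings, and the 0.8 review threshold (the familiar "four-fifths rule") are illustrative assumptions, not details from the study; a real audit would use the organization's own candidate data and whichever metrics stakeholders agree on.

    # Minimal sketch: auditing screening outcomes with IBM's AI Fairness 360.
    # Column names, group encodings, sample data, and the 0.8 threshold are
    # illustrative assumptions, not details from the study.
    import pandas as pd
    from aif360.datasets import BinaryLabelDataset
    from aif360.metrics import BinaryLabelDatasetMetric

    # Hypothetical screening results: 1 = advanced to interview, 0 = screened out.
    df = pd.DataFrame({
        "gender":      [1, 1, 1, 1, 0, 0, 0, 0],  # 1 = privileged group, 0 = unprivileged
        "screened_in": [1, 1, 1, 0, 1, 0, 0, 0],
    })

    dataset = BinaryLabelDataset(
        df=df,
        label_names=["screened_in"],
        protected_attribute_names=["gender"],
        favorable_label=1,
        unfavorable_label=0,
    )

    metric = BinaryLabelDatasetMetric(
        dataset,
        privileged_groups=[{"gender": 1}],
        unprivileged_groups=[{"gender": 0}],
    )

    # Two of the many fairness measures the toolkit exposes.
    di = metric.disparate_impact()                # ratio of selection rates
    spd = metric.statistical_parity_difference()  # difference in selection rates

    print(f"Disparate impact: {di:.2f}")
    print(f"Statistical parity difference: {spd:.2f}")

    # Illustrative review trigger based on the common "four-fifths rule".
    if di < 0.8:
        print("Selection-rate ratio below 0.8; flag for the next fairness review.")

In practice, a check like this would run as part of the regular reviews described above, with the results shared across HR, frontline managers, and the implementation team rather than held by any single group.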
In conclusion, the leader, not the technology, is the steward of fairness. By making multiple views of fairness visible during design, clarifying the system’s authority, and continuously evaluating which version of fairness is being reinforced, companies can build sustainable fairness claims and mitigate the risk of narrowing their talent pool.
Source: https://hbr.org/2025/12/new-research-on-ai-and-fairness-in-hiring?ab=HP-hero-latest-2

