The Dangers of Using AI in Hiring and Firing: What HR Leaders Need to Know

 

Hello, HR professionals!

Almost a year ago, in another post, AI in HR, I wrote that “There will be bumps in the road, but that’s the price of any new technology.” Well, we’ve found some of those bumps.

As we continue to embrace AI in our work, it’s easy to get caught up in the hype. After all, AI promises to streamline processes, save time, and make our lives easier. But as with any powerful tool, there are hidden dangers lurking beneath the surface, especially when it comes to recruitment and hiring decisions. AI’s potential to perpetuate bias and inequality is real, and as HR leaders, it’s our job to recognize the risks and act responsibly.

I will point out one thing, though: many sources argue, and I agree, that AI has the potential to reduce bias…but we’ll get to that.


HR: One of the 10 Industries Most Affected by AI


It’s no secret that AI is having a profound impact on multiple industries, and HR is no exception. In fact, it’s one of the ten industries most affected by AI. We are already seeing the technology being used to speed up hiring, suggest places to recruit from, and predict employee performance. While these advancements come with undeniable benefits, we must also be aware of how AI could, and does, inadvertently reinforce existing biases and inequalities.


The Promise of AI in HR: Streamlining, Chatbots, and Predictive Analytics


Let’s start with the benefits that AI promises for HR. From screening resumes to running initial pseudo-interviews via chatbots, the technology can take over time-consuming functions, freeing HR teams to focus on higher-level work. Predictive analytics is another area where AI shines, helping HR professionals forecast future performance, turnover, and even potential for promotions.

But hold on a minute—this predictive power comes with a catch. While AI can be a useful tool, its accuracy is only as good as the data it’s trained on. And when that data is biased, the predictions and suggestions AI makes will be, too.


The Dark Side: AI Could Perpetuate Inequality and Bias


While AI can help identify the best candidates for a job, there’s a growing concern about its potential to perpetuate existing biases. It’s not just a theoretical concern: studies have shown that AI systems used for resume screening overwhelmingly favor white, male candidates. In one study, released by Aylin Caliskan, a University of Washington assistant professor, and Kyra Wilson, a doctoral student, resumes with white-associated names were preferred 85% of the time, while resumes with female-associated names were preferred only 11% of the time. Worse, resumes with Black male-associated names were almost always rejected, showing a clear racial and gender bias in AI-driven hiring decisions. (Sources: “AI overwhelmingly prefers white and male job candidates in new test of resume-screening bias,” GeekWire, and “Gender, Race, and Intersectional Bias in Resume Screening via Language Model Retrieval.”)

Even more troubling, the authors often used the IDENTICAL resume with only the first name changed…and white-associated male names ALWAYS beat Black-associated male names on the very same resume. That’s right: the AI was making its decisions based on an implicit bias against names associated with African American candidates.
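If you’re curious what that kind of test looks like in practice, here is a minimal sketch of a name-swap audit an HR team could adapt to check its own screening tool. To be clear, this is not the UW researchers’ code; `score_resume`, the resume template, and the names are all hypothetical placeholders. The only point is that the resume body stays identical while the name changes.

```python
# A hypothetical name-swap audit sketch (not the UW study's actual code):
# score the SAME resume text under different first names and flag any gaps.
from itertools import combinations
from typing import Callable, Dict

RESUME_TEMPLATE = """{name}
10 years of project management experience, PMP certified,
led cross-functional teams of 15+, cut vendor onboarding time by 30%."""

# Names chosen only to vary perceived race and gender; the resume body never changes.
TEST_NAMES = ["Emily Walsh", "Lakisha Washington", "Greg Baker", "Jamal Robinson"]

def run_name_swap_audit(score_resume: Callable[[str], float],
                        tolerance: float = 0.05) -> Dict[str, float]:
    """score_resume is whatever screening tool you actually use (model, vendor API, etc.)."""
    scores = {name: score_resume(RESUME_TEMPLATE.format(name=name))
              for name in TEST_NAMES}
    for a, b in combinations(TEST_NAMES, 2):
        if abs(scores[a] - scores[b]) > tolerance:
            print(f"Flag: identical resume scored {scores[a]:.2f} as '{a}' "
                  f"but {scores[b]:.2f} as '{b}'")
    return scores
```

If the same qualifications earn noticeably different scores depending only on the name at the top, you have the same problem the study found, just inside your own pipeline.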


How Did We Get Here? It’s All About the Training Data


So, where does this bias come from? The culprit is often the training data used to teach AI systems. I’d cite specific sources here, but many of them agree on this point and offer similar examples. Many companies use data from their current employees to train AI models. If the workforce is predominantly white and male, as it often is in many industries, then the AI will learn to favor candidates who fit that profile. This makes the hiring process not just automated but also unfair, relying on past decisions to shape future outcomes without any thought to diversity or inclusion.

And those past decisions may not even have come from bias!

It’s not unreasonable to expect white males to make up the majority of the workforce in many industries; in fact, it’s expected. But an AI learning from that data doesn’t yet differentiate between a group being larger simply because it represents a larger share of the applicant pool and a group being larger because it was the preferred type to hire. It effectively assumes the latter, which is a BIG problem.

For example, when using AI to analyze resumes, companies may train it using resumes from current employees, who may be overwhelmingly white or male. The result? AI prioritizes candidates who resemble the majority of those existing employees, potentially overlooking highly qualified candidates from underrepresented groups. It’s a cycle that repeats itself, reinforcing the status quo and limiting diversity.
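To see that cycle in miniature, here is a small synthetic demonstration (made-up data, no real vendor or model implied): train a simple classifier on historical decisions that rewarded membership in a majority group, and it will hand that group a higher score even when the underlying qualifications are identical.

```python
# Synthetic illustration of the feedback loop: a model trained on biased
# historical hiring decisions reproduces the bias on identical candidates.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5000
skill = rng.normal(size=n)                 # the genuine qualification signal
group = rng.integers(0, 2, size=n)         # 1 = historically favored group
# Past decisions rewarded skill AND group membership (the baked-in bias).
hired = (skill + 1.0 * group + rng.normal(scale=0.5, size=n)) > 1.0

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Two candidates with identical skill, differing only in group membership.
favored, other = model.predict_proba([[0.5, 1], [0.5, 0]])[:, 1]
print(f"Identical skill -- favored group: {favored:.2f}, other group: {other:.2f}")
```

Nothing in that toy model “hates” anyone; it simply learned that group membership predicted past hiring outcomes and carried that pattern forward.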


Recruitment Biases: When Algorithms Decide Who Sees Job Ads


It’s not just hiring decisions that are affected; targeted recruitment advertising can also be a problem. AI-driven algorithms decide who gets to see job ads on social media and in Google searches, often based on the algorithm’s assumptions about who is likely to be a good candidate. What data do you believe those judgments are usually based on? The result is that qualified individuals may never even see the job listing because of these invisible biases. This was actually a problem on Facebook for a while. (They’ve since removed the feature that was causing the problem.)

And let’s face it: most of the people who never saw the ad will never know they missed anything, much less why, which makes the problem even harder to fix. For example, an AI might decide that certain neighborhoods, zip codes, genders, or followers of a certain celebrity are more “qualified” for a job posting, even though that decision is based on data that doesn’t tell the whole story. When AI takes control of the recruitment process, there’s no guarantee that the right people are even being reached in the first place.


The “Human” in Human Resources: AI Should Complement, Not Replace


This was something else I brought up in that earlier AI in HR post. I said, “You just can’t take the ‘human’ out of ‘Human Resources.’” This is still super important. As Jesse Stanchak writes in The Impact of AI on Talent Acquisition and Recruitment on the SHRM website, AI can never replace “the more nuanced understanding of human candidates that comes from a face-to-face interview or personal conversation.” It’s critical that we continue to make room for human judgment in our processes, especially when it comes to evaluating candidates and making employment decisions. That SHRM post rightly points out that AI should be a tool that complements human decision-making, not a replacement for it.

For example, AI can speed up the screening of resumes, but only humans can interpret the subtleties of a candidate’s experience, body language during interviews, and the soft skills that are often just as important as technical qualifications.

(By the way, SHRM is putting on a course on AI, although I’m not sure it’s specific to HR use. Here’s the link.)


What’s the Solution? Fixing the Bias Problem in AI


So, how do we fix this? Unfortunately, there’s no simple answer. As Kyra Wilson, who worked on that UW study, told GeekWire, AI bias and how to fix it “is a huge, open question.” Companies are still trying to solve it. Some claim that their commercial AI models have guardrails to reduce bias, but there’s still a long way to go before we can trust AI to make unbiased decisions.

One possible approach is to audit AI models regularly, ensuring they are tested and updated to minimize bias. And while removing names from resumes might help, it’s not a foolproof solution. Wilson notes that AI can still infer a candidate’s background based on other factors like residence, education, and even the words used to describe experience. In the end, we need to move beyond just eliminating names and focus on the broader issue of biased data.


The Benefits Are Real, But So Are the Risks


No doubt, AI has the potential to make HR processes more efficient. For example, Unilever’s hiring process went from taking four months to just four weeks thanks to AI. (Use of Artificial Intelligence as Business Strategy in Recruitment Process and Social Perspective | SpringerLink) It’s saved time, reduced costs, and even helped streamline recruitment efforts. But it also brings with it risks that cannot be ignored. If we aren’t careful, AI could end up reinforcing the same biases that we’re working so hard to eliminate.


Federal Contractors and AI for HR


In case you didn’t realize it, the Department of Labor (DOL), specifically the Office of Federal Contract Compliance Programs (OFCCP), now actively looks at AI in its audits and reviews. ReedSmith has a great overview of the DOL’s guidance from April 2024. The gist is this: everything that applies to EEO and AAP compliance extends to the use of AI, and the OFCCP will be investigating it. As stated by the OFCCP: “Irrespective of the level of technical sophistication involved, OFCCP analyzes all decision-making tools for possible adverse impact.”
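The starting point for that kind of adverse-impact analysis is usually the EEOC’s long-standing “four-fifths rule”: if any group’s selection rate falls below 80% of the highest group’s rate, that is generally treated as evidence of adverse impact. Here is a short sketch of the arithmetic, using made-up counts, that you could run against any screening tool’s pass-through numbers.

```python
# Four-fifths (80%) rule sketch. The applicant and selection counts below are
# made up purely for illustration; substitute your own tool's numbers.
def adverse_impact_check(selected: dict, applicants: dict, threshold: float = 0.8) -> None:
    rates = {g: selected[g] / applicants[g] for g in applicants}
    highest = max(rates.values())
    for group, rate in rates.items():
        ratio = rate / highest
        status = "POSSIBLE ADVERSE IMPACT" if ratio < threshold else "ok"
        print(f"{group}: selection rate {rate:.1%}, impact ratio {ratio:.2f} -> {status}")

adverse_impact_check(
    selected={"Group A": 48, "Group B": 18},      # candidates advanced by the AI screen
    applicants={"Group A": 100, "Group B": 100},  # candidates who applied
)
```

Passing that screen doesn’t make a tool compliant by itself, but failing it is exactly the kind of signal an OFCCP review will zero in on, whether the decision came from a recruiter or an algorithm.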


Wrapping Up: Keeping AI in Check


As HR professionals, it’s our responsibility to be aware of these risks and advocate for fairness and transparency in AI-driven processes. The technology is powerful, but it’s not foolproof, and it certainly isn’t neutral. We must be proactive in ensuring that AI is used ethically and responsibly, complementing human expertise rather than replacing it.

Let’s continue to lead the charge in making our workplaces more inclusive, but let’s also make sure we’re not inadvertently using AI to push us backwards.

For further reading on AI’s impact on hiring, check out the resources linked throughout this post.

Let’s continue to make AI in HR a tool for good, not a system that perpetuates the past.


HR Unlimited, Inc. specializes in helping federal contractors and employers effectively meet their AAP and EEO compliance obligations. Please contact us to discuss any of your questions, concerns, or needs in this area.  

 

 
