An Urgent Request: Add a Human Element to Your AI Recruiting

There’s no argument that using Artificial Intelligence (AI) for recruiting has increased the efficiency of the hiring process. For any company dealing with a high volume of resumes, it can dramatically lower costs. In fact, according to the U.S. Equal Employment Opportunity Commission, four out of five companies now use automation to make employment decisions.

Here’s where the wheels fall off the cart. Without proper human oversight, AI also has a costly, unintended impact: excellent candidates dropped from consideration because of AI bias.

Some AI tools and models currently being used by recruiters and hiring managers disadvantage women, non-white men, and those with non-traditional career paths or gaps in their experience. To anyone seeking diversity and a more innovative workforce, this spells bad news.

Can we reap the benefits of AI and avoid bias? Of course we can. I’m going to explain how. But first, I’m issuing an urgent request to all those in hiring roles: Make sure you have a human element involved in the decision-making of your AI recruiting process.

The Bias Problem in AI

The problem with AI is that it is built on historical data. That data can be flawed, skewed or inaccurate, so the model perpetuates those flaws, skews or inaccuracies.

Let’s say you analyze everyone at your company who succeeded at a specific role and feed that profile into AI to help filter resumes. Any hiring bias your company had in the past now continues.

There’s a great story in the book Range about West Point and its cadet scoring method. For years, the school combined standardized test scores, high school rank, physical fitness tests and demonstrated leadership to rank applicants. What that method ignored was the school’s steady dropout rate: high scores said nothing about who would stay. By using the same method and data to select high-ranking cadets, the school unwittingly drove the dropout rate higher.

Without human input, an AI tool could screen applicants against a profile that is too narrowly focused. That means you might miss out on some great candidates simply because they didn’t go to the same college or have the same job title as your top employees.

If you’re using an AI tool built by someone else, you may have no idea what data was used or how it arrived at certain conclusions. Imagine an applicant tracking system that prioritizes whichever candidates respond to an email first, or an algorithm that analyzes candidates’ facial expressions, body language and responses during interviews and ends up disqualifying anyone who wears glasses. Ask your vendor to share the steps they’ve taken to ensure their AI is continuously analyzed for bias.

Every hiring process deserves a gut check to ensure that candidates are being selected from a pool of people with diverse histories, backgrounds and skills.

I encourage companies to include the hiring manager in a discussion about the soft and hard skills each job requires so that the recruiter can better identify the variety of work experiences that could demonstrate those skills. The hiring manager also needs to consistently audit the submissions to ensure there’s variety.

One of the reasons I started my technology-driven recruitment platform was that in past roles, I felt like I was seeing the same resume over and over and over again. The resumes all blended together, too narrowly focused on one kind of candidate. If that’s happening to you, there’s a problem with your AI profile or persona.

The right mix of AI plus human interaction provides an agile method of recruiting. This method uses AI to speed up processes so more frequent feedback from a person or group can make each iteration better than the last. The human element can be used to adjust job descriptions or screening criteria.
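
To make that loop concrete, here is a minimal sketch in Python. Everything in it, from the candidate data to the rule the reviewer relaxes, is a hypothetical illustration rather than any specific product’s workflow.

```python
# Sketch of an agile, human-in-the-loop screening cycle. All names
# and data here (including the "relax MBA" rule) are hypothetical.

def ai_screen(pool, criteria):
    """Fast, high-volume pass: keep candidates with every required skill."""
    return [c for c in pool if criteria["required"] <= c["skills"]]

def human_review(shortlist):
    """A person audits each batch and flags criteria to loosen."""
    if len(shortlist) < 2:  # pool too small or too homogeneous
        return {"relax": "MBA"}
    return {}

def adjust_criteria(criteria, feedback):
    """Reviewer feedback updates the screening criteria between rounds."""
    if "relax" in feedback:
        criteria["required"] -= {feedback["relax"]}
    return criteria

pool = [
    {"name": "A", "skills": {"accounting", "MBA"}},
    {"name": "B", "skills": {"accounting"}},
]
criteria = {"required": {"accounting", "MBA"}}

for _ in range(2):  # each AI pass is followed by a human adjustment
    shortlist = ai_screen(pool, criteria)
    criteria = adjust_criteria(criteria, human_review(shortlist))

print([c["name"] for c in shortlist])  # ['A', 'B'] after the loosened pass
```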

Alternatively, hiring managers can create knockout criteria for the AI: rules that eliminate candidates who don’t meet the most important requirements while still letting through a broad variety of backgrounds and experience. Imagine the variety of applicants you might see if your only criterion were “we will only hire someone with a CPA” or “we will only hire someone with experience in public relations.”
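
As a rough illustration of the difference, here is a minimal Python sketch; the candidate data and field names are invented for the example, not drawn from any real screening tool.

```python
# Sketch contrasting a single "knockout" criterion with a narrow
# persona match. Candidate data and field names are hypothetical.

candidates = [
    {"name": "A", "cert": "CPA", "school": "State U", "title": "Analyst"},
    {"name": "B", "cert": "CPA", "school": "Ivy U",   "title": "Controller"},
    {"name": "C", "cert": None,  "school": "Ivy U",   "title": "Analyst"},
]

def knockout(pool, must_have_cert):
    """Eliminate only those missing the one non-negotiable requirement."""
    return [c for c in pool if c["cert"] == must_have_cert]

def persona_match(pool, school, title):
    """Mirror past hires on incidental traits: the narrow approach."""
    return [c for c in pool if c["school"] == school and c["title"] == title]

# "We will only hire someone with a CPA" still admits varied backgrounds:
print([c["name"] for c in knockout(candidates, "CPA")])  # ['A', 'B']

# Matching yesterday's top employee admits only their look-alikes:
print([c["name"] for c in persona_match(candidates, "Ivy U", "Controller")])  # ['B']
```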

A Different Approach to AI Bias

I mentioned earlier that even the best AI tools reject well-qualified candidates. Since hiring managers never see those resumes, they don’t know what they are missing.

There’s an emerging solution that combats this kind of bias. It presents each candidate as a whole person, not just the sum of their past work experience. It’s a new type of profile that allows users to select their top skills and subskills, indicating areas of strength and revealing personal interests and passions. This approach also eliminates certain language cues to present a bias-free picture of the candidate. At my company, we use this type of profile to help working mothers returning to work after an employment gap, and others with non-traditional careers, show up in AI-powered searches.
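
For illustration only, here is a minimal Python sketch of what such a skills-first profile could look like; the field names and matching rule are a hypothetical simplification, not an actual product schema.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a skills-first profile (not a real schema).
# Note what is deliberately absent: name, dates, employers and schools,
# the fields that most often leak bias cues into automated screening.

@dataclass
class SkillsProfile:
    top_skills: list[str]  # self-selected areas of strength
    subskills: dict[str, list[str]] = field(default_factory=dict)
    interests: list[str] = field(default_factory=list)  # personal passions

profile = SkillsProfile(
    top_skills=["financial reporting", "team leadership"],
    subskills={"financial reporting": ["GAAP", "audit prep"]},
    interests=["mentoring", "community theater"],
)

def matches(p: SkillsProfile, required: set[str]) -> bool:
    """Screen on demonstrated skills alone, ignoring career-path shape."""
    return required <= set(p.top_skills)

print(matches(profile, {"financial reporting"}))  # True, despite any gap
```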

While biased AI puts companies at risk, there are some legitimately good uses for AI technology in recruiting. We should think of AI as a way to automate the job tasks that fall to the least-trained people on our staff. Chatbots and virtual assistants improve the candidate experience by scheduling interviews, sending follow-ups and answering common questions. That, in turn, frees up recruiters to review resumes, check social media profiles or conduct interviews, putting the human element back into an efficient hiring process.

The original article can be found at: Recruiting Daily