4 Things to Do Before Using AI in the Hiring Process

Artificial intelligence tools have been praised for saving hiring managers valuable time and creating a diverse pool of applicants. But it’s important to address some big concerns before implementing AI in your HR department.

Conversations about artificial intelligence (AI) have been everywhere recently. But how does it impact companies and their human resources?

Sixty-two percent of Americans believe AI will have a major impact on workers, but only 28% believe it will impact them directly, according to the Pew Research Center. In reality, AI is already affecting employees: roughly 4,000 U.S. job cuts in May 2023 were attributed to AI, according to outplacement firm Challenger, Gray & Christmas.

While most of the recent conversation involves AI-generated content, other AI formats have been used in the workplace for a while.

AI in the Hiring Process

AI tools used in the hiring process have been praised for saving hiring managers valuable time and creating a diverse pool of applicants by removing bias from the initial review process. Nevertheless, concerns have been raised that there is an unintentional bias built into these tools.

Resume review tools can use predictive analysis to determine what candidate profile would be the best fit for an open position and then screen submitted electronic resumes to find the "best available" candidates. However, if a candidate uses words or phrases that do not match the AI tool's expectations, the candidate may receive a lower evaluation for reasons unrelated to their actual qualifications.

More concerning are tools that analyze an applicant's personality, knowledge and communication skills using recorded responses to interview questions and facial expressions.

These tools assess a candidate's fit for a job by matching them against a profile of the company's "ideal employee" based on appearance, communication skills, speech patterns, body language, and personality. However, some of these tools have been found to be biased, screening out candidates based on gender, race, ethnicity, or disability by assigning lower scores for factors that do not match the "ideal" parameters in the programming, such as facial structure, accent, hairstyle, or the wearing of glasses or head coverings.

Regulations on the use of these tools are already in place. On April 25, 2023, four federal agencies—the U.S. Equal Employment Opportunity Commission (EEOC), Department of Justice (DOJ), Consumer Financial Protection Bureau (CFPB), and Federal Trade Commission (FTC)—issued a joint statement addressing concerns about AI and its potential impacts.

The statement covered several topics, including defining AI, acknowledging its potential positive uses and negative impacts, highlighting potential areas for discrimination, and affirming each agency's commitment "…to monitor the development and usage of automated systems and promote responsible innovation…" as well as its "…pledge to vigorously use our collective authorities to protect individuals' rights regardless of whether legal violations occur through traditional means or advanced technologies."

Illinois, Maryland, and New York City have already passed laws regulating the use of "automated employment decision tools" in the hiring process, and many other states and cities are considering similar laws.

AI-Generated Content

Most of the latest news is around chatbots and the AI-generated content they produce. Chatbots, such as OpenAI's ChatGPT, Microsoft's Bing, and Google's Bard, are available to the public by simply setting up an account.

In the HR department, chatbots can be used to research topics and generate content such as policies, procedures, emails, letters and disciplinary notices. On the positive side, AI can help HR craft communications that address legal requirements, handle uncomfortable topics, and reach general audiences effectively.

However, AI has also been shown to generate content that lacks empathy, is non-specific, disregards the privacy of others, does not offer face-to-face interactions, or contradicts itself. Asking the same question in different ways can give different results, which could complicate or confuse the issue more.

Beyond these concerns are the inherent limitations of chatbots themselves. They are built on large language models (LLMs), which are trained on vast amounts of available data, so the results are only as good as the data they reference—and that data is not always valid or accurate. For example, Wikipedia is an often-used resource, but because it relies on user-generated content, studies have found it to be only about 80% accurate.

In some cases, chatbots have also fabricated their own reference material, citing sources that are inaccurate or entirely fictional, to develop and support an answer.

To build their databases, chatbots may retain all entered information for future reference by any user. Since users must input specific information to get the best results, they may need to enter sensitive or confidential information or trade secrets, which are then added to the chatbot's database. Depending on the information entered and the prompts submitted by a future user, companies may find their confidential data available to anyone asking the right questions.

What to Do Before Using AI?

As tools develop and improve, AI will find a place in most workplaces. As you determine how AI will be allowed in your workplace, consider taking the following actions:

1) Do your research into AI. Understand what defines AI, as well as the advantages and drawbacks of each tool. Consider reading resources to educate yourself as much as possible on AI, such as the New York Times' chatbot primer and its prompts for more effective chatbot results, or Conductor's article on using a chatbot.

2) Research your AI tools. Learn how AI is incorporated into tools you use now or may rely on in the future. If you choose to use AI tools, be sure to understand their validity and limitations. For example, if you are going to use virtual analysis of recorded interviews, understand the science behind it, including whether the tool has been properly tested to remove implicit biases.

3) Establish policies and procedures on AI use. Draft a policy to outline when and how AI can and cannot be used. Include clear statements prohibiting discrimination and the disclosure of confidential information. While the policy can be general enough to cover any AI, develop exact procedures and expectations as you adopt specific AI tools.

4) Train employees and managers. As you expand the use of AI tools in your company, train your employees and managers on when and how to use them properly and legally. Instruct users on what is and is not allowed, as well as expectations such as reviewing and fact-checking all AI-generated content before releasing it and personalizing any letter to an employee or customer.

Paige McAllister is vice president, HR compliance, Affinity HR Group Inc. Affinity HR is the endorsed HR partner of Big "I" Hires, the Independent Insurance Agents of Virginia, Big I New York, and Big I New Jersey.

Reach out to Affinity HR Group via email or 877-660-6400 if you have questions about implementing AI in your agency.