When AI chatbots become recruiters, frustration follows
Amanda Claypool applied for jobs at several restaurants in Asheville, North Carolina, but the process turned unpleasant: at each one she had to deal with an error-prone AI chatbot.
McDonald's chatbot, named Olivia, approved Claypool for a live interview with a real person but could not schedule it due to a technical error. Wendy's chatbot booked her an interview for a job that was not a fit. Hardee's system told her to come in for an interview with a manager who was on vacation.
"I showed up at Hardee's and they seemed surprised. The staff there didn't know what to do with me," Claypool recalls. She eventually found a job elsewhere, but says the hiring process has become far more complicated than it used to be.
Recruitment chatbots are increasingly used in industries such as healthcare, retail and restaurants to screen out unqualified applicants and schedule interviews with promising ones. McDonald's, Wendy's, CVS Health and Lowe's all use Olivia, a chatbot developed by Paradox, an Arizona-based AI startup valued at $1.5 billion.
Most recruitment chatbots are far less capable than modern chatbots like ChatGPT. They are mainly used to screen applications for high-volume positions such as cashiers, warehouse workers and customer service assistants.
An AI chatbot screening candidate profiles. Photo: Forbes
These chatbots are rudimentary, asking only basic questions, yet they frequently malfunction and have no human on hand to handle problems that arise. They accept only clear-cut responses, which means qualified candidates can be eliminated simply because their answers do not match what the chatbot's underlying language model expects.
Experts also warn of problems for people with disabilities, people who are not fluent in English, and older applicants. Aaron Konopasky, senior legal counsel at the US Equal Employment Opportunity Commission (EEOC), said chatbots like Olivia cannot offer alternative work arrangements for people with disabilities or health conditions.
"When talking to a real person, candidates always have the opportunity to suggest arrangements that meet their needs. If the chatbot is too rigid and a candidate needs an exception, they may lose the chance to apply for the job," he said.
Biases in AI training data can also produce discrimination when systems are deployed in practice. "If chatbots consider factors like response time, grammatical accuracy and the ability to use complex sentences, that's when to worry about bias. This is difficult to detect if companies aren't transparent about the reasons for rejecting candidates," said Pauline Kim, professor of labor law at Washington University in St. Louis.
Local governments in the US have introduced laws to monitor and control automation in hiring. In early July, New York City enacted a law requiring businesses that use resume scanners and interview chatbots to audit their tools for gender and racial discrimination. In 2020, Illinois passed a law requiring companies that use AI to analyze interview videos to obtain candidates' consent before recording.
Still, companies looking to cut recruitment costs continue to turn to AI. "Human resources is always one of the biggest costs for a business, and it doesn't generate revenue. Chatbots are the logical next step for reducing the load on recruiters," said Matthew Scherer, a policy counsel at the Center for Democracy and Technology in the US.
That is one of the stated goals of Sense HQ, a company whose AI chatbots help businesses like Dell and Sony screen thousands of job applications. "We developed chatbots to help recruiters reach more candidates than humans could alone. AI should not make hiring decisions on its own; that's when it becomes dangerous," said Sense HQ co-founder Alex Rosen.
RecruitBot applies machine learning to recruitment, mining a database of 600 million publicly posted job applications to help companies find candidates who resemble their current employees.
However, hiring too many people with similar characteristics carries its own risks, including discrimination. In 2018, Amazon had to scrap its AI resume-screening system after it was found to discriminate against women, because its training data consisted mostly of men's resumes.
Urmila Janardan, a policy analyst at the nonprofit Upturn, says some businesses use personality tests to weed out candidates, many of them entirely unrelated to the job. "You may be eliminated from the hiring round because of questions about personality and attitude," she warns.