Is AI making it harder for workers to find jobs?
I am an assistant professor of management and organizations at Northwestern University’s Kellogg School of Management. My research investigates how AI, undergirded by algorithms, is impacting the nature of work and employment relationships in organizations and labor markets. Unfortunately, evidence suggests that for marginalized job seekers, finding a job in today’s economy is becoming even harder because of how organizations are leveraging AI in the hiring process.
The latest employment numbers released by the Bureau of Labor Statistics continue to flash a bright red alert. Many experts believe that the slowing demand for labor is a forewarning of more economic trouble ahead.
For job seekers, finding a job in a difficult economy is hard enough. For vulnerable job seekers, including historically marginalized groups such as women, immigrants, refugees, and people with disabilities, finding a job in today’s economy will be even harder because organizations continue to embrace artificial intelligence for hiring. Recent reports suggest that over 80% of employers use some form of automated tool in the hiring process.
How AI impacts the job search process
While organizations tout AI hiring systems as a way to expand the number of job applicants they can evaluate, this approach can be problematic at every step of the job search process.
When job seekers use online platforms to search for jobs, those platforms deploy AI systems to suggest openings that they predict will match the job seeker’s prior employment history.
For many vulnerable job seekers, this means AI systems can narrow their job opportunities, showing them openings with lower wages and less career growth potential. A worker with retail service experience, for example, will likely be shown jobs in retail service rather than jobs in industries that typically offer higher wages and greater career growth, such as healthcare and finance.
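To make this narrowing concrete, here is a deliberately crude sketch of a recommender that ranks openings purely by word overlap with a job seeker’s prior title. The job list and similarity rule are invented for illustration; commercial platforms use far more sophisticated models, but the underlying failure mode is the same.

```python
# Toy recommender (fabricated jobs, not any real platform's system) that
# ranks openings by how many of the title's words appear in the seeker's
# work history.

def similarity(history: str, job_title: str) -> float:
    """Fraction of the job title's words that appear in the work history."""
    history_words = set(history.lower().split())
    title_words = set(job_title.lower().split())
    return len(history_words & title_words) / len(title_words)

openings = [
    "retail sales associate",
    "retail store supervisor",
    "healthcare patient services coordinator",
    "finance operations analyst",
]

history = "retail sales associate"
ranked = sorted(openings, key=lambda job: similarity(history, job), reverse=True)
print(ranked)
# Retail roles rank first; the higher-wage healthcare and finance openings
# sink to the bottom because nothing in the history overlaps with them.
```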
For immigrants and refugees, AI systems struggle to understand how the qualifications and credentials they obtained in their native country relate to what organizations are looking for in their host country. As a result, despite having the requisite expertise, these workers may never show up as qualified matches on an organization’s hiring list.
For women, many studies and reports highlight how AI systems embed biases that are rampant in the hiring process. Even when organizations train AI systems to be “gender blind” by ignoring the name on a resume, they can still discriminate against women.
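One way to see why blinding a single field is not enough: a model trained on past screening decisions can latch onto any word that correlates with gender. The resume snippets and outcomes below are fabricated, and the model is a bare-bones stand-in for whatever a vendor actually ships; the point is only the mechanism.

```python
# Minimal sketch: no names appear in these (fabricated) resume snippets,
# yet a model trained on biased past decisions still learns a gender proxy.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "software engineer rugby club captain",          # advanced to interview
    "software engineer chess team lead",             # advanced to interview
    "software engineer womens soccer captain",       # rejected
    "software engineer womens debate society lead",  # rejected
]
advanced = [1, 1, 0, 0]  # biased historical outcomes the model learns from

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, advanced)

# "womens" picks up a negative weight, so future resumes containing it are
# penalized even though no name or gender field was ever present.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
print(weights["womens"])
```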
AI systems can even use something as seemingly innocuous as an address to discriminate against job seekers. We know ZIP codes strongly correlate with race, education, income, and opportunity. As a result, if an organization has predominantly hired candidates from certain ZIP codes, such as those surrounding elite universities or within large cities, AI systems can pick up on this correlation and use a candidate’s ZIP code to determine which openings are shown to job seekers and which job seekers are deemed a good fit for an organization’s opening.
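The same mechanism is easy to reproduce with addresses. In the stylized example below, with entirely made-up applicants and ZIP codes, the model’s only features are ZIP code and years of experience, yet a skewed hiring history makes ZIP code the dominant signal.

```python
# Sketch of proxy learning from fabricated data; the ZIP code is hand-encoded
# as one binary feature to keep the example small.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features: [lives_in_zip_60201, years_of_experience]
X = np.array([
    [1, 3], [1, 5], [1, 7],  # applicants from ZIP 60201
    [0, 3], [0, 5], [0, 7],  # applicants from ZIP 60623
])
hired = np.array([1, 1, 1, 0, 0, 0])  # past hires came entirely from 60201

model = LogisticRegression().fit(X, hired)

# Experience is identical across the two groups, so the model leans almost
# entirely on ZIP code; extra experience cannot offset the "wrong" address.
print(model.predict_proba([[1, 3]])[0, 1])  # 60201, 3 yrs -> higher score
print(model.predict_proba([[0, 9]])[0, 1])  # 60623, 9 yrs -> lower score
```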
Even if a job seeker can secure an interview, organizations are increasingly using AI tools to screen candidates’ video-recorded responses to interview questions. This approach is particularly disadvantageous to job seekers with disabilities. AI video screening tools, for example, often require candidates to respond to questions within a short, fixed period. Job seekers with a speech delay could be penalized by an AI system that deems their response incompetent simply because they could not complete it in time.
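A toy illustration of that failure mode, with an invented window length, rubric, and scoring rule: if the tool stops listening at a hard cutoff, two candidates giving the exact same answer at different speaking rates receive different scores.

```python
# Hypothetical timed scorer, not any vendor's product: words spoken after
# the cutoff are discarded before the answer is checked against a rubric.

RESPONSE_WINDOW_SECONDS = 90  # assumed fixed cutoff

def score_response(timed_words, required_points):
    """Score = fraction of rubric points mentioned before the cutoff."""
    heard = " ".join(w for w, t in timed_words if t <= RESPONSE_WINDOW_SECONDS)
    return sum(point in heard for point in required_points) / len(required_points)

answer = ["i", "led", "a", "team", "that", "shipped", "the", "project", "early"]
fast = [(w, i * 5) for i, w in enumerate(answer)]   # finishes in ~40 seconds
slow = [(w, i * 15) for i, w in enumerate(answer)]  # same words over ~2 minutes

rubric = ["team", "project"]
print(score_response(fast, rubric))  # 1.0 -- full credit
print(score_response(slow, rubric))  # 0.5 -- cut off before "project"
```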
Thus, rather than AI expanding an organization’s ability to evaluate more job candidates, I would argue that it is creating an invisible cage for job seekers. It is invisible because organizations use AI to embed the rules and criteria for success at an unprecedented speed and scale, without ever providing job seekers transparency into, or an explanation of, when and how these AI systems are being used. It is a cage because these AI systems control which opportunities job seekers can see, how they are ranked, and how they are evaluated before any human ever intervenes.
Potential solutions
While there are laws that prohibit discrimination in hiring, regulating how organizations use AI in the process will be difficult. “Failure-to-hire” cases are rarely pursued because it is challenging for an individual who was never granted an interview to pinpoint the specific policy or practice responsible for their rejection.
Rather than waiting for regulators and organizations to figure it out, job seekers could create their own platform that crowdsources information about how transparent organizations are regarding the AI systems they use in hiring.
Additionally, existing platforms many job seekers use to learn about organizations, such as Glassdoor and LinkedIn, should add criteria that let job seekers rate how transparent organizations are about their use of AI systems in the hiring process.
Vulnerable job seekers have arguably always encountered an invisible cage in the hiring process. Without any changes, AI may exacerbate the difficulties they have finding a job.
Hatim Rahman is an assistant professor of management and organizations at Northwestern University’s Kellogg School of Management. His research investigates how AI, undergirded by algorithms, is impacting the nature of work and employment relationships in organizations and labor markets.