The recruiting process is broken, but tech isn’t to blame. People are.
More often than I’d like this summer, I’d see an open job posting I knew I was qualified for, take the time to craft a personalized cover letter explaining why I was a good fit, and customize my résumé for the role. I’d apply . . . and get an automated rejection, often within minutes of submission.
My summer started with an unexpected layoff. I spent most of it applying and interviewing for new roles. I’ve also spent a ton of time on LinkedIn, where the consensus is that the recruiting process is broken. In many ways, I agree. But I disagree with one of the top diagnoses: that technology—AI recruiting tools and the automation provided by applicant tracking systems, for example—is what’s ruining the hiring process.
Tech can’t reject candidates without input from people
On a very basic level, technology—most likely an applicant tracking system, the hiring software that automates and optimizes recruiting—is technically the tool rejecting applications before a hiring manager can look at them. But technology hasn’t yet evolved to the point where it can make those decisions without human input.
I know this better than most because I worked for a recruiting platform for the past three years. My last company sold onboarding software and an applicant tracking system that utilized AI-assisted capabilities to help organizations streamline their hiring. I helped conceptualize the term “AI-assisted” as part of the marketing team responsible for the company’s messaging, attempting to reflect that AI doesn’t work without input from people and hoping to telegraph to savvy hiring teams to use technology strategically, not simply adopt it and hope for a miracle.
I know firsthand that, for example, applicant tracking systems can screen out and automatically reject résumés, but a team of robots doesn’t dream up the parameters for rejection. People decide what keywords a résumé must have or what missing qualifications warrant a rejection.
In my role, I repeatedly saw that people didn’t know how to use the technology properly or, even more commonly, had outdated hiring practices or definitions of “quality candidates” that resulted in poor, frustrating experiences for job candidates.
Human bias
While I can’t ascertain exactly why my applications were automatically rejected for jobs I knew I was highly qualified for, I can make some educated guesses. One might be that my résumé has a couple of month-long gaps, transparently reflecting other times I was between roles.
While résumé gaps are something many of us have, there are still people with very old-fashioned mindsets who view anyone who hasn’t worked consistently for their entire adult life as a red flag. Technology isn’t the one telling people that résumé gaps make a candidate less desirable. It’s people who ultimately make the call that those gaps are problematic, despite the fact that they’re often caused by forces outside our control, like layoffs (which happen to many people, regardless of performance) or life events (like a sick relative or a health problem).
People set recruiting standards
I’ll never forget the content marketing job posting that said, “While a college degree is required, we’ve found our best marketing writers hold MFAs.” First, requiring a college degree for many roles is an outdated gatekeeping tactic. Second, I’m writing for Fast Company just fine with a lowly bachelor’s degree. An MFA is a wild requirement for what was essentially a copywriting role, and while technology sent the rejection emails for that job, it was a person who made that wild decision.
A human recruiter—not some AI overlord—didn’t show up for our first screening call and was 40 minutes late for the rescheduled one. That recruiter then started the conversation by aggressively asking, “So, why exactly were you laid off?”
A marketing manager—not a robot—completely ghosted me after I took the time to write a sample blog based on things we discussed during what felt like a productive 40-minute interview call. Similarly, people are responsible for posting “ghost jobs” (ads for jobs that don’t exist), overly lengthy interview processes, and arbitrarily requiring only in-office work.
Conversely, people are the ones—with, sure, assistance from technology—who can implement tactics to improve the world of recruiting. It was people, for example, who decided to reach out to me when their company extended its candidate search for a role I’d applied to, explaining in a thoughtful email that applicants weren’t being ignored and inviting us to reach out with questions.
And a huge reason I accepted the role I’m slated to start shortly is that all the people I dealt with during the interview process—from the recruiter to future marketing colleagues—were thoughtful, kind, and professional. They showed that hiring isn’t broken everywhere, and technology had nothing to do with that.