The email arrived, a form rejection after weeks of anticipation. You re-read the qualifications, matching each one in your mind against your resume. How is it possible to be so perfectly qualified, yet so consistently denied?
Applying for a job is tough enough without wondering if an AI system is acting as gatekeeper, potentially blocking your application before a human even glances at it.
A new lawsuit aims to bring transparency to these AI hiring tools, arguing that automated applicant “scores” should be legally viewed as credit checks, subject to the same consumer protection laws.
The proposed class action, filed in California state court, was brought by two women in STEM fields who believe AI hiring screeners have unfairly filtered them out of roles they were qualified for.
“I’ve applied to hundreds of jobs, but it feels like an unseen force is stopping me from being fairly considered,” Erin Kistler, one of the plaintiffs, said in a press release. “It’s disheartening, and I know I’m not alone in feeling this way.”
She’s not alone. Companies are increasingly using AI in hiring: the World Economic Forum estimates that roughly 88% of companies now use some form of AI for initial candidate screening.
The lawsuit specifically targets Eightfold, an AI human resources company that provides employers with tools to manage recruitment and hiring. One of its products generates a numerical score predicting how well a candidate matches a specific job.
This scoring system is central to the case. Eightfold’s “match score” draws on data from various sources, like job postings, desired skills, applications, and sometimes LinkedIn. The score ranges from zero to five and is marketed as helping to “predict the degree of match between a candidate and a job position.”
The lawsuit argues this process creates a “consumer report” under the Fair Credit Reporting Act (FCRA), a federal law regulating credit bureaus and background check companies. The claim suggests that because the score aggregates personal data and ranks candidates for “employment purposes,” Eightfold should adhere to the same rules as credit reporting agencies.
These rules include notifying applicants when such a report is created, obtaining their consent, and letting them dispute inaccurate information.
“Eightfold believes the allegations are without merit. Eightfold’s platform operates on data intentionally shared by candidates or provided by our customers,” an Eightfold spokesperson told Gizmodo in an emailed statement. “We do not scrape social media and the like. We are deeply committed to responsible AI, transparency, and compliance with applicable data protection and employment laws.”
The lawsuit seeks a court order for Eightfold to comply with state and federal consumer reporting laws, along with financial compensation for the plaintiffs.
“Qualified workers across the country are being denied job opportunities based on automated assessments they have never seen and cannot correct,” said Jenny R. Yang, a lawyer for the case and former chair of the U.S. Equal Employment Opportunity Commission. “These are the very real harms Congress sought to prevent when it enacted the FCRA. As hiring tools evolve, AI companies like Eightfold must comply with these common-sense legal safeguards meant to protect everyday Americans.”
AI Hiring: Modern-Day Tea Leaves?
I spoke with a friend last week, a seasoned software engineer, who described the AI-driven job application process as reading tea leaves. You throw your resume into the system, hope it’s interpreted favorably, and wait for a sign. It’s a black box, and for many, deeply frustrating. The emotional toll can be significant.
This lawsuit isn’t just about legal compliance; it’s about fairness and transparency in a world increasingly influenced by algorithms. If AI is judging our qualifications, shouldn’t we have the right to understand how and why?
Should AI hiring tools be regulated like credit reporting agencies?
The lawsuit turns on whether AI-driven “match scores” count as consumer reports. The FCRA grants consumers specific rights, including the right to know what information is being collected about them, the right to dispute inaccuracies, and the right to limit who has access to their credit information.
If the court sides with the plaintiffs, AI hiring companies like Eightfold could face significant changes in how they operate. They would need to provide candidates with access to their scores, explain how those scores were calculated, and allow candidates to challenge any incorrect data. The intent is simple: AI should be a tool, not an insurmountable obstacle.
The LinkedIn Factor and Eightfold’s Match Score
I know a marketing director who meticulously crafts their LinkedIn profile, spending hours optimizing keywords and endorsements, seeing it as their digital handshake. But what if that handshake is being interpreted, analyzed, and scored by an AI without them even knowing?
Eightfold’s “match score” pulls data from various sources, including LinkedIn. This raises questions about data privacy and control. How much influence does your LinkedIn profile have on your job prospects if AI is using it to generate a score you never see? Does this create a system where job seekers are forced to play an algorithmic game?
Tools like Eightfold aim to streamline the hiring process for companies. But if candidates don’t understand how these tools work, it creates a power imbalance.
What data is used to generate AI hiring scores?
AI hiring tools typically analyze a wide range of data, from resumes and cover letters to online profiles and even video interviews. The algorithms search for keywords, skills, experience, and other factors that match the requirements of the job description.
Eightfold, for example, uses data from job postings, employer preferences, applications, and, in some instances, LinkedIn profiles. This data is then used to generate a score that predicts the likelihood of a candidate being a good fit for the role. This score is like a digital fingerprint, but it’s a fingerprint many job seekers never get to examine.
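To make that abstract process concrete, here is a deliberately naive Python sketch of how a keyword-overlap score scaled from zero to five might be computed. To be clear, this is a hypothetical illustration, not Eightfold’s proprietary algorithm; real screeners reportedly use far more sophisticated models.

```python
# A deliberately naive, hypothetical "match score": the fraction of a job
# posting's terms that also appear in an application, scaled to 0-5.
# This is NOT Eightfold's algorithm; it only illustrates the general idea
# of scoring candidate-job overlap.
import re


def tokenize(text: str) -> set[str]:
    """Lowercase the text and split it into a set of word tokens."""
    return set(re.findall(r"[a-z0-9+#]+", text.lower()))


def toy_match_score(job_posting: str, application: str) -> float:
    """Return a 0-5 score for how many posting terms the application covers."""
    job_terms = tokenize(job_posting)
    candidate_terms = tokenize(application)
    if not job_terms:
        return 0.0
    overlap = len(job_terms & candidate_terms) / len(job_terms)
    return round(overlap * 5, 2)


posting = "Senior Python engineer: Python, SQL, AWS, machine learning"
resume = "Software engineer with Python, SQL, and machine learning experience"
print(toy_match_score(posting, resume))  # 3.57
```

Even this toy version shows the problem the lawsuit raises: the candidate above never sees the 3.57, never learns that missing terms like “AWS” dragged it down, and has no way to dispute it.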
The Future of AI and Job Applications: Transparency or Black Box?
Consider the plight of recent graduates who are entering a job market saturated with AI-driven hiring processes. They are learning the rules of a game that isn’t fully explained, and the stakes are incredibly high.
This lawsuit against Eightfold could set a precedent for how AI is used in hiring. If successful, it could force AI companies to be more transparent about their algorithms and data practices. If unsuccessful, it could solidify the “black box” approach, leaving job seekers in the dark about how their qualifications are being assessed.
How can job seekers navigate AI-driven hiring processes?
For now, job seekers should focus on optimizing their resumes and online profiles with relevant keywords, tailoring their applications to each specific job, and practicing their interviewing skills. Assume an AI will read your application before a human does, and use language that mirrors the job description (a rough self-check is sketched below).
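If it helps, a few lines of Python can list the posting’s terms that never appear in your resume. This is a hypothetical job-seeker aid, not a reconstruction of any vendor’s screener.

```python
# Hypothetical self-check for applicants: which terms from a job
# description never appear in the resume? Purely illustrative; no real
# screening system works this simply.
import re

STOPWORDS = {"a", "an", "the", "and", "or", "with", "to", "of", "in", "for"}


def keywords(text: str) -> set[str]:
    """Extract lowercase word tokens, dropping common stopwords."""
    return {t for t in re.findall(r"[a-z0-9+#]+", text.lower())
            if t not in STOPWORDS}


def missing_keywords(job_description: str, resume: str) -> set[str]:
    """Return posting terms that never appear in the resume."""
    return keywords(job_description) - keywords(resume)


jd = "We need a data analyst skilled in SQL, Tableau, and Python."
cv = "Data analyst experienced with SQL and Python dashboards."
print(sorted(missing_keywords(jd, cv)))  # ['need', 'skilled', 'tableau', 'we']
```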
Furthermore, research the companies you’re applying to and try to learn which AI tools they use. While complete transparency may not yet be the norm, any information you can gather can help you tailor your approach. Hiring is becoming more automated and, for employers, more efficient; the open question is what that efficiency costs the people being screened.