After applying in vain for nearly 100 jobs through the human resources platform Workday, Derek Mobley noticed a suspicious pattern.
“I would get all these rejection emails at 2 or 3 in the morning,” he told Reuters. “I knew it had to be automated.”
Mobley, a 49-year-old Black man with a degree in finance from Morehouse College in Georgia, had previously worked as a commercial loan officer, among other jobs in finance. He applied for mid-level jobs across a range of sectors, including energy and insurance, but when he used the Workday platform, he said he did not get a single interview or call-back and was often forced to settle for gig work or warehouse shifts to make ends meet.
Mobley believes he was being discriminated against by Workday’s artificial intelligence algorithms.
In February, he filed what his lawyers describe as a first-of-its-kind class action lawsuit against Workday, alleging that the pattern of rejection he and others experienced pointed to the use of an algorithm that discriminates against people who are Black, disabled or over the age of 40. In a statement to Reuters, Workday said Mobley’s lawsuit was “completely devoid of factual allegations and assertions,” and said the company is committed to “responsible AI.”
The question of what “responsible AI” might look like goes to the heart of an increasingly robust push-back against the unrestricted use of automation in the US recruitment market.
Across the United States, state and federal authorities are grappling with how to regulate the use of AI in labor hiring and guard against algorithmic bias.
Around 85 per cent of large US employers now use some form of automated tool or AI to screen or rank candidates for hire.
These tools include resume screeners that automatically scan applicants’ submissions, assessments that grade an applicant’s suitability for a job based on an online test, and facial recognition or emotion recognition software that analyzes video interviews.
In May, the Equal Employment Opportunity Commission (EEOC), the federal agency that enforces civil rights law in workplaces, released new guidelines to help employers prevent discrimination when using automated hiring processes.
In August, the EEOC settled its first-ever automation-based case: tutoring company iTutorGroup agreed to pay $365,000 over software that automatically rejected female applicants aged 55 and older and male applicants aged 60 and older.

City and state authorities are also weighing in. “Right now, it’s the Wild Wild West out there,” said Matt Scherer, a lawyer with the Center for Democracy and Technology (CDT).
Algorithmic blackballing
Technology-enabled bias is a risk because AI uses algorithms, data and computational models to mimic human intelligence. These systems learn from “training data,” which is often historical; if that data contains bias, an AI program can replicate it. In 2018, for instance, Amazon abandoned an AI resume-screening product that had started to automatically downgrade applicants with the word “women’s” on their CVs, as in “women’s chess club captain.”

Amazon’s computer models had been trained to vet applicants by observing patterns in resumes submitted over the previous decade, most of which came from men.

This is the kind of discrimination that worries Brad Hoylman-Sigal, a state senator in New York. “Many of these tools have been proven to unduly invade workers’ privacy and discriminate against women, people with disabilities, and people of color,” he said.
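In effect, the Amazon failure is a statistics problem: a model fitted to biased decisions reproduces them. The toy sketch below (entirely synthetic data and a hypothetical four-word vocabulary, not Amazon’s actual system) shows how a simple logistic-regression screener trained on biased historical hiring records learns to penalize a single gendered token:

```python
# Toy sketch of bias propagation (hypothetical data, not Amazon's system):
# a logistic-regression resume screener trained on biased historical hiring
# decisions learns to penalize a single gendered token.
import math
import random

random.seed(0)
VOCAB = ["python", "finance", "leadership", "womens"]

def make_resume():
    """A resume as a random bag of words over a tiny vocabulary."""
    return {word: random.random() < 0.5 for word in VOCAB}

def historical_label(resume):
    """Simulated past decisions: qualified candidates were usually hired,
    but resumes mentioning 'womens' were rejected 80% of the time."""
    if resume["womens"] and random.random() < 0.8:
        return 0  # biased rejection, regardless of qualifications
    return 1 if (resume["python"] or resume["finance"]) else 0

# Build the biased training set the screener will learn from.
history = [(r, historical_label(r)) for r in (make_resume() for _ in range(5000))]

# Train logistic regression with plain stochastic gradient descent.
weights = {word: 0.0 for word in VOCAB}
bias, lr = 0.0, 0.1
for _ in range(5):  # a few passes over the data
    for resume, label in history:
        z = bias + sum(weights[w] for w in VOCAB if resume[w])
        prob_hire = 1.0 / (1.0 + math.exp(-z))
        grad = prob_hire - label  # gradient of the log loss w.r.t. z
        bias -= lr * grad
        for w in VOCAB:
            if resume[w]:
                weights[w] -= lr * grad

print({w: round(v, 2) for w, v in weights.items()})
# The learned weight for 'womens' comes out strongly negative: new resumes
# containing that token are scored lower even though no one wrote an
# explicit rule -- the bias was inherited from the training data.
```

Running the sketch prints a large negative learned weight for “womens”: the screener now marks down any resume containing the token, even though no human ever wrote a discriminatory rule.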
In April, the FTC and three other federal agencies, including the EEOC, said in a statement that they were looking at potential discrimination arising from data sets that train AI systems and opaque “black box” models that make anti-bias diligence difficult.
Some advocates of AI acknowledge the risk of bias but say this can be controlled.