AI tools have become an integral part of the hiring process in recent years, influencing whether individuals secure jobs. These tools are used by companies and government agencies across various sectors, including housing, education, finance, law enforcement, and healthcare. Reports indicate that approximately 70% of companies, including 99% of Fortune 500 companies, have embraced AI-based and automated tools in their hiring processes, particularly in low-wage job sectors with high concentrations of Black and Latine workers.
AI-based tools are integrated at every stage of hiring. They help target online job advertisements and match candidates to suitable positions on platforms like LinkedIn and ZipRecruiter. These tools are also used to screen resumes, reject or rank applicants, and assess personality traits. Some employers even use AI tools to analyze video submissions, measuring traits such as tone, pitch, facial movements, and expressions.
While these tools are marketed as objective and less discriminatory, they pose a significant risk of exacerbating existing workplace discrimination based on race, sex, disability, and other protected characteristics. AI tools rely on vast amounts of data to generate predictions about future outcomes. However, if the training data used contains biases or reflects existing institutional and systemic biases, the tools themselves will perpetuate discrimination.
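The mechanism described above can be illustrated with a minimal, hypothetical sketch. The data below is invented for illustration only: it imagines two equally qualified groups of past applicants, where one group was historically hired at a lower rate. A screening model "trained" on those biased labels simply learns the historical hire rates and reproduces the disparity in its scores.

```python
# Hypothetical illustration: a model trained on biased hiring labels
# reproduces the bias. All data here is invented for this sketch.
from collections import defaultdict

# Synthetic past decisions for equally qualified candidates:
# group A was hired 80% of the time, group B only 40% of the time.
history = (
    [("A", 1)] * 80 + [("A", 0)] * 20 +
    [("B", 1)] * 40 + [("B", 0)] * 60
)

# "Training": tally the historical hire rate for each group.
totals, hires = defaultdict(int), defaultdict(int)
for group, hired in history:
    totals[group] += 1
    hires[group] += hired

def predicted_score(group):
    """The model's score is just the historical hire rate it learned."""
    return hires[group] / totals[group]

print(predicted_score("A"))  # 0.8 - the historical advantage persists
print(predicted_score("B"))  # 0.4 - the historical disadvantage persists
```

Nothing in the synthetic data distinguishes the groups on merit; the disparity in the scores comes entirely from the biased labels the model was fit to.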
Furthermore, the correlations that AI tools uncover may not be causally connected to job success. For instance, one resume-screening tool reportedly associated success with being named Jared and playing high school lacrosse. In addition, the personality traits measured by these tools may be culturally specific or may inadvertently screen out candidates with disabilities.
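The spurious-correlation problem can be sketched in the same toy fashion. In this hypothetical example (the resumes and labels are invented), a simple keyword model scores each token by how much its presence lifts the hire rate above the baseline. A job-irrelevant token like "lacrosse" that happened to co-occur with past hires ends up with a large positive weight, even though it says nothing about job performance.

```python
# Hypothetical sketch: a keyword model latches onto a spurious correlate
# ("lacrosse") that co-occurred with past hires in the invented data.
from collections import Counter

resumes = [
    ("python sql lacrosse", 1),
    ("python lacrosse", 1),
    ("python sql", 1),
    ("sql excel", 0),
    ("excel", 0),
]

hire_rate = sum(label for _, label in resumes) / len(resumes)  # 0.6
token_total, token_hired = Counter(), Counter()
for text, label in resumes:
    for token in set(text.split()):
        token_total[token] += 1
        token_hired[token] += label

def weight(token):
    """Lift: hire rate among resumes containing the token, minus baseline."""
    return token_hired[token] / token_total[token] - hire_rate

print(round(weight("lacrosse"), 2))  # 0.4 - spurious but strongly weighted
print(round(weight("sql"), 2))       # 0.07 - a real skill, weighted lower
```

In this contrived data, "lacrosse" outweighs "sql" purely by coincidence of co-occurrence, which is exactly the failure mode the reported Jared/lacrosse example describes.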
There is also the concern that tools analyzing facial, audio, or physical interactions with a computer can lead to even greater discrimination. It is highly questionable whether personality can be accurately measured from mouse clicks, tone of voice, or facial expressions alone. Moreover, this type of analysis increases the risk of automatic rejection or lower scores based on disability or race.
Transparency and awareness surrounding the use of these tools are lacking. Applicants often do not know that AI tools are used in the hiring process, let alone that potentially discriminatory decisions are being made about them. It is therefore crucial for employers to stop using automated tools that carry a high risk of screening out applicants based on protected characteristics. Employers should also subject any prospective tools to third-party assessments for discrimination and provide applicants with proper notice and accommodations.
To address these issues, strong regulation and enforcement of existing protections against employment discrimination are needed. Regulators have both the authority and the obligation to protect individuals in the labor market from the harms of AI tools, and individuals have the right to assert their protections in court. Legislators also play a vital role by considering legislation that ensures fairness and non-discrimination in the use of AI tools in employment. Such legislation may focus on transparency, impact assessments, and privacy, creating a framework that safeguards individuals from discriminatory practices.
Frequently Asked Questions (FAQ)
1. Are AI tools commonly used in the hiring process?
Yes, reports indicate that approximately 70% of companies and 99% of Fortune 500 companies incorporate AI-based and automated tools in their hiring processes.
2. Can AI tools perpetuate discrimination in the workplace?
Yes, AI tools pose a significant risk of exacerbating existing discrimination based on race, sex, disability, and other protected characteristics. These tools rely on data that may reflect biases and systemic inequalities, leading to discriminatory outcomes.
3. How do AI tools analyze personality traits in applicants?
AI tools utilize various methods to assess personality traits. This may include online multiple-choice tests, video analysis of facial expressions and movements, and voice analysis of tone, pitch, and word choice.
4. What can employers do to address discrimination risks associated with AI tools?
Employers should cease using automated tools with a high risk of screening out applicants based on protected characteristics. Additionally, they should subject potential tools to third-party assessments for discrimination and provide applicants with proper notice and accommodations.
5. What measures can be taken to regulate the use of AI tools in hiring?
Legislators can play a role in implementing regulations that ensure fairness and non-discrimination in the use of AI tools. This may involve transparency requirements, impact assessments, and privacy considerations to safeguard individuals from potential discriminatory practices.