EEOC Warns Against the Potential Adverse Impact of Artificial Intelligence
The EEOC's guidance can be found here.
As a reminder, Title VII prohibits discrimination in employment on the basis of race, color, religion, sex (including pregnancy, sexual orientation, and gender identity), or national origin. In addition to intentional discrimination, the prohibition applies to unintentional (disparate impact) discrimination resulting from the application of facially neutral tests or selection procedures that disproportionately exclude individuals in one of the protected categories.
A new issue on the digital horizon is the myriad potential unintended consequences of using technology, such as software, algorithms, and artificial intelligence, to make employment decisions, including recruitment, hiring, retention, promotion, transfer, performance monitoring, and other actions. Examples of these technologies include:
- Resume scanners that prioritize applications using certain keywords;
- Employee monitoring software that rates employees on their keystrokes;
- “Virtual assistants” that ask job candidates about their qualifications and reject those who do not meet pre-defined standards;
- Video interviewing software that evaluates candidates based on their facial expressions and speech patterns; and
- Testing software that provides scores for applicants or employees regarding their personalities, aptitudes, cognitive skills, or perceived “cultural fit.”
As employers increasingly rely on these tools, they must evaluate whether their use of the tools results in an adverse impact on individuals in protected classes.
The EEOC recommends that employers regularly assess whether their use of any technology results in a selection rate for individuals in a protected class that is “substantially” lower than the selection rate for individuals in another group. The EEOC guidance states that, as a general rule of thumb, one rate is substantially different from another if their ratio is less than four-fifths (80%). This “four-fifths rule” may be used to draw an initial inference, but some form of statistical significance analysis may also be necessary to determine whether an adverse impact is present. If the use of these tools results in an adverse impact, the use violates Title VII unless the employer can show that it is “job related and consistent with business necessity.”
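By way of illustration, the sketch below shows one way the four-fifths comparison could be run against a tool's screening results. The group labels, counts, and function name are hypothetical, and the check is only an initial screen, not a substitute for statistical significance analysis or legal review.

```python
def four_fifths_check(selected: dict[str, int], applicants: dict[str, int]) -> dict[str, float]:
    """Compute each group's selection rate and its ratio to the highest rate.

    Under the four-fifths rule of thumb, a ratio below 0.8 supports an
    initial inference of adverse impact; further statistical analysis
    may still be necessary.
    """
    rates = {group: selected[group] / applicants[group] for group in applicants}
    highest_rate = max(rates.values())
    return {group: rate / highest_rate for group, rate in rates.items()}

# Hypothetical screening results: 48 of 80 Group A applicants advance (60%);
# 12 of 40 Group B applicants advance (30%).
ratios = four_fifths_check(
    selected={"Group A": 48, "Group B": 12},
    applicants={"Group A": 80, "Group B": 40},
)
for group, ratio in ratios.items():
    flag = "below four-fifths: possible adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: ratio to highest selection rate = {ratio:.2f} ({flag})")
```

In this hypothetical, Group B's 30% selection rate is half of Group A's 60%, so its ratio of 0.50 falls well below the four-fifths threshold and would warrant further analysis.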
Critically, outsourcing the implementation of these tools to vendors, or administering a procedure developed by a third party, does not necessarily absolve employers of liability. Employers may be responsible for the actions of their agents, which can include entities such as software vendors, when employers authorize those vendors to act on their behalf. At a minimum, employers utilizing vendors to implement these technologies should ask what steps have been taken to evaluate whether the tool causes an adverse impact on protected individuals. And even if a vendor incorrectly assures the employer that its tool does not cause a disparate impact, the employer may still be vulnerable to claims if the tool in fact produces one.
If an employer discovers that its use of an algorithmic decision-making tool would have an adverse impact, it should take steps to reduce the impact or select a different tool so that it does not engage in a practice that violates Title VII. A failure to adopt a less discriminatory algorithm that was considered during the development process may therefore give rise to liability.
The EEOC recommends that employers evaluate, on an ongoing basis, whether their employment practices have a disproportionately negative effect on a basis prohibited under Title VII or treat protected groups differently, and that they proactively change any practices that do.
AFS attorneys are available to assist employers in reviewing their use of technology-based employment decision-making tools and to provide recommendations for best practices moving forward.