By Katie O’Connor and Danielle Mrdjenovich, Buchanan Ingersoll
As technology continues to advance, the integration of Artificial Intelligence (AI) in the workplace has become increasingly prevalent. While AI offers a myriad of benefits such as improved efficiency, enhanced decision-making, and streamlined processes, it also creates a unique set of challenges and considerations that employers must address to ensure that they comply with employment-related laws in this evolving legal landscape.
Artificial Intelligence, or AI, refers to the development of computer systems and software that can perform tasks requiring human intelligence. It involves creating algorithms and models that enable machines to perceive, learn, reason, and make decisions in a manner that emulates human cognitive abilities. AI systems are designed to analyze vast amounts of data, recognize patterns, make predictions, and adapt to changing circumstances.
AI is already being widely used in the workplace; some surveys report that nearly six out of every ten employees use it in their jobs. For example, human resources professionals commonly use AI tools in recruiting and hiring, where algorithms assist in identifying qualified candidates and streamlining the selection process. The use of AI tools at work generally, and in recruiting and hiring specifically, can expose employers to legal risk if those tools run afoul of federal, state, and local anti-discrimination statutes.
The use of AI in the workplace raises several legal considerations under existing employment laws. Some of the laws that may be implicated by the use of AI include:
Title VII of the Civil Rights Act of 1964 prohibits employment discrimination based on race, color, religion, sex, and national origin. In May 2023, the Equal Employment Opportunity Commission (EEOC) issued a technical assistance document providing some insight into Title VII's application to an employer’s use of artificial intelligence in the workplace. Specifically, it provides guidelines regarding when algorithmic decision-making tools may have a disparate impact on the basis of a protected characteristic. The EEOC instructs: “employers can assess whether a selection procedure has an adverse impact on a particular protected group by checking whether use of the procedure causes a selection rate for individuals in the group that is ‘substantially’ less than the selection rate for individuals in another group.”
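The selection-rate comparison the EEOC describes can be illustrated with a short calculation. The sketch below applies the EEOC's long-standing "four-fifths" rule of thumb, under which a selection rate below 80 percent of the most-favored group's rate is commonly treated as an indicator of a substantially different rate; the applicant counts are hypothetical, and a result like this is a screening heuristic that warrants further review, not a legal determination of disparate impact.

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who were selected."""
    return selected / applicants

def adverse_impact_ratio(group_rate: float, comparison_rate: float) -> float:
    """Ratio of one group's selection rate to the comparison group's rate.

    Under the four-fifths rule of thumb, a ratio below 0.8 may indicate a
    'substantially' lower selection rate and call for closer scrutiny.
    """
    return group_rate / comparison_rate

# Hypothetical audit figures (illustrative only)
rate_a = selection_rate(selected=48, applicants=80)  # 48/80 = 0.60
rate_b = selection_rate(selected=12, applicants=40)  # 12/40 = 0.30

ratio = adverse_impact_ratio(rate_b, rate_a)  # 0.30 / 0.60 = 0.50
flagged = ratio < 0.8  # below four-fifths, so the tool merits review
```

Employers running such a check on an algorithmic tool's output would typically repeat it for each protected group and each stage of the selection process.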
The Americans with Disabilities Act (ADA) prohibits discrimination against individuals with disabilities in various areas, including employment. When it comes to the implications of the ADA for AI, there are considerations around ensuring accessibility and non-discrimination for individuals with disabilities in the design and use of AI systems. In May 2022, the EEOC issued a technical assistance document that explains the application of the ADA to an employer’s use of artificial intelligence in the workplace. The technical guidance explains that an employer's use of an algorithmic decision-making tool may be unlawful because (1) the employer does not provide a reasonable accommodation necessary for a job applicant or employee to be rated fairly and accurately by the algorithm; (2) the employer relies on an algorithmic decision-making tool that intentionally or unintentionally screens out an individual with a disability, even though that individual is able to do the job with a reasonable accommodation; or (3) the employer adopts an algorithmic decision-making tool for use with its job applicants or employees that violates the ADA's restrictions on disability-related inquiries and medical examinations.
The Age Discrimination in Employment Act (ADEA) prohibits employment discrimination against individuals 40 years of age or older. When considering the implications of the ADEA for AI, it is important to ensure that AI systems used in the workplace do not inadvertently discriminate against older workers or contribute to age-related biases. In fact, one of the EEOC’s earliest AI-related settlements followed allegations that the employer had programmed its recruitment software to automatically reject older applicants.
The Fair Labor Standards Act (FLSA) requires that employers properly classify and pay their employees. The FLSA can have several implications for the use of AI in the workplace. By way of example, where AI is used for scheduling, timekeeping, or payroll processing, an employer must ensure that it complies with the FLSA to ensure accurate payment of wages and overtime.
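As a simplified illustration of the kind of check an employer might run against an AI timekeeping or payroll tool's output, the sketch below computes weekly pay for a non-exempt employee using the FLSA's general rule of overtime at one and one-half times the regular rate for hours over 40 in a workweek. The hours and rate are hypothetical, and real regular-rate calculations (bonuses, multiple pay rates, stricter state laws) can be considerably more complex.

```python
def weekly_pay(hours_worked: float, regular_rate: float,
               overtime_threshold: float = 40.0,
               overtime_multiplier: float = 1.5) -> float:
    """Straight-time plus overtime pay for a non-exempt employee.

    The FLSA generally requires overtime at 1.5x the regular rate for
    hours over 40 in a workweek; state law may impose stricter rules.
    """
    regular_hours = min(hours_worked, overtime_threshold)
    overtime_hours = max(hours_worked - overtime_threshold, 0.0)
    return (regular_hours * regular_rate
            + overtime_hours * regular_rate * overtime_multiplier)

# Hypothetical week: 45 hours at $20/hour
pay = weekly_pay(hours_worked=45, regular_rate=20.0)
# 40 * $20 + 5 * $30 = $950.00
```

Comparing such an independent calculation against an AI payroll system's output is one way to verify that automation has not introduced wage-and-hour errors.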
AI tools that collect and process employee data must comply with various privacy laws to protect employee data rights.
Additionally, some state and local laws in the United States have imposed restrictions on the use of AI in the workplace. These laws aim to address potential biases, discrimination, and privacy concerns associated with AI systems. For example, the Illinois Artificial Intelligence Video Interview Act requires employers to notify job applicants when AI technology is used to analyze their video interviews and to obtain consent for such use. And a New York City local ordinance requires that if an employer is using AI tools in hiring, the tools must be subject to an independent bias audit, and the employer must make a summary of the most recent bias audit results available on its website. Other jurisdictions have proposed legislation that is winding its way through their legislatures.
Organizations should consult legal experts or employment law specialists to ensure that the use of AI is consistent with applicable law. Furthermore, to the extent that the use of AI results in layoffs or reductions in force, employers should ensure that they are mindful of the federal Worker Adjustment and Retraining Notification Act and equivalent state and local laws.
While legislation regarding AI has been introduced, there is currently no existing federal legislation that comprehensively regulates the use of AI in the workplace. Nonetheless, various executive enforcement agencies have issued guidance related to the application of AI.
The Bureau of Consumer Financial Protection, the Department of Justice, the Equal Employment Opportunity Commission, and the Federal Trade Commission have issued a Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems. Within this Joint Statement, these enforcement agencies reaffirmed their commitment to protecting civil rights, fair competition, consumer protection, and equal opportunity. The Statement explains that potential discrimination in automated systems may come from different sources, including problems with (1) data and datasets, (2) model opacity and access, and (3) design and use.
In October 2022, the White House issued a Blueprint for an AI Bill of Rights. This Blueprint articulates several principles for the safe and effective use of AI in the United States. The five principles articulated in the AI Bill of Rights for the safe and ethical development of AI are (1) safe and effective systems, (2) algorithmic discrimination protections, (3) data privacy, (4) notice and explanation, and (5) human alternatives, consideration, and fallback.
In October 2023, President Biden issued an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, outlining the initial steps for the safe, secure, and trustworthy development and use of AI in the United States and operationalizing the principles articulated in the White House’s Blueprint for an AI Bill of Rights. Although the Order primarily applies to federal agencies and contractors, it sets a precedent for future AI development in other industries and sectors.
The Executive Order also tasks the National Institute of Standards and Technology (NIST) with developing guidelines, best practices, and testing standards for the safe and ethical deployment of AI. NIST has already published an AI Risk Management Framework that, while voluntary, should be reviewed and used by any company or industry developing AI.
Employers should stay informed about evolving federal legislation and regulatory developments related to AI in the workplace to ensure compliance with emerging requirements and best practices.
In conclusion, the integration of AI in the workplace represents a transformational shift that offers great potential for enhancing efficiency, productivity, and decision-making processes. However, this technological evolution also brings forth complex challenges that require a nuanced approach to mitigate risks such as bias, discrimination, and privacy violations. Employers will need to address these concerns using a multi-faceted approach that includes employee training, clear policies and practices, and robust governance frameworks to ensure that AI is used lawfully.