AI & Employment Decision Making
By Faraz Amirani
With the rise in the use of artificial intelligence (AI), California is taking steps to regulate AI in employment. For over a year, the state has proposed draft regulations aimed at expanding California’s non-discrimination laws to cover AI-driven employment decision-making and has introduced multiple bills designed to regulate AI and require employers to assess and audit their use of AI, among other things.
California Civil Rights Council Proposed Regulations
In March 2022, the California Civil Rights Council (formerly the California Fair Employment & Housing Council) released draft regulations for California’s employment non-discrimination laws in order to govern the use of an automated-decision system in making employment decisions. The Civil Rights Council issued revised proposed regulations in February 2023, which primarily updated definitions and defenses for employers.
As it stands today, the draft regulations seek to make it an unlawful practice for employers to use selection criteria (including a qualification standard, employment test, automated-decision system, or proxy) if such use has an adverse impact on or constitutes disparate treatment of an applicant or employee, or a class of applicants or employees, on a basis protected by California law. The draft regulations also expand on defenses available to employers: use of such selection criteria may be defensible if it is shown to be job-related for the position in question and consistent with business necessity, and there is no less discriminatory policy or practice that serves the employer’s goals.
The term “proxy” is defined as a technically neutral characteristic or category correlated with a protected class (e.g., race, sex, national origin). The term “automated-decision system” is defined broadly under the draft regulations to mean a “computational process that screens, evaluates, categorizes, recommends, or otherwise makes a decision or facilitates human decision making that impacts applicants or employees” and “may be derived from and/or use machine-learning, algorithms, statistics, and/or other data processing or artificial intelligence techniques.”
The draft regulations provide examples of unlawful practices (unless an affirmative defense applies) as follows:
- The use of an automated-decision system that measures an applicant’s skill, dexterity, reaction time, and/or other abilities or characteristics may constitute unlawful disparate treatment or have an unlawful adverse impact on individuals with certain disabilities or other protected characteristics.
- An automated-decision system that analyzes an applicant’s tone of voice, facial expressions, or other physical characteristics or behavior may constitute unlawful disparate treatment of or have an unlawful adverse impact on individuals based on race, national origin, gender, or other protected characteristics.
The draft regulations also reach third parties that provide services related to hiring or employment decisions (e.g., services regarding recruiting, hiring, payroll, benefit administration, or the administration of automated-decision systems for an employer’s use in hiring or employment decisions). Such third parties would be considered an “agent” under the draft regulations and could face liability for violations, as an “agent” would also be considered an employer. However, as drafted, there are differing interpretations as to whether the employer would be liable for the actions of its agent or whether the agent would be directly liable; additional guidance will be needed. As with so many things related to employment law, only time will tell.
Of note, the draft regulations would increase recordkeeping obligations from two years to four years and require employers (or any other covered entity dealing with any employment practice and affecting any employment benefit of any applicant or employee) to preserve any personnel or other employment records created or received (including all applications, personnel, membership, or employment referral records or files, and all automated-decision system data). Further, any person who “sells or provides an automated-decision system or other selection criteria” to an employer or other covered entity, or who uses an automated-decision system or other selection criteria on behalf of an employer or other covered entity, must maintain relevant records (including, in part, any data used in the process of developing and/or applying machine learning, algorithms, and/or artificial intelligence that is utilized as part of an automated-decision system).
California Assembly Bill 331 (AB 331)
Further demonstrating California’s push to regulate the use of AI, a new bill (AB 331) was introduced this year. According to a press release from Assembly Member Bauer-Kahan (the author of AB 331), the bill, which is the first of its kind in the state, ensures the industry follows best practices and cracks down on bad actors by requiring developers and users to assess automated decision tools and mitigate their risks. AB 331’s primary purpose is to prohibit employers from using an automated decision tool that results in algorithmic discrimination.
To begin, AB 331 requires developers and users of automated decision tools to conduct and record an impact assessment, including, among other things, a statement of purpose, the intended uses, the makeup of the data, a description of the safeguards implemented, and the rigor of the statistical analysis. The data reported must also include an analysis of the potential adverse impact based on a protected class. This assessment must be completed on or before January 1, 2025, and annually thereafter.
In addition, AB 331 will require employers, in part, to notify any individual who is the subject of a consequential decision that an automated decision tool is being used to make, or be a controlling factor in making, such a decision. The required notice must include, among other things, a statement of the purpose of the automated decision tool, employer contact information, and a description of the automated decision tool, including a description of any human components and how any automated component is used to inform a consequential decision. If a consequential decision is made solely based on the output of an automated decision tool, an employer must, if technically feasible, accommodate a natural person’s request not to be subject to the automated decision tool (i.e., to opt out) and to be subject to an alternative selection process or accommodation.
The term “consequential decision” is defined to mean a decision or judgment that has a legal, material, or similarly significant effect on an individual’s life relating to the impact of, access to, or the cost, terms, or availability of, in relevant part, pay, promotion, hiring, or termination.
Lastly, a recent amendment to AB 331 adds a private right of action for any violations occurring on or after January 1, 2026. However, the plaintiff bears the burden of proving that the employer’s use of the automated decision tool resulted in algorithmic discrimination that caused actual harm to the person bringing the civil action.
Financial institutions in California considering using AI tools in employment decision-making should keep a close eye on these proposed laws, which, if passed, will impact such intended uses. SW&M will continue to monitor developments concerning the use of AI in employment.
In 2022, the United States Equal Employment Opportunity Commission (EEOC) provided technical assistance guidance regarding the use of AI to assess job applicants and employees. In that technical assistance, the EEOC stated that the use of AI, software, or algorithms may result in unlawful discrimination against people with disabilities in violation of the Americans with Disabilities Act (ADA).
The Ninth Circuit Court of Appeals certified the following question to the California Supreme Court: “Does California’s Fair Employment and Housing Act, which defines ‘employer’ to include ‘any person acting as an agent of an employer,’ permit a business entity acting as an agent of an employer to be held directly liable for employment discrimination?” (Raines v. United States Healthworks Med. Group, 28 F.4th 968, 969 (9th Cir. 2022))