What Biden's Executive Order on AI Means for Employers

Author: Natasha K.A. Wiebusch

November 14, 2023


On October 30, 2023, President Biden issued an executive order kicking off official efforts to regulate artificial intelligence (AI) at the federal level. The order, panoramic in scope, sets the stage for a multi-agency effort to tame (without stifling) what has quickly become an explosive arena in US tech innovation.

The order has several implications for AI providers, consumers and the general public, but its total impact has yet to come into focus. For employers, the order will at minimum add yet another layer to their current AI compliance strategies, which are no doubt still in flux, as states continue to pass regulations of their own.

In this time of regulatory proliferation, employers must make their preparations. They can start by understanding what the order has in store for them. 

The Executive Order

The Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence calls on several federal agencies to take specific actions to promote the safe creation and use of AI and prevent its most sinister dangers. According to former President Barack Obama, who released a statement supporting the order (and who reportedly helped draft the order while aiding the Biden Administration on all things AI), these dangers could include creating a new strain of smallpox, accessing nuclear codes, or attacking critical US infrastructure.

The agencies tasked with this important mission include, but are not limited to, the National Institute of Standards and Technology, the Department of Homeland Security (which will establish the AI Safety and Security Board), the Department of Energy, the Department of Commerce and the Department of Labor.

The order is organized into eight principles, which will act as the federal government's blueprint for policy creation and regulation:

  1. Safety and Security: Ensure that AI is safe and secure.
  2. Innovation and Competition: Promote responsible innovation, competition and collaboration.
  3. US Workforce: Remain committed to supporting American workers.
  4. Equity and Civil Rights: Continue to advance equity and civil rights.
  5. Consumer Protections: Uphold consumer protection laws and principles.
  6. Privacy and Civil Liberties: Protect privacy and civil liberties.
  7. Government Readiness: Ensure responsible AI use and upskilling of government employees.
  8. Global Leadership: Lead the way to global societal, economic, and technological progress.

It then directs relevant agencies to take certain actions based on these principles.

What's in It for Employers?

Though the above principles could all impact employers in some way, some provisions touch employers more directly than others. Most relevant are the provisions related to safety and security, supporting US workers, and advancing equity and civil rights.

Safety and Security

The order's safety and security provisions seek to build a framework for ensuring AI products meet quality, safety and security standards. This includes establishing guidelines for thorough AI testing and reporting systems. These requirements are mostly directed toward AI providers; however, they will also impact employers that adopt or train AI in-house.

These forthcoming requirements will help employers vet potential AI providers to ensure that new tools do not expose the organization to additional cyber threats. Beyond cybersecurity, ensuring that AI providers meet forthcoming standards could also protect the organization from potential liability for using substandard AI products. Though not noted in this order, recent guidance from the Equal Employment Opportunity Commission (EEOC) under Title VII reminds employers that they could be held liable for discrimination caused by AI products they use, even if those products were created by an outside provider. This common legal rule of imputed liability, under which one party can be held responsible for another party's actions, should not be ignored.

To appropriately vet AI providers, employers should be prepared to familiarize themselves with safety and security guidelines and requirements and incorporate those requirements into their vetting process. Employers that plan to train and create their own AI products should be particularly careful to ensure compliance with safety and security requirements.

Key Safety and Security Actions Called for by the Executive Order

  • Develop industry standards for safe, secure and trustworthy AI systems.
  • Establish guidelines for AI developers on red-team testing, in which a designated team emulates a potential attacker and attempts to carry out an attack to test an organization's cyber defenses.
  • Create reporting systems for AI developers and organizations.
  • Increase cybersecurity regulation related to Infrastructure as a Service (IaaS) products and AI models.
  • Establish an AI Safety and Security Board.
  • Create safeguards against risks posed by synthetic content.
  • Solicit input from stakeholders on dual-use foundation models (including private sector).

Protecting the US Workforce

The order also includes several provisions focused on protecting and preparing the American workforce, specifically addressing the potential for job displacement and the need to prioritize workforce development. The order calls on relevant agencies to prepare for workforce disruptions by ensuring government programs can manage job displacement and by prioritizing employee upskilling to help the workforce take advantage of new opportunities in an AI-powered future of work.

Employers can support employees and their business by upskilling or reskilling employees most likely to be displaced, engaging in proactive succession planning, and adapting current positions to partner with AI (which will require certain employee skills).

Employers should also stay attentive to Fair Labor Standards Act (FLSA) compensation requirements, as the order calls for additional guidance on how those requirements apply when employees use AI to aid their work. It also signals support for employee labor rights, stating "all workers need a seat at the table, including through collective bargaining, to ensure that they benefit from" the opportunities provided by new jobs and industries created by AI. As a result, employers may benefit from refreshing their policies and practices to ensure compliance with collective bargaining requirements and protected activities under the National Labor Relations Act.

Finally, the order seeks to ensure that AI used in the workplace advances employee well-being. What does that look like? According to the order, principles and best practices to mitigate AI's potential harms to employee well-being should address:

  • Job displacement and career opportunities,
  • Labor standards and job quality, and
  • Implications for those who work for organizations that use AI to collect data about them.

Key Workforce Actions Called for by the Executive Order

  • Prepare for AI-related workforce disruptions, specifically job displacement.
  • Ensure that AI deployed in the workplace advances employees' well-being.
  • Issue FLSA compensation guidance addressing the use of AI by employees.
  • Prioritize AI-related education and workforce development to foster a diverse AI-ready workforce.

Equity and Civil Rights

The executive order states that AI should advance equity and civil rights. Its provisions primarily address ensuring equity in public programs and benefits. However, it also asks for guidance for federal contractors to prevent unlawful discrimination caused by AI in hiring.

Employers that are federal contractors must be ready to comply with forthcoming guidance. This may involve evaluating current uses of automation in recruitment and hiring and creating new AI provider vetting practices to ensure compliance.

Though non-federal contractors are outside the scope of the order's hiring provision, they must not ignore non-discrimination requirements under existing laws. To help employers understand the role AI could play in discrimination, the EEOC has issued guidance on preventing AI-driven discrimination under the Americans with Disabilities Act (ADA) and Title VII of the Civil Rights Act. Employers considering using AI in their organization should become familiar with these requirements, particularly when using AI to recruit new talent.

Key Equity and Civil Rights Actions Called for by the Executive Order

  • Prevent and address unlawful discrimination caused by AI in government programs and benefits administration.
  • Issue guidance to state, local, tribal and territorial public benefits administrators on the use of AI.
  • Publish guidance for federal contractors to prevent unlawful discrimination caused by AI in hiring.

Preparing for an AI-Powered Future of Work

Employers will continue to explore, adopt, and in some cases, create new AI tools to enhance their work and productivity. Along the way, they will be required to address new issues related to workforce development and labor rights, wage and hour laws, equal employment opportunity, employee well-being, and more. Though the road ahead may be difficult, Biden's blueprint for AI regulation can serve as a roadmap for operating in a regulated, AI-powered future of work.