On Oct. 30, 2023, President Biden issued a broad executive order (EO) designed to manage risks associated with AI. This is a mammoth order, touching nearly every industry and instructing entities (departments, agencies, etc.) across the government to take a range of actions. This GT Advisory breaks down how the EO may affect various businesses, covering directives for proposed regulations, actions agencies and departments must undertake, and key takeaways.
EOs are not laws and do not have the force of law; instead, they are directives to executive branch government entities (employees, agencies, departments, etc.), instructing them to leverage existing legal authority. This EO directs federal agencies and departments to create new standards and regulations for use and oversight of AI, including to ensure responsible and nondiscriminatory use of AI with respect to national security, employment, immigration, criminal justice, intellectual property, education, health care, and more. A major focus of the EO is establishing guidance on best practices, including for risk management. Some forthcoming guidance may not be legally binding per se, but may set expectations for protecting users, consumers, patients, employees, and others from impacts of AI adoption.
PROPOSED REGULATIONS AND PROCEDURAL CHANGES
Short of new regulation, the EO directs various actions for government entities:
- Promoting Innovation in Health Care and Life Sciences. Key departments are directed to prepare joint guidance on use of AI in health care and life sciences, including impacts on drug discovery, health care delivery, and public health, and are instructed to prioritize (to the extent possible under existing law) grantmaking and other awards to support responsible AI development and use. The guidance’s specific terms have not yet been set. There will also be a renewed emphasis on privacy compliance as AI is increasingly used by providers and payers. The government will propose a framework for using AI to identify clinical errors, along with a central tracking repository that may identify patient-harm incidents and support development of clinical best practices for distribution to providers and payers. Two “AI Tech Sprint” competitions will be held to encourage innovation in veterans’ health care.
- Supporting Small Businesses. The Small Business Administration (SBA) is to prioritize funding regional support for AI development and allocate up to $2 million for accelerators supporting AI-related training and technical assistance. The SBA will consider revising loan eligibility criteria to cover expenses related to AI adoption.
- Protecting Consumers, Employees, Patients, Passengers, Students, Etc. There will be new efforts to protect Americans from fraud, discrimination, privacy threats, financial risks, and other risks arising from AI. Government agencies will consider requiring due diligence on and monitoring of any third-party AI services organizations use and will emphasize/clarify requirements related to transparency and explainability of AI products/services.
- Enabling AI Testing. The government will develop and help ensure availability of testbeds and other test environments for AI systems. The government will also establish AI model-evaluation tools capable of identifying certain threats and hazards.
- Attracting AI Talent. There will be new procedures to streamline processing times of visa applications, including ensuring sufficient and timely appointments, for those who work in AI or other emerging technologies.
- Educating AI Workers. The National Science Foundation is to make available resources to support AI-related education and AI-related workforce development through existing programs (including fellowship programs and awards).
- Processing Federal Benefits. Guidance will issue for administrators of federal government benefits regarding use of AI in processing claims and for investigating and remedying unjust denials of benefits, and the government will analyze whether this use of AI achieves equitable and just outcomes.
Government entities will propose new regulations and implement procedural changes, including:
- Life Sciences Research. Government entities are to require organizations to, as a condition to receiving federal funding for life sciences research, procure synthetic nucleic acids per a safety framework to be identified.
- Computing Cluster Registration. Organizations are to report to the government any acquisition/development/possession of a large-scale computing cluster, including the existence and location of the cluster and the total computing power available.
- AI Model Registration. When developing large AI models, companies/organizations are to report to the government on training, developing, or producing the model; on ownership and possession of the model’s weights; and on vulnerability testing results.
- Foreign Usage. Companies are to report when a foreign entity transacts with a U.S. infrastructure-as-a-service (IaaS) provider to train a large AI model.
- Foreign Reseller Identity. Identities of foreign resellers of U.S. IaaS products will need to be verified using techniques to be proposed.
Government agencies are also directed to at least consider rulemaking surrounding:
- Immigration. New regulation to attract workers with critical AI skills.
- Antitrust. How to ensure continued vigorous competition, prevent/address risks that could arise from concentrated control of key inputs, and mitigate the risk that dominant firms could present to competitors, particularly small businesses.
- Critical Infrastructure Risk Management. Following the government’s development of best practice guidelines for owners and operators of critical infrastructure to manage AI risk, government entities are to determine whether to mandate that those owners/operators follow those best practices.
The new regulations and procedural changes identified above may be promulgated through existing procedures for new regulation, providing opportunities to comment.
DEVELOPING GUIDELINES AND REPORTS
The EO also calls for agencies to develop guidelines that may not be binding per se but reflect best practices the public is encouraged to adopt. Agencies are also ordered to publish AI-related reports. For example:
- AI Safety and Security. The National Institute of Standards and Technology (NIST) is to establish (with the aim of promoting consensus industry standards) guidelines and best practices for safe, secure, and trustworthy AI systems, including guidelines for “red team” testing to identify vulnerabilities in a system.
- Financial Services Sector. A public report will issue on best practices for financial institutions to manage AI-specific cybersecurity risks.
- Critical Infrastructure and Cybersecurity.
– The Department of Homeland Security will report on how AI may make critical infrastructure systems more vulnerable to critical failures, physical attacks, and cyber-attacks, and on ways to mitigate these vulnerabilities.
– Homeland Security will also incorporate the AI Risk Management Framework (NIST AI 100-1) into relevant guidelines for critical infrastructure.
- AI-Generated Content. New guidance will issue on labeling content as AI-generated, such as through watermarking. U.S. government entities are to use the guidance when publishing AI-generated content.
- Innovation/Intellectual Property (IP).
– Publication and open source: The Department of Commerce is to investigate benefits and risks to making the weights for AI models “widely available,” including through publication.
– Patents: The Patent and Trademark Office is to issue guidance on the intersection of AI and IP, touching on patentability and patent-eligibility of inventions leveraging AI and other technologies.
– Copyright: There will be new recommendations on protection available for works produced using AI and on use of copyrighted works in AI training.
- Labor. The Labor Department will publish best practices for employers to mitigate AI’s potential harms to employee well-being, including guidance reinforcing that if AI is deployed to monitor or augment employee work, workers must still be compensated under existing law for their time worked.
- Equity, Discrimination, and Civil Rights. The Justice Department will analyze potential discrimination in criminal justice (e.g., AI in setting bail, probation, sentences, parole, and more) and more generally evaluate how to use its enforcement efforts to mitigate risks related to AI and possible algorithmic discrimination. Housing and Urban Development will similarly issue guidance on tenant-screening systems and potential discriminatory effects of AI, and Labor’s guidance will address possible employment discrimination arising from applicant-screening tools. Guidance will also address use of biometric data like gaze direction, eye tracking, gait analysis, hand motions, and more in a manner that will not have an adverse impact on persons with disabilities.
- Threats. A multi-department effort will evaluate risk posed by AI in developing and producing—and the potential for using AI to counter—chemical/biological/radiological/nuclear (CBRN) threats; biosecurity threats; synthetic nucleic acid threats; and other threats.
The EO demonstrates a commitment to both AI adoption and comprehensive AI regulation. It remains unclear when these proposed regulations will be implemented—and how they will be enforced—but we can expect to see regulatory activity between now and mid-summer 2024.
The EO may also impact legislative efforts in Congress to codify parts of the proposal. For example, three days after the EO was issued, legislation was introduced in the U.S. Senate to codify certain requirements of the NIST AI 100-1 risk management framework. In the fact sheet accompanying the EO, President Biden called on Congress to pass bipartisan data privacy legislation related to AI risks. In public remarks, the President also highlighted the bipartisan work in the Senate to develop comprehensive AI legislation encompassing many of the areas addressed in the EO.
Organizations should assess their AI usage to determine how the EO impacts them, both to plan for upcoming regulation and to look for forthcoming federal grant or contract opportunities.
Going forward, companies can expect a high degree of scrutiny from regulators and courts regarding the use of technology systems that employ AI-driven decision-making or produce AI-generated content. Pending implementation of AI-specific regulations, U.S. government agencies and courts can reasonably be expected to interpret existing laws to attach liability when they deem that uses of AI technologies contribute to violations of those laws. The EO provides companies with a broad outline of the views the U.S. government under this administration will take with respect to the use of AI, and the current absence of specific AI regulations does not indicate a lack of legal risk in implementing AI tools in business operations.