On March 28, the federal Office of Management and Budget (OMB) took two steps to further define federal government policy on federal agencies’ use and procurement of artificial intelligence (AI) products and services. Both actions were directed by Executive Order 14110 (Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, Oct. 30, 2023).[1] Companies that provide or plan to provide AI technologies to the federal government should become familiar with the new policy and how it will guide federal agencies’ use of AI tools, and should consider whether to submit comments on how the federal government should implement an AI procurement process.

‘Safety-Impacting AI’ and ‘Rights-Impacting AI’

An OMB memorandum to the heads of federal agencies (“Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence”) is designed to both promote AI innovation in federal agencies and create a risk-management framework for AI use. Federal agencies have until Dec. 1, 2024, to put “minimum practices” into place to manage risks, with heightened scrutiny given to “safety-impacting AI” and “rights-impacting AI.”

“Safety-impacting AI” includes “AI whose output produces an action or serves as a principal basis for a decision” affecting human life or well-being, climate or environment, critical infrastructure, or strategic assets or resources. The memorandum includes a list of 14 categories presumed to be safety-impacting, ranging from control of safety functions of critical infrastructure such as dams and electrical grids, to maintaining the integrity of election-related infrastructure, to physical movements of robots within the workplace.

“Rights-impacting AI” includes “AI whose output serves as a principal basis for a decision” affecting civil rights, equal opportunities, or access to critical government resources or services. The memorandum also includes a list of 14 categories presumed to be rights-impacting, including risk assessments in a law enforcement or immigration-related setting, conducting biometric identification, screening tenants or providing valuations for homes, and providing medical diagnoses or determining medical treatments.

A full list of categories of AI use that are presumed to be safety-impacting or rights-impacting can be found at Appendix I of the OMB memorandum.

The OMB memorandum does not apply to AI used in national security systems by defense or intelligence agencies, or to AI used in the context of basic or applied research.

Timeline for New Procurement Standards

OMB also released a Request for Information (RFI) on the “Responsible Procurement of Artificial Intelligence in Government.” OMB intends to develop procurement standards consistent with EO 14110 and the Advancing American AI Act (40 U.S.C. 11301 note).

The RFI asks for input on 10 threshold questions, divided into two categories: the first four relate to “Strengthening the AI Marketplace,” and the remaining six to “Managing the Performance and Risks of AI.” The questions seek comment on the following:

  1. how standard procurement practices and strategies and innovative procurement practices can be best used to reflect emerging practices in AI procurement;
  2. how to promote robust competition, attract new entrants to the federal marketplace (including small businesses), and avoid vendor lock-in across elements of the technology sector;
  3. whether the government should standardize assessments of the benefits and trade-offs between in-house AI development, contracted AI development, licensing of AI-enabled software, and use of AI-enabled services;
  4. how to develop and communicate metrics to enable performance-based procurement of AI;
  5. “access to documentation, data, code, models, software, and other technical components” used by vendors;
  6. which elements of testing, evaluation, and impact assessments should be performed by vendors and which should be performed by agencies;
  7. contract terms to protect the federal government’s rights and access to data, while maintaining protection of a vendor’s intellectual property;
  8. for rights-impacting AI, what contract terms governing information sharing among agencies, vendors, and the public should be used to implement the OMB memorandum’s requirements that agencies notify individuals when use of the AI results in an adverse decision or action that specifically concerns them (such as the denial of benefits or deeming a transaction fraudulent), and provide a right to appeal;
  9. how to structure procurements to reduce risks of acquiring an AI system or service that produces harmful, illegal, fraudulent, or deceptive content; and
  10. how to procure AI systems or services in a way that advances equitable outcomes and mitigates risks to privacy, civil rights, and civil liberties.

Interested companies have 30 days to submit comments. OMB is expected to issue proposed regulations on AI procurement standards later in 2024.


[1] See November 2023 GT Alert, Artificial Intelligence: Breaking Down President Biden’s First-of-Its-Kind Executive Order.