White House issues Executive Order on the development and use of artificial intelligence
This past week, the White House issued a sweeping Executive Order (EO) concerning the safe, secure, and trustworthy development and use of artificial intelligence (AI). The EO lays out numerous prerogatives for federal agencies to circumvent the potentially harmful use and effects of AI, including the impact on technology development, the U.S. workforce, national security, as well as privacy and civil rights.
The President acknowledges the “speed at which AI capabilities are advancing compels the United States to lead in this moment for the sake of our security, economy, and society.”
The EO sets forth eight guiding principles and priorities, among which include “Ensuring the Safety and Security of AI Technology,” which establishes directives that federal agencies must, within nine months, develop guidelines, standards, and best practices for AI and security. In particular, federal agencies must establish guidelines “to enable developers of AI, especially of dual-use foundation models, to conduct AI red-teaming tests to enable development of safe, security, and trustworthy systems.” The EO defines “red-teaming” as a testing effort to find flaws and vulnerabilities in an AI system in a controlled environment,” a “dual-use foundation model” as an “AI model that is trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters.”
Additionally, the EO stipulates that the Secretary of Commerce is to require, within 90 days, companies “developing or demonstrating an intent to develop potential dual-use foundation models to provide the federal government, on an ongoing basis, with information, reports, or records” regarding:
- any ongoing or planned activities related to training, developing, or producing dual-use foundation models;
- the ownership of “model weights,” defined as the “numeral parameter[s] within an AI model that helps determine the model’s outputs in response to inputs,” and the cybersecurity measures to protect those model weights;
- the results of any developed dual-use foundation model’s performance in red-team testing based on guidance, to be developed by the National Institute of Standards and Technology (NIST), or before the NIST guidance is available, the results of red-team testing related to lowering the barrier to entry for the development, acquisition, and use of biological weapons by non-state actors; the discovery of software vulnerabilities and development of associated exploits; the use of software or tools to influence real or virtual events; the possibility for self-replication or propagation; and associated measures to meet safety objectives.”
The guidance also institutes a reporting obligation for any person or entity that acquires, develops, or possesses a “large-scale computing cluster,” or until a “large-scale computing cluster” is defined, any model trained using a quantity of computing power greater than 1026 integer or floating-point operations, or using primarily biological sequence data and using a quantity of computing power greater than 1023 integer or floating operations,” or a “theoretical maximum computing capacity of 1020 integer or floating-point operations per second for training AI.”
With the proliferation of synthetic content (e.g., generative AI, including images, videos, audio, text, and other digital content), the EO sets forth an eight-month deadline for the Secretary of Commerce to submit a report identifying existing standards, tools, methods, and practices, as well as the potential development of further science-backed standards and techniques, (i) authenticating content and tracking its provenance (ii) labeling synthetic content, such as using watermarking (embedded information for verifying the authenticity of AI output), and (iii) detecting synthetic content.
To promote innovation in safe and trustworthy AI, the EO establishes that the director of the United States Patent and Trademark Office (USPTO), within four months, publish guidance to USPTO Patent Examiners and Applicants addressing inventorship and the use of AI in the inventive process, and within nine months, issue guidance to USPTO Examiners addressing other considerations related to AI and patents, including the potential for updated guidance on patent eligibility.
Key industry takeaways
Assuming that the EO stays in effect, 2024 will likely introduce a long list of reporting and testing obligations for industry, particularly with respect to the use and testing of dual-use foundation models (and the ownership of any model weights thereof), and for industries utilizing or acquiring large scale computing power. It is also conceivable that requirements will be instituted to label generative AI (for authenticity verification).
With respect to product development and innovation, product developers and inventors should expect to see new guidelines from the USPTO concerning issues of inventorship and patent eligibility.
What to do in the meantime?
Companies should proactively think about how it will comply with the prerogatives outlined in the EO, particularly with respect to any testing, development, or operation of dual-use foundation models, or any safeguarding technology utilized to protect or circumvent the malicious use of AI.
For a summary FACT SHEET on the full EO, click here.
For a copy of the entire EO, click here.
Should you have any questions concerning your patent or other intellectual property matters, do not hesitate to contact the attorneys listed above or any member of the McDonald Hopkins Intellectual Property Department.