White House issues key artificial intelligence actions following executive order
On January 29, 2024, the Biden-Harris administration announced that Deputy Chief of Staff Bruce Reed had convened top officials from various federal departments and agencies to discuss their progress on the directives set out in a recent executive order on the safe, secure, and trustworthy development and use of artificial intelligence (AI). President Biden signed that sweeping executive order on October 30, 2023, and it is now being put into practice.
The Executive Order
The October EO provided federal agencies with a framework to help mitigate the potentially harmful uses and effects of AI, including its impact on technology development, the U.S. workforce, and national security. The EO established programs, controls, and policies under eight main areas:
- Safety and Security
- Privacy
- Equity and Civil Rights
- Consumer, Patient, and Student Fairness and Protection
- Workforce Training and Development
- Innovation and Competition
- Global Development of AI
- Responsible and Effective Government Use
Three months later: Where do things stand?
According to the White House, the agencies have completed the EO’s “key AI actions” within just three months, marking “substantial” progress toward the EO’s mandate in the areas of safety and security and innovation.
To mitigate risks to safety and security, federal agencies have:
- Used Defense Production Act authorities to compel developers of the most powerful AI systems to report vital information, especially AI safety test results, to the Department of Commerce;
- Proposed a draft rule that would require U.S. cloud companies providing computing power for foreign AI training to report that they are doing so; and
- Completed risk assessments covering AI’s use in every critical infrastructure sector.
To promote the innovation of AI and “seize AI’s enormous promise,” federal agencies:
- Launched a pilot of the National AI Research Resource, catalyzing broad-based innovation, competition, and more equitable access to AI research;
- Launched an AI Talent Surge to accelerate hiring AI professionals across the federal government, including through a large-scale hiring action for data scientists;
- Began the EducateAI initiative to help fund educators creating high-quality, inclusive AI educational opportunities at the K-12 through undergraduate levels;
- Announced the funding of new Regional Innovation Engines (NSF Engines), including with a focus on advancing AI; and
- Established an AI Task Force at the Department of Health and Human Services to develop policies to provide regulatory clarity and catalyze AI innovation in health care.
The EO included directives with deadlines ranging from 30 days to one year. The White House has compiled and summarized a list of completed actions, which is available here.
What this means for you
Companies should proactively consider how they will comply with the directives outlined in the EO, particularly if they contract with the federal government. Employers should draw on the forthcoming AI guidance from federal agencies when shaping their own practices for how AI is developed and deployed.
For a summary fact sheet on the full EO, click here.
For a copy of the entire EO, click here.
McDonald Hopkins’ key takeaways on the EO are available here.
For more updates on data privacy law from McDonald Hopkins, please subscribe to receive our publications or view the links below for recent updates on other state data privacy legislation. In addition, if you have questions about your company’s compliance with cyber regulations, concerns about vulnerability to a ransomware attack or other breach, or if you want to learn more about proactive cybersecurity defense, contact a member of McDonald Hopkins’ national data privacy and cybersecurity team.