Vice President Harris Rolls Out First Government-Wide Policy to Mitigate AI Risks
WASHINGTON — Vice President Kamala Harris on Wednesday rolled out the Biden administration’s first government-wide policy intended to mitigate the risks associated with artificial intelligence while still enabling its use to advance the public interest.
The new policy, which is being issued through the White House Office of Management and Budget, builds on the vision of the future for the technology that Harris laid out last year at the AI Safety Summit in London.
It mandates that federal agencies implement concrete safeguards when using AI in any way that could negatively impact the rights or personal safety of American citizens.
Those safeguards, which must be in place by Dec. 1, 2024, include a range of actions through which departments and agencies can reliably assess, test and monitor the impact of artificial intelligence on the general public, mitigate the risks of algorithmic discrimination and provide the public with transparency into how the government is using the technology.
The policy goes on to state that if an agency cannot apply these safeguards, it must cease all use of AI unless agency officials can show that ceasing its use would create an unacceptable impediment to critical operations.
“All leaders, whether they be in government, civil society or the private sector, have a moral, ethical and societal duty to make sure artificial intelligence is adopted and advanced in a way that protects the public from potential harm,” said Harris during a conference call with reporters Wednesday afternoon.
The new policy also directs federal agencies to manage risks in the procurement of AI by adopting policies that ensure fair competition, data protection and transparency.
A Request for Information, to be published in the Federal Register on Thursday, will collect input from the public on ways to ensure that private sector companies supporting the federal government also follow the best available practices and requirements.
“These new, binding requirements will promote the safe, secure and responsible use of AI by our federal government,” Harris said.
To illustrate how the new policy will work, the vice president offered a hypothetical in which the Department of Veterans Affairs wanted to deploy AI in all of its hospitals.
“They would first have to demonstrate that AI does not produce racially biased diagnoses,” she said.
“The second binding requirement that they, and any other agency using AI would have to meet, relates to transparency, which we believe should facilitate accountability to the American people,” she said.
Toward that end, the Biden administration is requiring that every year, U.S. government agencies publish online a list of the AI systems that they are using accompanied by an assessment of the risks those systems might pose and an explanation of how they are managing those risks.
“Finally, we are requiring that every federal agency designate a chief AI officer who has the experience, expertise and authority to oversee all AI technologies being used by that agency,” Harris said.
“This is to make sure that AI is used responsibly, understanding that we must have senior leaders across our government who are specifically tasked with overseeing AI adoption and use,” she said.
The vice president went on to explain that the new requirements were shaped in consultation with a wide range of leaders from across the public and private sectors, including computer scientists, civil rights leaders, legal scholars and business leaders.
She closed by saying it is her hope, and the hope of President Joe Biden, that the new domestic policies will serve as a model for global action.
Harris has long been the administration’s point person when it comes to artificial intelligence technologies.
In November 2023, she traveled to London for the first-ever AI Safety Summit, where she outlined the administration’s vision for the future of AI.
During her speech, Harris announced a series of initiatives to promote safe, secure and trustworthy AI.
They included launching a new U.S. AI Safety Institute, unveiling new draft policy guidance on government’s use of AI, sharing a commitment from 30 nations to join the U.S. in endorsing a Political Declaration on the Responsible Use of AI and Autonomy and announcing a $200 million funding pledge by 10 leading foundations toward efforts to mitigate AI harms and promote responsible use and innovation.
Six months earlier, she convened a meeting with the CEOs of companies at the forefront of AI innovation, resulting in voluntary commitments from 15 leading AI companies to help move toward safe, secure and transparent development of the technology.
Two months after that, Harris brought together consumer protection, labor and civil rights leaders to discuss the risks associated with the technology.
Shalanda Young, director of the White House Office of Management and Budget, said the new directive “places people and communities at the center of the government’s innovation goals.”
“Federal agencies have a distinct responsibility to identify and manage AI risks because of the role they play in our society, and the public must have confidence that the agencies will protect their rights and safety,” she said.
Young also announced that the administration will soon launch a national talent search for AI professionals, with the goal of hiring at least 100 of them this summer.
“Today’s announcements represent a major milestone in implementing President Biden’s landmark executive order on AI,” Young said. “OMB will be taking further action later this year to address federal procurement of artificial intelligence technology.”
Dan can be reached at [email protected] and at https://twitter.com/DanMcCue