White House unveils new AI regulations for federal agencies
The Biden administration announced that the Office of Management and Budget (OMB) is rolling out new artificial intelligence (AI) regulations for federal agencies, building on the executive order the president signed last year, which requires AI developers to share certain information with the government.
In a press call Wednesday afternoon, Vice President Kamala Harris said the new regulations, which include mandatory risk reporting and transparency rules requiring agencies to inform the public when they are using AI, would ‘promote the safe, secure and responsible use of AI.’
‘When government agencies use AI tools, we will now require them to verify that those tools do not endanger the rights and safety of the American people,’ Harris said.
‘I’ll give you an example. If the Veterans Administration wants to use AI in VA hospitals to help doctors diagnose patients, they would first have to demonstrate that AI does not produce racially biased diagnoses.’
Federal agencies will also be required to appoint a chief AI officer to oversee technology used in their departments ‘to make sure that AI is used responsibly.’
Every year, agencies will also have to provide an online database listing their AI systems and an assessment of the risks they might pose.
Harris said the new regulations were shaped by leaders in the public and private sectors, including computer scientists and civil rights leaders. A White House fact sheet says the new policy will ‘advance equity and civil rights and stand up for consumers and workers.’
OMB Director Shalanda Young said the new AI policy will require agencies to ‘independently evaluate’ their uses of AI and ‘monitor them for mistakes and failures and guard against the risk of discrimination.’
‘AI presents not only risks but also a tremendous opportunity to improve public services and make progress on societal challenges like addressing climate change, improving public health and advancing equitable economic opportunity when used and overseen responsibly,’ Young said on the press call.
Each federal agency could use different AI systems and will need to have an independent auditor assess the risks those systems pose, a senior White House official said on the call.
The Biden administration has recently been taking more steps to curtail potential dangers of AI, including threats to users’ data. In October, President Biden signed what the White House called a ‘landmark’ executive order that contains the ‘most sweeping actions ever taken to protect Americans from the potential risks of AI systems.’
Among them is a requirement that AI developers share the results of their safety tests, a practice known as red-team testing, with the federal government.
Last month, a coalition of state attorneys general warned that Biden’s executive order could be used by the federal government to ‘centralize’ government control over the emerging technology, and that such control could be used for political purposes, including censoring what the government may deem disinformation.
In a letter to Commerce Secretary Gina Raimondo, Utah Attorney General Sean Reyes, a Republican, and 20 other state attorneys general warned that the order would inject ‘partisan purposes’ into decision-making, including by forcing designers to prove they can tackle ‘disinformation.’
‘The Executive Order seeks — without Congressional authorization — to centralize governmental control over an emerging technology being developed by the private sector,’ the letter states. ‘In doing so, the Executive Order opens the door to using the federal government’s control over AI for political ends, such as censoring responses in the name of combating “disinformation.”’
Fox News’ Greg Norman and Adam Shaw contributed to this report.