The White House has announced plans to launch seven new National AI Research Institutes to focus on the development of ethical and transformative AI for the public good. The move comes amid growing concern about the impact of AI, fueled in part by the success of generative AI systems like ChatGPT. The White House also plans to support a mass hacking exercise at the Defcon security conference this summer, which will probe generative AI systems from companies including Google, Nvidia, and Stability AI.
The event will see thousands of participants, including hackers and policy experts, explore how generative models align with the Biden administration's AI Bill of Rights and a National Institute of Standards and Technology risk management framework. Points will be awarded under a “Capture the Flag” format to encourage participants to test for a wide range of bugs or unsavory behavior from the AI systems.
The renewed federal interest in AI is welcome, according to Sarah Myers West, managing director of the AI Now Institute. However, she warns that it remains to be seen how meaningful the government's actions will be, and she is wary of how closely profit-driven tech companies are involved in the White House's new attention to the technology. She stresses that regulators and the broader public, not industry, must define what responsible development of the technology looks like.
In addition to companies developing AI for profit, federal agencies have some work to do on their own use of AI. A recent study found that almost no federal agencies had complied with an executive order requiring them to make their AI plans available to the public, and that barely half had shared an inventory of how they use AI. The White House Office of Management and Budget is due to release new guidelines in the coming months.
The White House's intervention comes as the appetite for regulating the technology grows around the world. In the European Union, lawmakers are negotiating updates to a sweeping AI Act that will restrict and even ban some uses of AI, including generative AI. Brazilian lawmakers are also considering regulation geared toward protecting human rights in the age of AI. And last month, China's government announced draft regulation for generative AI.
Just last week, Democratic senator Michael Bennet introduced a bill that would create an AI task force focused on protecting citizens' privacy and civil rights.
Also last week, four US regulatory agencies, including the Federal Trade Commission and Department of Justice, jointly pledged to use existing laws to protect the rights of American citizens in the age of AI.
And the office of Democratic senator Ron Wyden confirmed this week that it plans to reintroduce the Algorithmic Accountability Act, which would require companies to assess their algorithms and disclose when automated systems are in use.