President Joe Biden signed an artificial intelligence executive order on Monday, marking the nation's most sweeping attempt yet to rein in a technology that has sparked both fear and hype as it finds its way into a growing number of real-world applications.
The executive order lists guiding principles for the legislators and government agencies that will be responsible for crafting rules to govern the industry — a process that has already begun with engagement from Carnegie Mellon University's Block Center for Technology and Society.
Thought leaders from the center have given congressional testimony and held summer meetings with government officials, companies and other stakeholders on issues addressed in the executive order, which seeks transparency from companies developing AI technologies that could threaten national security. Developers must perform safety tests and share the results with the government, the document says.
Another goal is to make it obvious when content is made using AI.
The U.S. Department of Commerce will create standards that companies like Google and OpenAI could use to label AI-generated images, text and audio.
Ramayya Krishnan, the Block Center’s faculty director and dean of CMU’s Heinz College, called the presidential order a "comprehensive" first step.
"The commitments on AI safety, security and reliability are the strongest I have seen globally," he said. "I look forward to rule making to follow as well as legislation from Congress aligned with the themes highlighted in this order. It is essential for both our economic and national security."
Mr. Biden's order came days before tech executives and government leaders gathered in Britain for an international AI safety summit. The European Union is expected to finalize an AI regulation package by the end of the year that would require generative AI systems like ChatGPT to be reviewed before commercial release.
Similar to an executive order signed by Pennsylvania Gov. Josh Shapiro in September, Mr. Biden's order leverages the government's role as a top purchaser of AI products to push developers in a safer direction.
It will require testing against several safety thresholds — thresholds likely to be set by the National Institute of Standards and Technology, which first released a framework for managing the risks of AI in January.
Congress is still in the early stages of crafting bipartisan legislation to respond to AI.
The White House first announced plans for the executive order in July. Two months later, it brokered a set of voluntary commitments agreed to by 15 companies, including Google and OpenAI, the maker of ChatGPT.
It's no coincidence that President Biden unveiled his policy days before the international safety summit in Britain, Mr. Krishnan said.
"We are as much in a technology innovation competition as much as we are in a public policy race related to AI," he said.
Mr. Krishnan is part of Mr. Biden's national AI advisory committee, a 25-member group of academics and company executives appointed to serve through 2025.
Mr. Biden's order also includes efforts to build a domestic workforce with AI expertise.
Silicon Valley has been pushing for years for better talent-based immigration. The government allots 65,000 H-1B visas each year to skilled workers, who can work in the U.S. for up to six years.
Last month, the Department of Homeland Security proposed changes that would streamline H-1B applications without increasing the number of slots. About 1 in 10 applicants is accepted through the H-1B lottery system.
Without a strong talent base, it will also be impossible for lawmakers to understand the tools they seek to regulate, Mr. Krishnan said.
In the meantime, he said, some safety steps can still be taken.
AI models should come with transparent guides akin to food nutrition labels, so users understand how the models have been trained, Mr. Krishnan said — a point he raised in congressional testimony this summer.
The Block Center is also helping industries create guidelines for individual AI use cases. It will release a report in the next few weeks on "operationalizing AI" across various sectors.
Using AI in hiring is a different application from using it in healthcare or in autonomous vehicles, said Steve Wray, the Block Center's executive director.
"That's where it actually becomes a mechanism for an outcome," he said.
The main takeaway from the forthcoming report is that "there haven't been firm decisions," Mr. Wray said. "People are still working to understand the risks."
Mr. Krishnan said some regulators believe use cases like crowd-based facial recognition and autonomous weapon systems should be banned outright. (The EU AI Act seeks to ban real-time facial recognition.)
As risk matrices are constructed, those uses might be deemed "an unacceptable risk," whereas autonomous vehicles might be acceptable within reason.
But even autonomous vehicles are setting off alarm bells.
A week before Mr. Biden's order, the California Department of Motor Vehicles barred Cruise from operating its driverless taxis in San Francisco, following a gruesome pedestrian accident earlier in the month. The DMV determined that the vehicles “are not safe for the public's operation.”
Evan Robinson-Johnson: ejohnson@post-gazette.com or @sightsonwheels
First Published: November 4, 2023, 9:30 a.m.
Updated: November 5, 2023, 1:22 a.m.