As Congress lags, California lawmakers take on AI regulations

Samuel Altman, CEO of OpenAI, looks on during a Senate Judiciary Subcommittee on Privacy, Technology, and the Law oversight hearing to examine artificial intelligence, on Capitol Hill in Washington, D.C., on May 16, 2023. (Andrew Caballero-Reynolds / AFP via Getty Images)

It's been eight months since Sam Altman, CEO of ChatGPT-maker OpenAI, urged U.S. senators to pass laws forcing accountability from big players like Amazon, Google and OpenAI investor Microsoft.

"The number of companies is going to be small, just because of the resources required, and so I think there needs to be incredible scrutiny on us and our competitors," Altman said in May 2023.

Though the federal government has studied the issue, the scrutiny and regulation suggested by Altman haven't happened yet.

That's even as large AI models keep expanding into promising territory, such as developing new antibiotics and helping humans communicate with whales, while also raising worries about turbocharging election-season fraud and automating hiring discrimination.

In 2023, many world-leading experts signed a statement on AI risks, warning policymakers of possible disaster.

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," it reads.

Democratic state Senator Scott Wiener, of San Francisco, says California lawmakers are rolling out legislation that could provide a model for other states to follow, if not the federal government.

"I would love to have one unified, federal law that effectively addresses AI safety. Congress has not passed such a law. Congress has not even come close to passing such a law," he said.

Wiener argues his proposal is the most ambitious so far in the country. And, as the new chair of the state's Senate Budget Committee, he says he hopes to use his position to pass aggressive legislation.

The California measure, Senate Bill 1047, would require companies building the largest and most powerful AI models to test for safety before releasing those models to the public.

AI companies would have to tell the state about their testing protocols and guardrails, and if the tech causes "critical harm," California's attorney general could sue.

Wiener says his legislation draws heavily on the Biden administration's 2023 executive order on AI.

There are more than 400 AI-related bills pending across 44 states, according to BSA | The Software Alliance. But with many of the largest companies working on generative AI models based in the San Francisco Bay Area, measures working their way through the Capitol in Sacramento could become legal landmarks, should they pass.

According to the think tank Brookings, more than 60% of generative AI jobs posted in the year ending in July 2023 were clustered in 10 U.S. metro areas, with the Bay Area far and away the leader.

In the absence of federal oversight, there are industry efforts afoot to allay concerns about AI, including a recent collective promise to combat deceptive use of AI in 2024 elections around the world. But this is a voluntary effort, raising the question of who will hold the companies accountable — especially as the technology gets better and better. OpenAI recently introduced a text-to-video model called Sora that features stunning capabilities leagues ahead of models released just a year ago.

In the meantime, the Federal Trade Commission (FTC) and other regulators are exploring how to use existing laws to rein in AI developers, as well as the nefarious individuals and organizations using the technology to break the law. But many experts say that won't be enough.

Lina Khan, the FTC's chair, raised this question during an agency summit on AI last month: "Will a handful of dominant firms concentrate control over these key tools, locking us into a future of their choosing?"

Hany Farid, a UC Berkeley School of Information professor specializing in digital forensics, misinformation and human perception, questioned how effective a patchwork of state regulations can be at reining in the industry.

"I don't think it makes sense for individual states to try to regulate in this space, but if any state is going to do it, it should be California. The upside of state regulation is that it puts more pressure on the federal government to act so that we don't end up with a chaotic state-by-state regulation of tech," he said.

Grace Gedye, an AI policy analyst at Consumer Reports, added that, in the current political climate, states might have to take the lead on the issue. "We definitely can't hold our breath [for Congress to act], because we could be waiting 10 or 20 years," she said.

Copyright 2024 KQED

Rachael Myrow