
‘AI Can Go Quite Wrong’ Sam Altman Warns Congress

Altman called for parameters for AI creators to avoid causing “significant harm to the world.” Credit: Video screenshot/YouTube/C-Span

OpenAI CEO Sam Altman warned the US Congress on Tuesday that artificial intelligence development can go quite wrong and called on the government to regulate the sector.

“If this technology goes wrong, it can go quite wrong, and we want to work with the government to prevent that from happening,” said Altman, whose company is at the forefront of generative AI technology with its ChatGPT tool.

Testifying before the Senate Judiciary Committee, he said he is ultimately optimistic that the innovation will benefit people on a grand scale.

However, he called on lawmakers to create parameters for AI creators to avoid causing “significant harm to the world.”

“We think it can be a printing press moment,” Altman said. “We have to work together to make it so.”

When pressed on his worst fear about AI, Altman was frank about the risks of his work.

“My worst fears are that we—the field, the technology, the industry—cause significant harm to the world. I think that can happen in a lot of different ways,” Altman said.

He did not elaborate, but warnings from critics range from the spread of misinformation and bias to bringing about the complete destruction of biological life.

He also acknowledged the impact that AI could have on the economy, including the likelihood that AI technology could replace some jobs, leading to layoffs in certain fields.

Altman told legislators he was worried about the potential impact on democracy and about how AI could be used to spread targeted misinformation during elections.

Altman proposes a plan to regulate AI

He laid out a general three-point plan for how Congress could regulate AI creators.

First, he supported the creation of a federal agency that can grant licenses to create AI models above a certain threshold of capabilities, and can also revoke those licenses if the models don’t meet safety guidelines set by the government.

Second, Altman said the government should create safety standards for high-capability AI models (such as barring a model from self-replication) and create specific functionality tests the models have to pass, such as verifying the model’s ability to produce accurate information or ensuring it does not generate dangerous content.

And third, he urged legislators to require independent audits by experts unaffiliated with the creators or the government to ensure that AI tools operate within the legislative guidelines.

Altman, 38, has become a spokesman of sorts for the burgeoning industry. He has not shied away from addressing the ethical questions that AI raises and has pushed for more regulation.

He was named one of the 100 most influential people in the world by Time magazine in 2023, one of the “Best Young Entrepreneurs in Technology” by Businessweek in 2008, and the top investor under 30 by Forbes magazine in 2015.

AI dangers and the need for oversight

Joining Altman in testifying before the committee were two other AI experts: Gary Marcus, professor of Psychology and Neural Science at New York University, and Christina Montgomery, IBM Chief Privacy & Trust Officer. The three witnesses supported the governance of AI at both federal and global levels, with slightly varied approaches.

Tesla and Twitter CEO Elon Musk and Apple co-founder Steve Wozniak were among several tech experts who recently called for a pause on AI development.

In a letter, the experts warned of potential risks to society and humanity as tech giants such as Google and Microsoft race to build AI programs that can learn independently.

“Powerful AI systems should only be developed once we are confident that their effects will be positive and their risks will be manageable,” the letter said.

Earlier in May, AI pioneer Geoffrey Hinton left Google, warning about the growing dangers of developments in the field.

Hinton, who nurtured the technology at the heart of chatbots like ChatGPT for half a century, told the New York Times (NYT): “It is hard to see how you can prevent the bad actors from using it for bad things.”
