President Joe Biden has said artificial intelligence (AI) “could be” dangerous, adding that it remains to be seen how the technology will affect society.
Speaking at the start of a meeting with science and technology advisers on Tuesday, Biden said technology companies had a responsibility to ensure their products are safe before their release.
“Tech companies have a responsibility, in my view, to make sure their products are safe before making them public,” Biden said at the opening of a meeting of the President’s Council of Advisors on Science and Technology.
Asked if AI was dangerous, Biden said it “remains to be seen” but “it could be”.
Biden: AI developers need to address “potential risks”
Biden said AI could help tackle challenges like disease and climate change but that developers of the technology would also have to address “potential risks to our society, to our economy, to our national security”.
The president said the effects of social media on young people’s mental health showed the harm new technologies can inflict if safeguards are not in place.
Biden’s remarks come amid growing debate about how to regulate AI, with some prominent voices calling for a pause on the development of the technology until safeguards can be put in place.
In an open letter published last month, a number of tech leaders including Tesla founder Elon Musk and Apple co-founder Steve Wozniak called for a pause on the rollout of AI due to the technology’s “profound risks to society and humanity.”
“Powerful AI systems should only be developed once we are confident that their effects will be positive and their risks will be manageable,” the letter said.
The US has so far taken a hands-off approach to AI. Lawmakers have shown little urgency to regulate the technology, relying instead on existing laws to govern its use.
The US Chamber of Commerce recently called for AI regulation to ensure the technology does not hurt growth or become a national security risk, but no action has yet been taken.
Italy last week became the first Western country to ban ChatGPT after its data protection watchdog said there appeared to be “no legal basis” for its mass collection of data.
European Union legislators are negotiating regulations to govern the use of the technology across the 27-nation bloc.
The pros and cons of AI regulation
Writing in The Conversation, leading experts from Australia, France and Germany say that there are several arguments both for and against allowing caution to drive the control of AI.
On one hand, AI is celebrated for being able to generate all forms of content, handle mundane tasks and detect cancers, among other things. On the other hand, it can deceive, perpetuate bias and plagiarise, and it has some experts worried about humanity’s collective future.
Some scholars have argued that excessive regulation may hinder AI’s full potential and interfere with “creative destruction” – a theory that long-standing norms and practices must be dismantled for innovation to thrive.
Likewise, over the years business groups have pushed for regulation that is flexible and limited to targeted applications, so that it doesn’t hamper competition. And industry associations have called for ethical “guidance” rather than regulation – arguing that AI development is too fast-moving and open-ended to adequately regulate.
But citizens seem to advocate for more oversight. According to reports by Bristows and KPMG, about two-thirds of Australian and British people believe the AI industry should be regulated and held accountable.