Science & Tech
Sinead Butler
Jan 26, 2023
An expert has warned MPs about the negative consequences of "superhuman" artificial intelligence - and we're talking the end-of-humanity kind of bad.
The House of Commons Science and Technology Committee heard from Michael Cohen, a doctoral student at Oxford University, who spoke of the "particular risk" AI poses and argued it should be regulated like nuclear weapons.
“With superhuman AI, there is a particular risk that is of a different sort of class, which is... it could kill everyone,” Cohen said, as per The Independent, before elaborating with a canine analogy.
"If you imagine training a dog with treats: it will learn to pick actions that lead to it getting treats, but if the dog finds the treat cupboard, it can get the treats itself without doing what we wanted it to do.
"If you have something much smarter than us monomaniacally trying to get this positive feedback, and it's taken over the world to secure that, it would direct as much energy as it could to securing its hold on that, and that would leave us without any energy for ourselves."
"I think the bleak scenario is realistic because AI is attempting to bottle what makes humans special - that has led to humans completely changing the face of the Earth," Cohen added.
He also provided an example of a species driven to extinction by humans - the dodo.
"If we're able to capture that in technology, then, of course, it's going to pose just as much risk to us as we have posed to other species: the dodo is one example," the doctoral student explained, as per The Telegraph.
Researchers also noted how the development of AI had become a "literal arms race" as countries and technology companies vie to create the most advanced (and dangerous) technology in order to gain a military and commercial advantage.
"I think we're in a massive AI arms race, geopolitically with the US versus China, and among tech firms there seems to be this willingness to throw safety and caution out the window and race as fast as possible to the most advanced AI."
“Artificial systems could become as good at outfoxing us geopolitically as they are in simple environments such as games.”
While a pretty bleak picture has been painted, Cohen remains positive about the future, pointing to how nuclear weapons have been regulated.
“There are some reasons for hope in that we have been pretty good at regulating the use of nuclear weapons,” he added.