A British scientist known for his contributions to artificial intelligence has told Sky News that powerful AI systems are “out of control” and are “already causing harm”.
Professor Stuart Russell was one of more than 1,000 experts who last month signed an open letter calling for a six-month moratorium on developing systems more powerful than OpenAI’s newly unveiled GPT-4 – the successor to ChatGPT, an online chatbot powered by GPT-3.5.
Among the headline features of the new model is its ability to recognise and interpret images.
Speaking to Sky’s Sophy Ridge, Professor Russell said of the letter: “I signed it because I think the point is, we don’t understand how these [more powerful] systems work. We don’t know what they are capable of. That means we can’t control them, we can’t make them play by the rules.”
“People are concerned about disinformation, racial and gender bias” being output by these systems, he said.
He argued it will take time to “develop regulations to ensure these systems are good for people and not harmful”.
One of the biggest concerns, he said, is disinformation and deepfakes – videos or photos in which a person’s face or body has been digitally altered to look like someone else, often used maliciously or to spread disinformation.
He said that while disinformation for “propaganda” purposes has existed for a long time, the difference now is that, taking Sophy Ridge as an example, he could ask GPT-4 to try to “manipulate” her so that she becomes “less pro-Ukrainian”.
He said the technology could read Ridge’s social media presence and everything she has said or written, then mount an incremental campaign to “tweak” her news feed.
Professor Russell told Ridge: “The difference here is that I can now ask GPT-4 to read everything about Sophy Ridge on social media, everything Sophy Ridge has ever said or written, everything about Sophy Ridge’s friends, and then mount a gradual campaign by tweaking your news feed – maybe occasionally slipping some fake news into your news feed – so that you’re a little less supportive of Ukraine, and you start pushing back harder against politicians who say we should be supporting Ukraine in Russia’s war.
“It’s going to be so easy to do. What’s really scary is that we can do this to a million different people before lunch.”
“These systems can manipulate people in ways they don’t even realise, and that can have a huge impact,” the UC Berkeley computer science professor warned.
Ridge described it as “really, really scary” and asked if it was happening now, to which the professor replied: “Very likely, yes.”
China, Russia and North Korea have huge teams dedicated to “spreading disinformation”, and with AI “we have given them a power tool”, he said.
“The letter is really focused on next-generation systems. Right now, those systems have some limitations in their ability to build complex plans.”
He suggested that with next-generation systems, or those beyond, companies could be run by artificial intelligence systems. “You could see AI systems organising military operations,” he added.
“If you’re building systems that are more powerful than humans, how do humans retain control over those systems forever? That’s the real concern behind the open letter.”
The professor said he was trying to convince governments of the need to start planning ahead at a time when “we need to change the way the whole digital ecosystem … works”.
Since its release last year, Microsoft-backed OpenAI’s ChatGPT has prompted competitors to accelerate the development of similarly large language models and encouraged companies to integrate generative AI models into their products.
The British government recently announced proposals for a “light touch” regulatory framework around artificial intelligence.
A policy paper outlining the government’s approach would distribute responsibility for governing AI among its existing human rights, health and safety, and competition watchdogs, rather than creating a new agency dedicated to the technology.