Typically, when representatives of tech companies appear before the US Senate, they are there to complain about the prospect of regulation and to push back against claims that their technology is causing harm.
That is what made this committee hearing on artificial intelligence such a rare thing.
Sam Altman, OpenAI's chief executive, admitted on Tuesday: "My biggest concern is that we…the industry…to the world."
He went on to say that "government regulatory intervention is essential to mitigate the risks of increasingly powerful models".
Worried US politicians, of course, welcomed this.
The hearing on artificial intelligence began with a pre-recorded statement from Democratic Sen. Richard Blumenthal, who spoke about the technology’s potential benefits and serious risks.
But it wasn't him talking. It was an AI, trained on recordings of his speeches, reading a statement generated by GPT-4.
It was another of the creepy party tricks that artificial intelligence is making us increasingly familiar with.
Senators are worried not just about the safety of individuals at the mercy of AI-generated ads, misinformation or outright fraud, but about democracy itself.
What could an artificial intelligence trained to subtly influence the political views of targeted groups of voters do to elections?
Mr Altman of OpenAI said this is one of his biggest concerns.
In fact, he agreed with nearly every concern expressed by senators.
The only point on which he differed was his belief that the rewards outweigh the risks.
The unlikely inspiration for controlling artificial intelligence
Well, if they all agree, how do you regulate AI?
In fact, how do you create laws to restrict a technology that even its creators don’t fully understand?
This is an issue the EU is currently grappling with as it considers regulation scaled to the risks posed by each use of AI.
Healthcare and banking would be high risk; the creative industries, low.
Today offered an interesting insight into a possible American approach: food labelling.
Senator Blumenthal asked whether future AI models, whatever their purpose, should be checked by independent testing labs and labelled based on their nutritional content.
In this case, the nutrition in question is the data used to train the model.
Is it a junk diet of anything and everything on the internet, like the one GPT-4 and Google's Bard AI were trained on?
Or high-quality data from the healthcare system or government statistics?
And even if it is organic and free range, how reliable are the outputs of an AI model fed on that data?
Looming questions over trust in AI
Mr Altman said he agreed with the senator's thinking and looked forward to a future in which the public and regulators have enough transparency to understand what is inside an AI.
But there is a contradiction in Mr Altman's evidence, one that looms over the whole question of AI regulation.
While his beliefs are no doubt deeply held, the way his AI, and those of others, are currently being deployed does not reflect them.
OpenAI has struck a multibillion-dollar deal with Microsoft, which is embedding GPT-4 into its search engine Bing to compete with Google's Bard AI.
We know very little about how these AIs handle their junk-food diets, or how reliable their regurgitations are.
Would representatives of those companies have taken a different position on regulation if they had been sitting before the committee?
To date, other big tech companies have resisted attempts to regulate their social media offerings.
Their main defence, especially in the US, has been the First Amendment's protection of free speech.
An interesting question for US constitutional experts: does artificial intelligence have a right to free speech?
If not, might the regulation that many AI creators say they would like to see be easier to implement than expected?