A ragtag group of public figures has united under one message: the big tech world should not rush towards AI superintelligence.
Superintelligence, a step beyond artificial general intelligence (AGI), is a hypothetical AI system that could outperform human intelligence at virtually every task. It is the holy grail of the AI industry, and big tech companies have poured vast sums of money and resources into the race to achieve it first. Meta, for example, has dedicated an entire division and a multibillion-dollar spending spree to this goal. Meta CEO Mark Zuckerberg claims superintelligence is “in sight,” but other experts are more skeptical of the timeline, or of whether the technology can ever reach that level of sophistication.
But even those who think superintelligence is achievable don’t agree with the way AI is evolving towards it. That includes the more than 1,300 and counting signatories to the “Statement on Superintelligence,” put forth by the Future of Life Institute.
“We call for a prohibition on the development of superintelligence, not lifted before there is 1. broad scientific consensus that it will be done safely and controllably, and 2. strong public buy-in,” the statement reads.
Both those conditions are currently lacking. Many top computer scientists were among the signatories raising concerns about the safe development of superintelligence. Chief among them were Apple co-founder Steve Wozniak; Geoffrey Hinton and Yoshua Bengio, the two scientists deemed “the godfathers of AI”; and Stuart Russell, a computer science professor at UC Berkeley who is considered one of the most respected figures in the AI world.
“This is not a ban or even a moratorium in the usual sense. It’s simply a proposal to require adequate safety measures for a technology that, according to its developers, has a significant chance to cause human extinction. Is that too much to ask?” Russell said in a public statement accompanying the letter.
There is also limited public buy-in. According to a Pew Research survey published last week, public concern about AI’s increased use in daily life outweighs excitement globally. When the results were broken down by country, Americans were the most concerned.
The signatories are not against AI; the statement even applauds the many benefits it can bring to society. But they say that the tech industry’s rush to build superintelligence raises concerns “ranging from human economic obsolescence and disempowerment, losses of freedom, civil liberties, dignity, and control, to national security risks and even potential human extinction.”
The statement has brought together an eclectic group of people from all industries and both sides of the political divide. Among the signatories are right-wing media figures and Trump allies Steve Bannon and Glenn Beck, and Obama-era national security advisor and Biden-era Domestic Policy Council director Susan Rice. There are former congressmen from both sides of the aisle; the former UN High Commissioner for Human Rights, Mary Robinson; the Pope’s AI advisor, Friar Paolo Benanti; and even the Duke and Duchess of Sussex, Prince Harry and Meghan.
The signatories also include actors like Joseph Gordon-Levitt, who published an op-ed against Meta’s AI chatbots in The New York Times, musicians like will.i.am and Grimes, and authors like Yuval Noah Harari.
“Superintelligence would likely break the very operating system of human civilization – and is completely unnecessary,” Harari said. “If we instead focus on building controllable AI tools to help real people today, we can far more reliably and safely realize AI’s incredible benefits.”
It’s not the first letter in which public figures have come together to warn the public and the industry of the dangers of AI. Back in 2023, a group of AI executives, including OpenAI CEO Sam Altman and Anthropic CEO Dario Amodei, signed a letter asking governments around the world to make the mitigation of the risk of extinction from AI a top priority, in the same way pandemics and nuclear war are treated.
Another 2023 open letter, also by the Future of Life Institute, gathered more than 33,000 signatories, including the likes of Elon Musk, calling for a six-month pause on AI experiments training models more powerful than GPT-4. The pause was ignored, and OpenAI released GPT-4o last year and GPT-5 earlier this year. Both models were at the center of controversy this year when users revolted in grief after GPT-5 replaced GPT-4o, a model that has been criticized for evoking emotional reliance and addictive behavior in users.
Pointedly, this new Statement on Superintelligence includes comments from notable people in the AI industry who did not sign. Those names are Altman, Amodei, Mustafa Suleyman (CEO of Microsoft AI), David Sacks (White House AI and Crypto Czar), and Elon Musk (you know who he is).
The AI industry is innovating at breakneck speed and hurtling towards superintelligence with no regulatory guardrails. Even many leading figures who haven’t signed the Statement on Superintelligence have themselves warned of the potential risks involved.
“Development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity,” OpenAI CEO Sam Altman said in a blog post from 2015.
Original Source: https://gizmodo.com/prince-harry-and-steve-bannon-join-forces-against-superintelligence-development-2000675466
Disclaimer: This article is a reblogged/syndicated piece from a third-party news source. Content is provided for informational purposes only. For the most up-to-date and complete information, please visit the original source. Digital Ground Media does not claim ownership of third-party content and is not responsible for its accuracy or completeness.