In an unexpected and wide-ranging coalition stretching from right-wing strategist Steve Bannon to Meghan Markle, the Duchess of Sussex, more than 800 global figures have issued an open warning to stop the race to build ‘superintelligence’: artificial intelligence (AI) that would surpass humans. Their message is clear: this is not simply an ethical advisory about algorithms but a major political event, calling for a global ban on developing machines that could eclipse human intelligence.
“Superintelligence”, in this context, means an artificial system capable of outperforming humans across virtually all cognitive tasks, including learning, reasoning, planning, and creativity. Its unchecked creation is deeply unsettling to experts because it risks loss of human control, autonomous self-improvement beyond oversight, and even existential threats if something goes wrong.
This call stands out because, rather than being another industry letter urging caution, it is a politically diverse, high-profile appeal demanding an outright prohibition on so-called “superintelligent” machines until safety and consensus are assured.
What’s the latest
A new statement coordinated by the non-profit Future of Life Institute (FLI) brings together over 800 signatories, including scientists, politicians, tech leaders, entertainers and faith leaders, all calling for a prohibition on developing superintelligent AI until there is “broad scientific consensus that it can be done safely and controllably”.
The letter urges that development should not proceed until there are reliable safety mechanisms and societal agreement. Among the high-profile names are AI pioneers Geoffrey Hinton and Yoshua Bengio, tech leaders such as Steve Wozniak, former Irish President Mary Robinson, royal figures Prince Harry and Meghan Markle, religious advisors, and political figures like Susan Rice.
Accompanying the letter is polling data reportedly commissioned by FLI, showing only about 5 per cent of Americans favour development of unregulated superintelligence, while roughly three-quarters support robust regulation.
What is ‘superintelligence’
The term “superintelligence” refers to a stage of AI at which systems outperform humans in nearly all cognitive domains, including learning new tasks, reasoning about complex problems, planning over the long term, and being creative. This goes beyond today’s “narrow AI” systems, such as a chatbot or an autonomous vehicle, which excel at specific tasks but are limited outside them.
Experts often use the term together with Artificial General Intelligence (AGI), a system capable of understanding or learning any intellectual task a human can, with superintelligence being the next step, where such systems significantly exceed human capability.
What are the pros and cons of superintelligence
Proponents of advanced AI development argue it could help solve major problems, such as curing diseases, mitigating climate change and boosting productivity. However, the risks associated with superintelligence include human operators losing control, self-improving systems acting in ways humans cannot predict and, if safety mechanisms fail, consequences that could even be existential. Critics warn that regulation and safety frameworks remain far behind the pace of development.
Why the call for a ban on superintelligence now
There have been earlier appeals to pause frontier AI training, but this call differs. In March 2023, FLI and others issued an open letter, “Pause Giant AI Experiments”, urging a six-month pause on training AI systems more powerful than GPT-4.
In contrast, the current statement calls for a blanket ban on such systems until they are proven safe. The organisers argue there is a race among major tech firms, including OpenAI, Google, Meta Platforms, Anthropic and xAI, to build AGI and beyond, and that this competition may override safety.
Global response and who’s behind it
What makes this movement notable is the broad ideological spectrum it has managed to bring together. Steve Bannon on one side and Meghan Markle and Prince Harry on the other would ordinarily be extremely unlikely political bedfellows; here they are joined by religious leaders, Nobel laureates, tech pioneers and former high-ranking security officials.
Chinese signatories reportedly include figures such as Andrew Yao and Ya-Qin Zhang, underlining that concern about losing control of advanced AI crosses national and geopolitical lines.
On the policy side, signs of government interest are growing. US national-security figures such as Susan Rice and Admiral Mike Mullen are among the signatories or are referenced in the effort, pointing to increasing recognition of advanced AI as a national-security issue.
Regulatory backdrop
In the European Union, the EU AI Act remains the most advanced legislative framework for AI governance, though technology-industry pushback is strong.
In the United States, regulation is fragmented. Some states, including California, Utah and Texas, have passed AI-specific laws, but federal efforts remain stalled. A proposed ten-year moratorium on state-level AI regulation, inserted into the federal budget bill, was ultimately dropped, highlighting policy divides.
What it means
This campaign marks a potential turning point in the public debate over AI safety. Until now, the discussion has been largely confined to technical circles and regulatory grey zones. With a coalition spanning tech insiders, celebrities and political heavyweights, the issue of AI superintelligence is entering the mainstream as a shared human risk rather than just a tech-industry concern.
It brings new political weight and media visibility to the question of how we govern advanced AI and suggests that fears of loss of control and autonomous machine power may finally mobilise policymakers to act faster than before.