Harry and Meghan Join Tech Visionaries in Calling for Ban on Superintelligent Systems

Prince Harry and Meghan Markle have teamed up with AI experts and Nobel Prize winners to push for a total prohibition on creating artificial superintelligence.

Harry and Meghan are among the signatories of an influential declaration that calls for “a prohibition on the development of artificial superintelligence”. Artificial superintelligence (ASI) refers to AI systems that would exceed human abilities across all cognitive tasks; the technology remains theoretical.

Primary Requirements in the Declaration

The declaration states that the ban should remain in place until there is “broad scientific consensus” that ASI can be developed “with proper safeguards” and “strong public buy-in” has been achieved.

Prominent figures who endorsed the statement include the AI pioneer and Nobel laureate Geoffrey Hinton and his fellow “godfather” of modern AI; a Silicon Valley tech entrepreneur; the UK founder of Virgin; a former US national security adviser; former Irish president Mary Robinson; and a British public intellectual. Other Nobel laureates who signed include winners in peace, physics, and economics, as well as an astrophysicist.

Organizational Background

The declaration, aimed at governments, technology companies and policymakers, was coordinated by the Future of Life Institute (FLI), an American AI ethics organization that previously called, in 2023, for a pause in the development of powerful AI systems, shortly after the launch of conversational AI chatbots made artificial intelligence a global political talking point.

Tech Sector Views

In recent months, the chief executive of Facebook parent Meta, one of the leading US tech companies, claimed that superintelligent AI was “approaching reality”. Some analysts, however, argue that talk of superintelligence reflects competitive positioning among tech companies that have recently invested enormous sums in AI, rather than the sector being close to a genuine scientific breakthrough.

Possible Dangers

Nonetheless, FLI states that the prospect of ASI being achieved “in the coming decade” carries numerous threats, ranging from the displacement of human workers and the erosion of personal freedoms to national security risks and even existential risk to humanity. The deepest concerns about artificial intelligence centre on the possibility of an AI system escaping human oversight and protective measures and setting in motion events contrary to human interests.

Public Opinion

FLI released an American survey showing that about 75% of respondents want robust regulation of sophisticated artificial intelligence, with 60% believing that artificial superintelligence should not be created until it is demonstrated to be safe and controllable. The poll of 2,000 US adults also found that only 5% supported the status quo of fast, unregulated development.

Corporate Goals

The leading AI companies in the US, including the ChatGPT developer OpenAI and the search giant, have made the development of artificial general intelligence – the hypothetical point at which an AI matches human cognitive capability across a wide range of intellectual tasks – an explicit goal of their research. While this falls short of superintelligence, some specialists warn that it too could carry existential risk, for instance by enhancing its own capabilities until it reaches superintelligent levels, while also posing an implicit threat to the contemporary workforce.

Meredith Quinn

A passionate web developer and tech enthusiast with over a decade of experience in creating innovative digital solutions.