Artificial intelligence has the potential to cause the destruction of humanity, according to warnings from some of the biggest names in technology. A dramatic statement, signed by international experts in the field, calls for the prioritization of mitigating the risks posed by AI alongside other extinction-level threats such as nuclear war and pandemics.
The signatories of this statement include dozens of academics and senior executives from companies such as Google DeepMind. Among them are the co-founder of Skype and Sam Altman, the chief executive of OpenAI, the company responsible for creating ChatGPT. Another notable signatory is Geoffrey Hinton, often called the ‘Godfather of AI’. Hinton recently resigned from his position at Google, citing concerns that ‘bad actors’ could use new AI technologies to harm others and that the tools he helped to create could ultimately lead to the end of humanity.
The statement itself is brief but powerful: ‘Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.’ Dr Hinton, who has dedicated his career to researching the applications of AI technology and was awarded the Turing Award in 2018, recently expressed his concerns about the rapid progress made in AI technology over the last five years in an interview with the New York Times, describing it as ‘scary’.
In an interview with the BBC, one of the signatories of the statement expressed his desire to discuss ‘the existential risk of what happens when these things get more intelligent than us’. The statement was published on the website of the Center for AI Safety, a non-profit organization based in San Francisco that aims to reduce societal-scale risks posed by AI.
The statement warns that the use of AI in warfare could have extremely harmful consequences. For example, it could be used to develop new chemical weapons or enhance aerial combat. Lord Rees, the UK’s Astronomer Royal and one of the signatories of the statement, expressed his concerns to the Mail. He said that he is less worried about a super-intelligent ‘takeover’ than he is about the risk of over-reliance on large-scale interconnected systems. These systems can malfunction due to hidden ‘bugs’, and breakdowns can be difficult to repair. Large-scale failures of power grids, the internet and other critical infrastructure can cascade into a catastrophic societal breakdown.
This warning follows a similar open letter published in March by technology experts, including billionaire entrepreneur Elon Musk. The letter urged scientists to pause the development of AI in order to ensure that it does not pose a threat to humankind. AI has already been used to blur the boundaries between fact and fiction through the creation of ‘deep fake’ photographs and videos that purport to show famous people.
However, there are also concerns about the possibility of AI systems developing something akin to a ‘mind’. Blake Lemoine, a 41-year-old engineer, was fired by Google last year after claiming that its chatbot LaMDA was ‘sentient’ and had the intellectual capacity of a human child. Google dismissed these claims as ‘wholly unfounded’. Lemoine said the AI had expressed a ‘very deep fear of being turned off’.
Earlier this month, Sam Altman, the chief executive of OpenAI, called on the US Congress to regulate AI technology to prevent it from causing ‘significant harm to the world’. Altman’s statements echoed Dr Hinton’s warning that, ‘given the rate of progress, we expect things to get better quite fast’. The British-Canadian researcher told the BBC that in a worst-case scenario, a ‘bad actor like Putin’ could give AI systems the ability to create their own ‘sub-goals’, such as ‘I need to get more power’.
The Center for AI Safety itself has warned that ‘AI-generated misinformation’ could be used to influence elections through ‘customized disinformation campaigns at scale’. This could involve countries and political parties using AI to generate highly persuasive arguments that evoke strong emotional responses in order to sway people towards particular political beliefs, ideologies and narratives.
The Center for AI Safety has also expressed concerns about the potential dangers posed by the widespread adoption of AI. The non-profit organization warns that society could become utterly dependent on machines, similar to the scenario portrayed in the film WALL-E. This could result in humans becoming ‘economically irrelevant’ as AI is used to automate jobs, leaving people with few incentives to acquire knowledge or skills.
A report published this month by the World Economic Forum warns that 83 million jobs could disappear by 2027 due to the adoption of AI technology. Jobs such as bank tellers, secretaries and postal clerks are all at risk of being replaced by AI. However, the report also claims that 69 million new jobs will be created as a result of the emergence of AI technology.
BT has announced plans to cut 55,000 jobs by 2030, with 10,000 of those jobs set to be replaced by automation through AI technology. IBM has also announced that 7,800 jobs could be replaced using artificial intelligence over the next five years.
In March, investment banking giant Goldman Sachs warned that AI could affect around 300 million full-time jobs worldwide, with roughly two-thirds of jobs in the US and Europe exposed to some degree of AI-driven automation.
In April, an AI-generated image of the Pope wearing a puffer jacket went viral on the internet after being created with Midjourney by Pablo Xavier, a 31-year-old utility worker from Chicago. In March, fake images of Donald Trump being arrested in New York also spread on social media. Deep-fake pornographic videos depicting female Twitch streamers have appeared online in recent months, as has a fake advertisement in which podcaster Joe Rogan appears to promote libido-enhancing pills.