A coalition of tech industry leaders and experts is pressing global policymakers to prioritize the reduction of existential risks posed by artificial intelligence (AI), equating these risks with those associated with pandemics and nuclear warfare.
In a statement issued by the Center for AI Safety (CAIS), a nonprofit organization, more than 350 signatories warned of the potential threats that unregulated AI could pose. The group includes leading figures in the tech world, among them Sam Altman, CEO of OpenAI, the company behind the ChatGPT chatbot; executives from major companies such as Microsoft and Google; and the CEOs of the AI firms DeepMind and Anthropic.
Renowned academics Geoffrey Hinton and Yoshua Bengio, two of the three “godfathers of AI” and recipients of the 2018 Turing Award for their groundbreaking work on deep learning, also endorsed the statement. The statement, however, offered no specific details about the risks it warns of, its stated purpose being to spark a discussion about the dangers inherent in the technology.
CAIS singled out Meta for its absence from the signatories; Yann LeCun, the third “godfather of AI,” works there. The call to action coincides with ongoing discussions of AI regulation at the US-EU Trade and Technology Council meeting in Sweden. Elon Musk and a group of AI experts and executives first raised concerns about the societal risks of AI in April. Concerns have also been raised that algorithms could be trained on biased, discriminatory or politically charged content.