A new study by Google DeepMind predicts that artificial general intelligence (AGI), AI with human-like intelligence, could emerge by 2030 and pose a serious threat to humanity.
A study released by Google DeepMind on Sunday warns of “severe harm” that could arise from artificial general intelligence (AGI). The study highlights existential risks without detailing how these scenarios might unfold.
Co-authored by Shane Legg from DeepMind, the paper avoids doomsday predictions and urges Google and other AI companies to address potential threats such as misuse, misalignment, mistakes, and structural risks. The study emphasizes that “society must decide what’s severe,” advocating for collective risk management.
In February, Demis Hassabis, CEO of DeepMind, predicted that artificial general intelligence (AGI) could be achieved within five to ten years and stressed the urgency of preparing for it. He proposed establishing a global research hub for AGI, similar to CERN, to ensure its safe development, paired with a watchdog organization like the International Atomic Energy Agency (IAEA) and a UN-style body for oversight.
The study’s mitigation plan focuses on preventing the misuse of AGI, particularly the risk of weaponization. As AGI moves beyond task-specific AI towards human-like versatility, its vast potential and associated risks become increasingly significant.
Unlike narrow AI, artificial general intelligence (AGI) can learn and adapt across many domains, resembling human intellect. DeepMind's warning underscores the urgency of ensuring its power is harnessed safely before 2030.