Industry leaders, including Sam Altman and Demis Hassabis, have come forward to emphasise that we must prevent the “risk of extinction from AI.”
AI development has surged recently. ChatGPT’s launch in late 2022, followed by the GPT-4 large language model, has drawn industry funding and focus toward artificial intelligence. Now, you can’t go anywhere without seeing the words, and it doesn’t look like that’s going to change anytime soon. Nvidia has just announced its DGX GH200 supercomputers, sure to train ever-larger models, while Microsoft readies its own.
AI will shape the future, and whether or not we will exist in that future is a question technology leaders can’t really answer.
According to this statement signed by AI scientists and other notable figures: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
AI’s threat has not gone unnoticed. Sam Altman (OpenAI CEO) has notably called for tighter regulation of the technology, and has also published a statement that “given the possibility of existential risk, we can’t just be reactive,” suggesting that action needs to be taken now.
Image credited to safe.ai