Opinion | We Need a Manhattan Project for AI Safety
Worries about artificial intelligence have suddenly seized Washington: The White House just hauled in a roster of tech CEOs to press them on the safety of their new AI platforms, and Congress is scrambling for ways to regulate a possibly disruptive and risky new technology.
There are many immediate concerns about the latest generation of AI tools — they could accelerate misinformation, job displacement and hidden bias. But one concern hovers over the rest, both for its scale and the difficulty of fixing it: the idea that a super-intelligent machine might quickly start working against its human creators.
It sounds fanciful, but many experts on global risk believe that a powerful, uncontrolled AI is the single most likely way humanity could wipe itself out.