One of many concerns about accelerating AI development is the risk it poses to human life. The worry is real enough that numerous leading minds in the field have warned against it: More than 300 AI researchers and industry leaders recently issued a statement asking someone (except them, apparently) to step in and do something before humanity faces—and I quote—“extinction.” Skynet scenarios are usually the first thing that leaps to mind when the subject comes up, thanks to the popularity of blockbuster Hollywood films. Most experts, though, seem to think the greater danger lies in, as professor Ryan Calo of the University of Washington School of Law put it, AI’s role in “accelerating existing trends of wealth and income inequality, lack of integrity in information, and exploiting natural resources.”

But a Skynet-style apocalypse may be more plausible than many assumed. During a presentation at the Royal Aeronautical Society’s recent Future Combat Air and Space Capabilities Summit, Col Tucker “Cinco” Hamilton, commander of the 96th Test Wing’s Operations Group and the US Air Force’s Chief of AI Test and Operations, warned against over-reliance on AI in combat operations because sometimes, no matter how careful you are, machines can learn the wrong lessons.
