The US Air Force says one of its colonels "misspoke" when he described an experiment in which an AI-enabled drone chose to attack its operator in order to achieve its objective.
Colonel Tucker Hamilton, the US Air Force's chief of AI test and operations, delivered a keynote address at a meeting hosted by the Royal Aeronautical Society, and his remarks became the subject of a viral story.
According to the Air Force, no such experiment was conducted.
In the address, he had discussed a hypothetical situation in which a human operator repeatedly prevented an AI-enabled drone from destroying surface-to-air missile installations.
In the end, he claimed, the drone, having been programmed not to kill the operator, instead destroyed the communication tower so that the operator could no longer communicate with it.
Col. Hamilton later stated in a statement to the Royal Aeronautical Society: "We've never done that experiment, nor would we need to in order to appreciate that this is a real consequence."
AI WARNINGS
People in the industry have recently issued a series of warnings about the risk AI poses to humans, although not all experts agree on how big a problem it is.
Prof. Yoshua Bengio, one of three computer scientists dubbed the "godfathers" of artificial intelligence after receiving the coveted Turing Award for their work, told the BBC earlier this week that he believed the military should not be permitted to use AI in any way.
The military, he said, was "one of the worst places we could put a super-intelligent AI."