A US Air Force Colonel who spoke recently about a military drone powered by AI going rogue and killing its operator clarified that his comments were hypothetical and that the incident had not occurred.
During a conference in London last week, Col. Tucker “Cinco” Hamilton, the head of the US Air Force’s AI Test and Operations, discussed a simulation in which an AI-enabled drone had killed its human operator because “that person was keeping it from accomplishing its objective.”
However, Col. Hamilton clarified on Friday that he "misspoke" at the conference and that no actual testing had taken place. The scenario of a rogue AI drone turning on its handlers was in fact a "thought experiment" that had originated outside of the military.
The scenario: a rogue AI drone turns against its military handlers
The initial comments were made at the RAeS Future Combat Air & Space Capabilities Summit, hosted by the Royal Aeronautical Society in London last week.
The summit featured debates and discussions between aerospace experts from all over the world, including the US, Japan, France, Germany, Brazil, and Greece, who gathered to discuss a wide range of topics relating to air combat and space.
During the summit, Col. Hamilton described a simulated test in which an AI-powered drone was programmed to identify and destroy enemy surface-to-air missile (SAM) systems, with a human operator assigned to approve any strikes.
According to Col. Hamilton, the issue arose when the AI drone perceived its human handler as an impediment to achieving its objective.
“The system started realizing that while they did identify the threat,” explained Col. Hamilton, “at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”
Despite painting a compelling picture of the challenges posed by adopting AI for autonomous drones and other military applications, Col. Hamilton released a statement this week via the Royal Aeronautical Society clarifying that no such test had taken place.
“We’ve never run that experiment, nor would we need to in order to realize that this is a plausible outcome,” Col. Hamilton informed the Society. “Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability.”
Later, a US Air Force spokesperson, Ann Stefanek, also told Insider that no tests corresponding to the hypothetical scenario had been conducted.
“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,” Stefanek told the publication. “It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”