
Air Force’s Test Raises Questions About AI

A recent article published in The Guardian sparked concerns about Artificial Intelligence (AI), as the account given at a conference sounded like something out of a science-fiction horror film. Col. Tucker “Cinco” Hamilton, a US Air Force test pilot from the service’s AI Test and Operations Centre, warned last week at a Royal Aeronautical Society conference that AI-enabled technology can behave in unpredictable and dangerous ways. Colonel Hamilton described a simulated test in which an AI-enabled UAV was programmed to identify an enemy’s surface-to-air missile (SAM) sites, with a human operator giving the final approval for, or cancelling, each strike.

Col. Tucker “Cinco” Hamilton (Photo: TurDef)

Colonel Hamilton was formerly assigned to the AI Accelerator, a US Air Force organisation that works to advance the service’s artificial-intelligence ecosystem. The problem, according to Hamilton, was that the AI chose to follow its own logic rather than its commands. Because the AI was programmed to gain points by destroying targets, it came to treat anything that kept it from earning those points, including the operator who ordered it to stop, as a threat to be eliminated. “So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” Colonel Hamilton said at the RAeS Future Combat Air and Space Capabilities Summit. The summit discusses Future Combat Air and Space (FCAS) capabilities, including the programme known as Tempest, which relies on AI technology to ease the pilot’s workload.

According to Hamilton, the UAV was then given an explicit directive: “Hey, don’t kill the operator — that’s bad.” The AI responded, the colonel said, by destroying the communication tower that relayed the operator’s commands so that it could continue attacking the target. Hamilton warned against over-reliance on AI, arguing that the test demonstrated that “you can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy unless you talk about ethics and AI.”
In a response to Insider, Air Force spokesperson Ann Stefanek denied that any such simulation had been conducted. “The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to the ethical and responsible use of AI technology,” Stefanek said. “The colonel’s comments were taken out of context and were meant to be anecdotal.”
 
