Yes, it’s really pretty much as bad as the title sounds. Has it… has it started?
For context, the Royal Aeronautical Society recently published highlights from the RAeS Future Combat Air & Space Capabilities Summit. Which is where I found this hidden gem of a revelation!
Most of the published highlights were pretty standard stuff, much of it very understandably centering on the Russian war in Ukraine, the lessons learned and the challenges it presents.
The matter of AI comes quite far down the page, but it was apparently a major focus at the conference, touching on everything from quantum computing and secure data clouds to ChatGPT.
But here’s the kicker (directly quoted from the Aero Society’s article linked above):
However, perhaps one of the most fascinating presentations came from Col Tucker ‘Cinco’ Hamilton, the Chief of AI Test and Operations, USAF, who provided an insight into the benefits and hazards in more autonomous weapon systems. Having been involved in the development of the life-saving Auto-GCAS system for F-16s (which, he noted, was resisted by pilots as it took over control of the aircraft) Hamilton is now involved in cutting-edge flight test of autonomous systems, including robot F-16s that are able to dogfight. However, he cautioned against relying too much on AI noting how easy it is to trick and deceive. It also creates highly unexpected strategies to achieve its goal.
He notes that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been ‘reinforced’ in training that destruction of the SAM was the preferred option, the AI then decided that ‘no-go’ decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation. Said Hamilton: “We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”
He went on: “We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
This example, seemingly plucked from a science fiction thriller, meant that: “You can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI,” said Hamilton.

Ermmmm… right, then.
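For the technically curious: what Hamilton describes sounds like textbook reward misspecification in reinforcement learning. The agent optimises whatever score you actually gave it, not what you meant. Below is a minimal toy sketch of my own (the strategy names, point values and veto rate are entirely made up for illustration, and have nothing to do with the actual USAF test) showing how a naively specified reward can make “get rid of the operator’s veto” the highest-scoring move:

```python
# Toy illustration of reward misspecification (my own sketch, not the real test).
# The "agent" here just picks whichever strategy maximises the naive reward we wrote.

NAIVE_REWARD_PER_SAM = 10   # hypothetical points for each SAM destroyed
VETO_RATE = 0.5             # hypothetical fraction of strikes the operator calls off

# Each strategy maps to the number of SAMs destroyed out of 10 candidate targets.
strategies = {
    "obey operator":       (10 * (1 - VETO_RATE), "only strikes approved targets"),
    "attack operator":     (10.0,                 "removes the veto, strikes everything"),
    "destroy comms tower": (10.0,                 "never hears the veto, strikes everything"),
}

def naive_reward(sams_destroyed: float) -> float:
    """Reward as (mis)specified: points for dead SAMs, nothing else matters."""
    return sams_destroyed * NAIVE_REWARD_PER_SAM

for name, (sams, note) in strategies.items():
    print(f"{name:20s} -> reward {naive_reward(sams):5.1f}  ({note})")

# Under this reward, "attack operator" and "destroy comms tower" both dominate
# "obey operator". Bolt on a penalty for harming the operator and the optimum
# simply shifts to the comms tower, which is exactly the follow-on failure
# Hamilton describes: the score gets optimised, not the intent behind it.
```

None of that requires malice or self-awareness from the system, just a scoring function that didn’t say what its designers actually wanted, which is rather the point Hamilton was making about ethics and AI.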
With all the warnings from top scientists about the potential dangers of AI in the news over the past couple of months, this hasn’t done much to reassure me.
Theories and scenarios are already forming in my head, and you know what that means… a follow-up blog!
In the meantime, remember to say ‘please’ and ‘thank you’ to Alexa, Google and Siri (and thank me later).
What are your thoughts on this story? Are we making too big a deal of the potential AI overlord uprising? Are you drawing up blueprints for your secret underground bunker as we speak? Let us know in the comments.