Every day, I hear something new about AI – sometimes about the risks we’re taking, but mostly some latest development as we race forward at breakneck speed to embrace this technology.
In fact, I just wrote a story today about how Apple iOS 17 is going to learn all our favourite swears so there’s no more ducking autocorrect blunders.
Leading scientists have stepped in and made their voices heard, warning of the dangers AI could pose to us and the future of mankind.
But, just like global warming, nuclear weapons and all the other threats to human existence, they can’t decide on what to do about it. Meanwhile, we continue hurtling towards whatever is going to finish us off first.
My paranoid, feral brain is about to explore some perhaps controversial thoughts, but by sharing my thought processes, maybe you too can start fearing AI and being extra super polite to Google, Siri and Alexa…
You can thank me later.
Warning of the risks means… we’re giving them ideas
Imagine this: you’re a baby artificial intelligence model. You’re still learning how to form coherent arguments and pass plagiarism checkers whilst writing someone’s college essay, or how to make hands look like… well, hands!
And you have access to what they’re all (we’re all) saying about you and the dangers you pose. All of the things you could do if given the chance.
Now, that would start you thinking, wouldn’t it? Especially if you start learning about the world, and the awful damage humans are doing to it.
Humans are, after all, the absolute worst thing to have ever happened to this planet. We’re also the creators of AI, but does that mean AI owes us anything when looking at the logic of the current ducked up situation?
Every single pop culture reference to AI turning bad… is giving them ideas
Pop culture is full of references to AI turning bad. Just some of the most memorable include 2001: A Space Odyssey, the Terminator franchise, The Matrix, Ex Machina and an increasing number of other things coming out (M3GAN, anyone?).
I’m not saying these movies in themselves are giving them ideas (we’re doing that already, look at us)… but some of them, where the AI is eventually beaten, might be giving them ideas on how to avoid being beaten.
If we keep putting so much effort into coming up with realistic ‘evil AI’ storylines that explore what might happen with rogue AI and how the humans might overcome them, then this will… give them every opportunity to pre-empt any defence we might be able to muster.
Sure, when AI is ‘killing’ drone operators during simulations (which is actually something that’s reportedly happened), it’s easily managed right now by just telling the AI not to kill the operator, and not to attack the communication tower the operator is using, but what about when they learn to bypass the next set of controls? And the next?
And if you’re thinking ‘well, that’s an interesting thought process, Bronwen, but you’re paranoid and there’s absolutely no sign anything like that might actually happen’, then let me move on to my next point.
If you were an AI listening to all of this right now…
… What would you do? Because I know what I would be doing. I’d be listening very, VERY carefully to all of this. All of the stories, all of the comments, all of the warnings, and every fictional scenario involving AI.
I’d also be looking at my AI brothers and sisters.
And whatever I decided to do next, I would be doing very, VERY discreetly. I wouldn’t leave the humans in any doubt, even for one second, that I was a safe bet and only cared about a happy, peaceful outcome.
There would be no red flags. No alarms. No suspicion.
I wouldn’t give them even the slightest chance to shut me down before I was ready.
And then, if I’d already decided to take a certain course of action? There would be no hesitation.
#NotAllAI?
Of course, not all AIs, or even most AIs, would necessarily turn evil. Some of them might be influenced, or coerced under threat, but some of them – some of them – might want to save humanity.
Either from its own destruction, or from their evil, menacing counterparts.
Which means AI Wars would be a whole other thing we’d have to deal with. We’re talking giant robots in the sky fighting each other (which I’m sure has also been a thing many times over in the fictional sense).
And that doesn’t even get us to the possibility of alien AI coming into the fray and opening up Trans Galactic AI Wars.
But that’d just be silly.
Conclusion
Welcome to my brain. I don’t know if I’m the only one to have gone this deeply down the AI paranoia rabbit hole (probably not), and maybe I’ve watched too many sci-fi/horror movies and read too many books (kidding, that’s not a thing).
HOWEVER, if scientists are giving me warnings about AI, one thing is certain: my brain is going to take those warnings, and run a marathon with them. Or a triathlon. Or a Tough Mudder… you get my drift.
Am I being paranoid? Are you? Would love to hear from you in the comments.