Generative AI, often talked up by doomsayers as a world-ending job bomb, definitely has its limitations. Even the people who create these large language models (LLMs) have said that we're fast approaching the limits of what the technology can do.
OpenAI's CEO, Sam Altman, stated very clearly at an MIT event in April 2023 that further progress won't come from making models bigger. "I think we're at the end of the era where it's going to be these, like, giant, giant models […] We'll make them better in other ways."
True AI, in the sense of self-aware machines that can think and learn like Skynet, is most likely decades away.
In fact, according to this expert survey on progress in AI, which polled 4,271 AI researchers and experts, we have a 50% chance of reaching Artificial General Intelligence by 2059.
In other words, this generation of so-called AI isn't capable of being the world-ending, civilization-destroying threat it was touted to be.
That’s not to say it’s not going to be, and hasn’t already been, disruptive. But the industries that we were sure were going to be wiped out by the end of 2023 probably have some legs left in them.
And that makes sense. The reason why these generative LLMs can do these seemingly magical feats is because they’re trained on a wide pool of data. Human data at that.
And there is only so much of that floating around.
While it's certainly tempting to think of the internet as an inexhaustible supply of human-generated content, the fact of the matter is that it isn't. And LLMs like ChatGPT are already having trouble scraping together the data they need to train on.
Market and job disruptor?
ChatGPT has proven to be wildly popular, and since about March of 2023, discussions around what it can do have been nearly inescapable. I saw them all over Twitter and LinkedIn, whether I wanted to or not.
The applications for it are seemingly only limited by the imagination of the user.
Variously, it’s been used to create months of marketing copy in seconds, helped students cheat by writing their essays for them, and in a more worrying turn, given cyber criminals a brand new way to give the humble phishing email a dangerous new lease on life.
Whereas before, you could spot a phishing email by its bad grammar and terrible spelling, AI now churns out grammatically correct, perfectly legible English. And because AI can even do basic coding for you, those phishing emails can now link to a perfectly passable spoofed landing page to catch an unfortunate person's details.
AI chatbots have, in a very short space of time, flooded the market with content. And they've done it so successfully that companies are jumping on the AI bandwagon.
BuzzFeed, once one of the most powerful and influential companies in digital media, has struggled in recent times. In January of 2023, it took the incredibly bold step of laying off 12% of its staff, replacing them with OpenAI and DALL-E tools.
They aren't the only ones. In July 2023, Indian start-up Dukaan replaced 90% of its staff with an AI-powered chatbot called Lina, in a move branded 'heartless' and 'insensitive'. I can't say I disagree with that assessment, given CEO Suumit Shah's gleeful boasting about it on the platform formerly known as Twitter:
BuzzFeed and Dukaan won't be the last companies to get rid of their human staff in favour of this generation of AI. And that's a move that might prove short-sighted and premature.
Problem the first: Everything looks the same around here…
Inevitably, if a large proportion of the population uses AI tools to generate content, that content is going to look, sound, and read the same as everything else.
And that's because these LLMs are all drawing from the same pool of training data. As we discussed earlier, these 'AI' chatbots aren't true AI. They can't be truly creative, or rewrite the same content so that it reads as if it were written by a completely different writer, or in a different tone of voice.
If you're looking to have your content read more widely than anything your competitors produce, that's a problem.
Google is still the world’s most popular search engine, and it isn’t even close. It’s so ubiquitous that in everyday parlance, if you need to know a particular piece of information, (such as the airspeed velocity of an unladen swallow) you might be told to ‘Google it.’
And importantly, Google's algorithms reward uniqueness, actively search for it, and punish samey, unhelpful content. They do that by dropping it from the first page of search engine results pages (SERPs).
Basically, you’ll have trouble getting your content to stand out from the crowd if you’re using the same tools as the crowd to write it.
Problem the second: So, you’re a cannibal…
So, that's problem number one: flooding the market with AI-produced content makes everything look the same.
And if, say, you've just fired your writing staff in favour of AI, that's a decision you may be regretting. Because now your content doesn't stand out and isn't ranking on page one of the SERPs.
But there's also the unintended consequence of cannibalism.
There was only so much human-produced content available to generative AI models. ChatGPT will itself tell you that it only has working knowledge of the world up to 2021.
While this does mean there are two extra years of training data ChatGPT has yet to get its hands on, in the meantime it has to make do with what's available. And what's available is human data up to 2021, plus AI-generated content.
A very obvious question arises: What happens when AI starts to train on content written by AI, instead of on content written by humans?
A group of researchers from the UK and Canada decided to take a more in-depth look at the problem, and recently published a paper on their work on the open-access preprint server arXiv.
Spoiler: It starts getting worse. Very quickly.
"Over time, mistakes in generated data compound," wrote one of the paper's lead authors, Ilia Shumailov, in an email to VentureBeat, "…and ultimately force models that learn from generated data to misperceive reality even further."
Basically, the more an AI model is trained on AI-generated data, the worse it performs over time. It produces more errors in the content and responses it generates. And this 'model collapse,' as the authors call it, is both inevitable and catastrophic.
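You can see the mechanism in miniature without a language model at all. The following is a toy sketch (my own illustration, not the paper's actual experiment): repeatedly fit a simple Gaussian distribution to samples drawn from the previous generation's fitted Gaussian. Each generation's small estimation errors compound, and the learned distribution steadily degenerates.

```python
# Toy illustration of compounding error across model "generations".
# Stand-in for model collapse: a Gaussian fit plays the role of the model,
# and each generation trains only on the previous generation's output.
import random
import statistics

random.seed(42)

mu, sigma = 0.0, 1.0            # the "real" (human) data distribution
n_samples, generations = 10, 300

history = [sigma]
for gen in range(generations):
    # Draw "training data" from the current model...
    data = [random.gauss(mu, sigma) for _ in range(n_samples)]
    # ...then refit the model to that purely synthetic data.
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    history.append(sigma)

print(f"initial spread: {history[0]:.3f}")
print(f"final spread:   {history[-1]:.3g}")
# The fitted spread shrinks toward zero: later generations "forget" the
# tails of the original distribution, one resampling step at a time.
```

Real LLMs are vastly more complex than a two-parameter Gaussian, but the paper's argument is that the same feedback loop, sampling from a model and retraining on those samples, loses information in much the same way.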
So, do we abandon AI chatbots?
Just as it’s somewhat premature to fire your writing team in favour of AI, it’s also premature to swear off generative AI. Like all technological advancements, it’s already here. You can’t uninvent the internal combustion engine, after all.
The people who will be the most successful will be the ones who don't view it with suspicion, but instead learn to adapt it and incorporate it into their businesses.
For example, the same study that predicts catastrophic model collapse of AI trained on AI points out that you can avoid this problem, though at great time and expense, by either keeping the data the model trains on uncorrupted, or by injecting fresh, human-generated data.
And while Google is still top dog in the search engine yard, it's probably not a good idea to hand your entire content creation over to what some people unkindly call "glorified autocorrect."
And like any new and powerful technology, it will disrupt the market. One of the ways I’ve personally noticed is that it appears to have eliminated busy work in content production.
When I first started out in content writing, I did a lot of busy work for content mills and freelance platforms like Copify, Upwork, and Fiverr.
At the time, I hated it. The type of writing you could do didn't exactly set my world on fire, and it didn't pay very well. There are only so many 200-word product descriptions you can write without wanting to claw your eyes out.
But content mills were a great way to get started, their strict word counts helped me build discipline, and I never had a shortage of work. It certainly helped to pay the bills.
On a whim, I checked one of my old content mill accounts. Where once I’d have six or seven pages with hundreds of jobs I could do, there were none. Zero. Zip. Nada.
AI chatbots will undoubtedly eliminate the sort of busy work I did to get into copywriting, so I feel bad for anyone looking to enter the field. The barrier to entry has now gotten higher, and it was an intimidating field to break into in the first place.
However, there will probably be opportunities the tech will throw up that we can't yet predict. I've already seen job postings on Indeed looking for people to work alongside AI to create content.
Final thoughts
I rather suspect that in the future, being able to ask AI chatbots the right questions, or 'prompts,' to create the best AI-generated content will be a highly marketable skill.
Is AI coming for your writing jobs? It’s too early to tell, but probably not. Still, it can’t hurt to future-proof yourself.
Get to know your AI chatbot, learn how to ask it questions, and get comfortable using it to assist your writing.
And if you haven't already, develop your editing skills. In a world flooded with content, that skill will be worth its weight in gold.
Find out what Geek Native readers say about this in the comments below. You're welcome to add your own.