How to spot AI-generated text

Nick Harland, February 2024

How can you tell if a piece of text was written by AI? It's not easy, but here are six tips to help you spot it.

1. Jargon, long words and fluff

The best and easiest way to spot AI-generated text is to look for fluff - a classic sign of bad copywriting. Pop any prompt into ChatGPT and it'll likely serve up plenty of jargon (unless you expressly tell it not to). When I asked it how to detect AI-generated copy, here are some snippets it came up with:

Despite improvements in natural language processing, AI-generated text may still have a slightly unnatural flow or rhythm compared to human writing. This could manifest as awkward transitions between ideas or abrupt changes in tone.
This could indicate that the model is drawing from its training data without fully understanding the relevance or appropriateness of the information.

And when I asked it about the future of copywriting, here's one sentence that caught my eye:

The future of copywriting holds immense potential driven by advancements in technology and changes in consumer behavior.

Manifest? Appropriateness? Natural language processing? It's all a little heavy and unnatural. Real humans don't talk (or type) like this. Even if they tried to, their humanness would seep into the text eventually. With ChatGPT, it never does. Because it's not human.

2. It holds no opinions on anything

Tools like ChatGPT are steadfastly neutral and never hold strong opinions on anything. One of the tool's underlying principles is to always give you both sides of the argument, to the point where it gets slightly maddening. Even when you ask it directly, ChatGPT won't give you an opinion on anything. Because it's not human. Let's ask it what it thinks about Switzerland (a random choice of country that was in no way influenced by later sentences in this article).

As an AI language model, I don't have personal opinions or beliefs.

Sigh. But to be fair, it does then go on to list plenty of benefits of living in Switzerland. It finishes with a suitably neutral conclusion:

Overall, while Switzerland enjoys many advantages as a desirable place to live, it also faces various challenges that require attention and action from policymakers, businesses, and civil society to ensure continued prosperity and well-being for all residents.

And what are those challenges? Well, it does tell me quite a few, but it never makes a definitive judgment either way. Of course it's never going to, because, as it explained earlier (to be fair), it doesn't have personal opinions or beliefs. Because it's not human.

3. Words and phrases are repeated, and repeated

The longer you use ChatGPT, the more you begin to notice certain words and phrases popping up again, and again, and again. And they're normally the same type of jargon-y words you might've already noticed. Leverage. Integrate. Manifest. Humans have a natural tendency to try to vary their writing - not just in the words they use, but also in things like sentence length and text structure. ChatGPT can't do that consistently, and eventually it always falls back on its bank of jargon.
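
If you fancy turning this tip into a quick sanity check, here's a minimal sketch in Python (standard library only). It assumes a hand-picked list of the jargon words above - swap in your own repeat offenders - and treats an unusually even sentence length as another warning sign. The numbers it prints are rough signals to back up your own reading, not a verdict.

    import re
    from collections import Counter
    from statistics import mean, pstdev

    # Hand-picked ChatGPT favourites from this article.
    # Extend the set with your own repeat offenders.
    JARGON = {"leverage", "integrate", "manifest"}

    def repetition_report(text):
        words = re.findall(r"[a-z']+", text.lower())
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        lengths = [len(s.split()) for s in sentences]
        counts = Counter(words)
        return {
            # Jargon hits per 1,000 words.
            "jargon_per_1k": 1000 * sum(counts[w] for w in JARGON) / max(len(words), 1),
            # Human sentence lengths vary; a low spread suggests
            # the flat, even rhythm described in tip 1.
            "sentence_length_spread": pstdev(lengths) if len(lengths) > 1 else 0.0,
            "avg_sentence_length": mean(lengths) if lengths else 0.0,
            # The five most repeated words, jargon or otherwise.
            "most_repeated": counts.most_common(5),
        }

    sample = ("The future of copywriting holds immense potential driven by "
              "advancements in technology. Businesses can leverage these "
              "advancements to integrate AI into their workflows. This could "
              "manifest as entirely new processes.")
    print(repetition_report(sample))

The sample deliberately contains all three jargon words, so the report flags a high jargon rate. Paste in a paragraph you're suspicious of and compare the numbers against something you wrote yourself.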

4. No emotion or personality in the writing

Humans have their own mannerisms and ways of speaking, which will always find their way into written text eventually. You might use certain words or phrase things in a certain way. Often, these mannerisms go against normal grammatical rules or established ways of talking.

On the other hand, ChatGPT never bends the rules. When you ask it to write in a more informal tone, it sounds like your embarrassing older relation trying to get down with the kids. Case in point: here's what happened when I asked it to explain the challenges of living in Switzerland:

Politics, Swiss-Style: Switzerland's political scene is like a funky puzzle with direct democracy and federal jive. But sometimes, getting stuff done feels slower than waiting for your phone to update. Time for some political swag upgrades, maybe?

No comment.

5. Its 'research' is always surface-level

When you ask ChatGPT to burp out a block of text, there's no doubt that it sometimes sounds impressive. I mean, it can really seem like it knows what it's talking about. But if you dig a little deeper into where it's getting its information from, you'll soon see that its research is very superficial. To illustrate the point, you won't find any of these things in AI-generated text:

  • Quotes from actual humans. If you want to write a piece of content about a certain topic, it always helps to include quotes or perspectives from experts in the field. That way it becomes a balanced, informative piece taken from several sources, and not just your opinion. However, ChatGPT will never include original quotes like this.
  • Reliable sources. ChatGPT may come across as trustworthy, but it regularly makes mistakes. It even includes a little disclaimer on its website: 'ChatGPT can make mistakes. Consider checking important information.' So let's say you're writing a blog for a health clinic. Any time you make a questionable medical or scientific claim, you should include a link to the source. And that source should always be a scientific journal or clinical study. Not a random blog you found on the internet. ChatGPT often can't cite its sources, and when it does they can be unreliable. That's because it has no real way of judging whether a source is reliable or not. But you can, because you're human.
  • Independent research. Instead of just Googling things, try to do a bit of research that doesn't easily pop up on Google. Medical studies. Newspaper clippings. It may be time-consuming, but this is the kind of research ChatGPT can't and won't ever do.

These tips also double up as great ways of writing better than AI.

6. No slang or informal language

We've kinda touched on this already, but one of the best ways of spotting AI-generated text is the lack of slang or informal language.

Humans are constantly innovating with language, finding new ways of expressing things and bending the language as we see fit. And a lot of the time, those subtle little changes can't be picked up by an AI language model. Firstly, because a lot of slang and informal language comes out in spoken rather than written form. And secondly, because AI models can't scan the places where this language is most often used: private chats and messages between friends.

It's why you get cringey examples like this when it does try to write in what it thinks is an informal tone of voice:

Housing Hustle: Finding a sweet pad in Switzerland, especially in hip cities like Zurich and Geneva, can feel like searching for a unicorn. It's like, "Where's the affordable housing party at?" 'Cause rent and home prices are shooting up faster than a SpaceX rocket.

Sorry to keep using this Switzerland answer, but it really was a goldmine of cringe.

If all else fails, there are also plenty of AI detection tools that claim to recognise AI-generated text. However, try not to rely on them too much. Use them to confirm suspicions you've formed with the techniques above, and you'll become an AI-spotting machine in no time.
