Recently, we read reports that educators are using ChatGPT to create assignments for high school kids. In return, those same kids are using AI to write their answers.
Through a kind of technological evolution, AI has found a way to pull humans into a loop where it controls the narrative.
With the cat now out of the bag, humans have returned to the fight armed with AI content detectors. If you think it might be a machine talking to you, you can check…
…for now.
Kind of.
Why AI content needs detecting
Education isn’t the only place where AI content hides:
1: If you paid for humans, you should be getting humans
There’s a reason our clients pay us to write. We are writers. We craft Key Messages, carefully plan and carry out website rewrites, and work hard at copywriting to get the right words in the right places. Nobody who pays us would be happy if we pushed a button while we were at lunch and came back to 100 blog posts. (BTW, this is an easy ChatGPT trick – for the mediocre-est of content, just ask it to “give me ten blog titles for a site about jellyfish… now give me ten talking points for each title… now, for each talking point, expand it into five points and use those to write the blogs”.)
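If you’re wondering how little effort that takes, here’s a minimal sketch of that prompt chain. It assumes the older (pre-v1) openai Python client and gpt-3.5-turbo; the jellyfish prompts are just the illustrative ones from above.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: any valid OpenAI API key

history = []  # the running conversation, so each prompt builds on the last reply

def ask(prompt):
    """Send one prompt in the ongoing chat and return the model's reply."""
    history.append({"role": "user", "content": prompt})
    reply = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=history,
    ).choices[0].message["content"]
    history.append({"role": "assistant", "content": reply})
    return reply

titles = ask("Give me ten blog titles for a site about jellyfish.")
points = ask("Now give me ten talking points for each title.")
posts = ask("For each talking point, expand it into five points and "
            "use those to write the blog posts.")
```

That’s the entire “content strategy”: lunch, button, 100 posts.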
2: Intentional misinformation and sabotage
Imagine a large-scale misinformation campaign posting intelligently across tens of thousands of social accounts, often replying directly within comment threads.
We asked GPT-3: Write a vitriolic counterpoint to the following tweet:
Adding a splash of milk to your cup of coffee can have anti-inflammatory effects, a new study reports. Researchers say the combination of polyphenols and proteins doubles anti-inflammatory properties in immune cells.
BN Sullivan PhD on Twitter
Without much effort, davinci-003 turns into a not-quite-mansplainer quickly enough to be troublesome at scale:
It’s important to note that adding milk to your coffee also increases your daily calorie intake and saturated fat intake. If you’re trying to maintain a healthy lifestyle, you should consider other alternatives to get the anti-inflammatory benefits, like adding cinnamon or turmeric to your cup of joe.
GPT-3 (text-davinci-003)
Pair automated sentiment analysis that hunts for positive reviews of a product with an angry bot trained to respond and refute – not great.
3: Unintentional misinformation
ChatGPT can produce output so convincing, and mostly so accurate, that checking it for inaccuracies becomes almost as time-consuming as writing it yourself.
For example, we asked it about Foyles Bookstore (which, obviously, we know pretty well). It not only mentioned the founding brothers, William and Gilbert Foyle, it added a convincing third sibling: Christopher Foyle. If you didn’t know there were only two bookselling Foyle brothers, you might not realise that the fictional detective Christopher Foyle is no relation – or that the real Christopher Foyle, grandson of a founder, was a long way from being born in 1903.
4: Bots posing as humans
Customer service bots seem harmless enough. But when they present as human, we face some new threats.
You might believe your new Samantha-bot friend is empathetic to your issue and will go out of “her” way to help because of the story you just told her about Granny and the lost glasses. She doesn’t care. When you discover she’s a machine, your expectations and trust in the brand collapse.
Samantha-bot is also most likely biased. She’s basically a cult member with no experience of the world beyond her training data. And, like a human without the social graces to temper herself, she might be quite opinionated in an ugly way.
Human scammers are bad enough, but imagine a bot scammer. Able to draw on vast training data for social engineering, Samantha-bot will have your secrets in no time.
5: Sometimes, you might want to trust the machines?
Honestly, I can’t think of any reasons why AI content would be good for the reader right now. I thought I could, but I can’t.
Detect, detective!
Other than the slightly dull feeling you get when reading AI posts, how can you tell if something has been written by AI?
Well, that’s currently tricky.
Use your head
If you’ve just been sent a load of copy and you suspect it might be AI content rather than human, start by looking at the language itself.
- Is it repetitive? And does it sound like an essay you wrote at school (not you personally, obvs)? Is it a bit samey in language and format?
- AI’s not great at starting paragraphs without them sounding contrived.
- Is it “bursty”? Humans tend to write creatively in bursts, followed by a stretch of duller text afterwards (see the rough sketch below).
Current AI models can’t generate genuinely new ideas about an event or concept they haven’t “experienced”. Creativity is a problem for them: even when they seem to be producing something new, it often reads like a multi-layered mashup of existing ideas.
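None of this is how the commercial detectors work, but if you want a feel for the “bursty” test, here’s a back-of-the-envelope sketch of our own (the file name and the scoring idea are purely illustrative): it simply measures how much sentence length varies across a piece of copy.

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Rough proxy for 'burstiness': how much sentence length varies.

    Human copy tends to mix long and short sentences; a lot of AI
    output keeps them suspiciously uniform. A low score = look closer.
    """
    sentences = [s for s in re.split(r"[.!?]+\s+", text.strip()) if s]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    # Coefficient of variation: standard deviation relative to mean length.
    return statistics.stdev(lengths) / statistics.mean(lengths)

with open("suspect_copy.txt") as f:
    print(f"burstiness score: {burstiness(f.read()):.2f}")
```

A low score isn’t proof of anything – plenty of humans write flat prose – it’s just one more reason to keep reading with an eyebrow raised.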
Use the machines
AI content detection tools are a very long way from perfect, and linguistically they’ll be as interesting a battleground as SERPs have been.
OpenAI themselves explain why it’s tough to call the machines out. Their own classifier struggles with short texts, with human writing that happens to be dull, with anything not written in English, and with text that has been edited after generation.
These tools are evolving too, so copy that was “definitely human” six weeks ago is “definitely machine” now. We saw this for ourselves when testing 500 words produced by three AI tools talking to each other around five prompts rewritten by humans. In late 2022 that copy tricked every detector into thinking it was human. Now, in 2023, Copyscape and OpenAI can say with more confidence, “No, this is AI content”.
Current AI detection tools
We fed some mostly AI-generated text (with human oversight) into these tools and got found out right away. Some were very angry with us – “99% machine! 1% human!” Curiously, OpenAI’s own recently released detector was the most cautious, sticking with “most likely” machine.
(NB There’s no point listing them – they change a lot, and the best are generally on the first two pages of Google.)
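If you’d rather script the check than paste text into a web form, one public (if ageing) example is the GPT-2 output detector OpenAI released on Hugging Face. Treat this as an illustration, not a recommendation – the label names and accuracy depend entirely on the model you pick, and newer models’ text will slip past it far more easily than GPT-2’s did.

```python
from transformers import pipeline

# Assumption: the publicly hosted GPT-2 output detector on Hugging Face.
detector = pipeline("text-classification", model="roberta-base-openai-detector")

suspect_copy = "Jellyfish have drifted through the world's oceans for millions of years..."
result = detector(suspect_copy, truncation=True)[0]

# The label names (commonly "Real"/"Fake" for this model) and the confidence
# score come straight from the classifier – interpret both with caution.
print(f"{result['label']}: {result['score']:.0%}")
```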
And if you’re a writer?
We use generative text for very specific cases. It’s particularly useful for ideation and writing prompts. Some of this piece was inspired by conversations with the machine – that’s why its Origin Mark is labelled HUMA.
Run your own content through some of the machine detection systems just in case it triggers false positives. Mark your work with Origin Marks. And if you want all-human copywriting, call us.