How good are marketers at detecting AI content?

AI content is everywhere.

Let’s skip the speculation over how it’s going to change everything, why it’s ruining/saving society, and whether all creative work is going to be replaced by robots.

The only question that matters: Can we actually identify AI content when we see it?

That’s part of what we wanted to know when we surveyed over 150 marketers about all things AI in December 2022. You can read the full survey results to learn things like how marketers are currently using AI, what they see as the ethical problems of AI content, and whether they think AI content will harm their SEO.

Just want to prove how much better you are than other marketers at distinguishing humans from the machines? Here’s the same quiz we gave in the survey:



How everyone else did

Didn’t do quite as well on this as you did on your AP English test? Neither did (almost) anyone else.

On any given question, only five percent of respondents correctly identified which statements were AI-generated and which were human. The most common outcome was to identify one of the AI-generated statements, miss the other, and mistakenly flag one of the human-written sentences as AI.

In other words, it was a crap shoot.

A couple other interesting observations from the data:

  • For each of the four questions, the sentence respondents selected most often was, in fact, AI-generated. (That’s a win for the humans.) Beyond that, though, marketers did no better than chance.
  • Marketers were far more likely (more than four times as likely, in fact) to completely miss all of the AI-generated statements than to correctly identify all of them. (That’s a win for the robots.)

Of course, humans created the robots, so a win for robots is a win for all of us. Theoretically. (But the us-versus-them narrative lets us feel like the flesh-and-blood underdogs defending ourselves against the cold metal of the robots in an apocalyptic sci-fi novel, so we’re sticking with that framing for now.)

Caveats and disclaimers

#1: Currently, AI does much better at generating short bits of text than at long-form writing. Case in point: the article I asked AI to write comparing video conferencing platforms to ice cream flavors. We may not do so well at identifying individual AI-generated sentences, but we’d undoubtedly do much better if we were reading full articles and deciding whether each one was written by AI or a human.

#2: We sent out our survey around the same time ChatGPT was released. AI content had been growing quickly for years, but over the last several weeks it has exploded into the public eye. Both the technology itself and the marketing industry’s perception of it are changing daily, so if you’re dabbling in the space, try not to get whiplash. (And read our review of the top AI content services to learn which tools are worth trying, and which aren’t.)

#3: We have a horse in this race. We’ve been going deep on AI over the last year to figure out whether, and how, it can help our human writers create quality content for our customers. The answer? Our Human-Crafted AI Content, which combines the best of AI and human writers to produce content at a lower cost than our fully human solution, without the major pitfalls of other existing AI content. It’s not for everyone, but if you’re interested, we’d love to hear from you.

P.S. What about automated AI detection?

As weird as it may sound, our best chance for distinguishing between human and AI content is likely AI itself. Especially since the release of ChatGPT, we’ve seen a rise in AI detection tools that use machine learning algorithms to identify content that was likely written by AI.

These tools aren’t perfect yet, and detection will likely turn into an arms race: even as AI detection models get better at identifying AI-generated content, other AI models will get better at producing content that fools those same detectors.
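If you’re curious what’s under the hood, one common heuristic that detection tools build on is perplexity: how predictable a passage looks to a language model, with very predictable text being more likely to be machine-generated. Below is a minimal sketch of that idea using the open-source GPT-2 model via Hugging Face’s transformers library. The threshold is arbitrary and for illustration only; this is not how any specific commercial detector works, and real tools use far more sophisticated signals.

```python
# Toy perplexity-based "AI detector" sketch (illustration only).
# Assumption: lower perplexity (more predictable text) suggests AI authorship.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Score how predictable the text is to GPT-2 (lower = more predictable)."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Using the input as its own labels gives average cross-entropy per
        # token; exponentiating that loss yields perplexity.
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return torch.exp(loss).item()

THRESHOLD = 30.0  # arbitrary cutoff, purely for demonstration

sample = "Video conferencing platforms are essential tools for modern remote teams."
score = perplexity(sample)
print(f"Perplexity: {score:.1f} -> {'likely AI' if score < THRESHOLD else 'likely human'}")
```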

It’s not always an adversarial relationship between AI generation and AI detection, though. OpenAI, the company behind ChatGPT and the GPT-3 model that powers most of the AI writing tools on the market today, is working on a way to watermark content produced by its models. This would help prevent the technology from being used for nefarious purposes, whether that’s plagiarism at a university or a misinformation campaign from a bad political actor.

So, if your content strategy depends on passing off pure AI content as your own, that’s not a great strategy for a lot of reasons. Even if you don’t care whether your content provides value to your readers, you should at least care that Google may not like it.

Megan Skalbeck

Megan traffics in words. Whether that’s spinning up a story on the blog or paring down a conversation on the podcast, it’s all elementary math in the end: She adds, subtracts, multiplies for effect, and divides for readability. When she’s not helping words live their most meaningful life, she’s usually in the woods, in the ocean, on a rock, or on the road.