
ChatGPT vs Human Writing: Can You Tell the Difference?

Jan 5, 2025 · 5 min read

We conducted a blind study with 500 participants, presenting them with pairs of text passages and asking them to identify which was written by ChatGPT and which by a human. The results were striking, and they reveal important truths about the current state of AI writing.

Overall, participants correctly identified AI text only 52% of the time, barely better than chance. However, when we broke down the results by text type, interesting patterns emerged. Participants were better at spotting AI in creative writing (61% accuracy) but struggled with factual content (48% accuracy).

The tells that gave AI away were subtle but consistent. AI text tended to use more hedging language ("It is worth noting," "One might argue"), had more uniform paragraph lengths, and avoided strong personal opinions. Human writing, by contrast, was messier, more opinionated, and more varied in structure.
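Two of these tells are easy to measure programmatically. The sketch below is a hypothetical illustration (not the detector used in the study, and the phrase list and function names are our own): it counts common hedging phrases and computes how uniform paragraph lengths are, the two signals described above.

```python
import statistics

# Hypothetical phrase list for illustration; a real detector would use a
# much larger, empirically derived set.
HEDGES = ["it is worth noting", "one might argue", "it is important to"]

def hedge_count(text: str) -> int:
    """Count occurrences of common hedging phrases (case-insensitive)."""
    lower = text.lower()
    return sum(lower.count(h) for h in HEDGES)

def paragraph_length_stdev(text: str) -> float:
    """Standard deviation of paragraph word counts.

    Lower values mean more uniform paragraph lengths -- one of the
    signals associated with AI-generated text in our study.
    """
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    lengths = [len(p.split()) for p in paragraphs]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)
```

Neither signal is conclusive on its own; human writers hedge too. But combined over many features, simple statistics like these are the kind of fingerprint that automated detectors exploit.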

When we ran the same texts through AI detectors like GPTZero and Originality.ai, the machines performed significantly better than humans, with detection rates around 85-90%. This confirms that AI detection relies on statistical patterns invisible to the human eye.

The implication is clear: while AI writing can fool human readers, it cannot yet fool specialized detection tools. This is why humanization tools are so valuable. By altering the statistical fingerprints of AI text while preserving readability, tools like WriteHumane bridge the gap between AI efficiency and human authenticity.
