
New Study Shows People Can Learn to Spot Machine-Generated Text


The increasing sophistication and accessibility of artificial intelligence (AI) has intensified longstanding concerns about its impact on society. The most recent generation of chatbots has only heightened those concerns, raising fears about job market integrity and the spread of fake news and misinformation. In response, a team of researchers at the University of Pennsylvania School of Engineering and Applied Science sought to empower tech users to mitigate these risks.

Training Yourself to Recognize AI Text

Their peer-reviewed paper, presented at the February 2023 meeting of the Association for the Advancement of Artificial Intelligence, provides evidence that people can learn to spot the difference between machine-generated and human-written text.

The study, led by Chris Callison-Burch, Associate Professor in the Department of Computer and Information Science (CIS), along with Ph.D. students Liam Dugan and Daphne Ippolito, demonstrates that AI-generated text is detectable.

“We've shown that people can train themselves to recognize machine-generated texts,” says Callison-Burch. “People start with a certain set of assumptions about what sort of errors a machine would make, but these assumptions aren't necessarily correct. Over time, given enough examples and explicit instruction, we can learn to pick up on the types of errors that machines are currently making.”

The study draws on data collected through “Real or Fake Text?,” an original web-based training game that transforms the standard experimental method for detection studies into a more accurate recreation of how people actually use AI to generate text.

In standard methods, participants indicate in a yes-or-no fashion whether a machine produced a given text. The Penn model refines this into an effective training task: each example begins as human-written text and, at some point, transitions into machine-generated text, and participants mark where they believe that transition occurs. Trainees then identify and describe the textual features that signal machine generation, and receive a score.
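To make the setup concrete, here is a minimal sketch of how such a boundary-marking task might be scored. The article does not describe the game's actual scoring rule, so the scheme below, which gives full credit for an exact guess, decaying partial credit for late guesses, and nothing for early ones, is purely an illustrative assumption.

# A minimal sketch of scoring a boundary-marking detection task.
# The decaying-credit scheme is an assumption for illustration; the
# article does not specify how "Real or Fake Text?" awards points.

def score_guess(true_boundary: int, guessed_boundary: int,
                max_points: int = 5) -> int:
    """Score a guess of the sentence index where generated text begins.

    Guessing before the true boundary earns nothing, since that text
    is still human-written; guessing at or after it earns points that
    decay with distance from the true transition.
    """
    if guessed_boundary < true_boundary:
        return 0
    return max(0, max_points - (guessed_boundary - true_boundary))

# Example: the passage switches to machine text at sentence 3.
print(score_guess(3, 3))  # exact guess -> 5 points
print(score_guess(3, 5))  # two sentences late -> 3 points
print(score_guess(3, 1))  # too early -> 0 points

A scheme like this rewards readers for noticing the transition as early as possible without penalizing them into guessing before it, which mirrors the training goal the researchers describe.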

Results of the Study

The results show that participants scored significantly better than random chance, providing evidence that AI-generated text is, to some extent, detectable and that people can train themselves to detect it. The finding outlines a reassuring, even exciting, future for our relationship with AI.

“People are anxious about AI for valid reasons,” says Callison-Burch. “Our study gives points of evidence to allay these anxieties. Once we can harness our optimism about AI text generators, we will be able to devote attention to these tools' capacity for helping us write more imaginative, more interesting texts.”

Dugan adds, “There are exciting positive directions that you can push this technology in. People are fixated on the worrisome examples, like plagiarism and fake news, but we know now that we can be training ourselves to be better readers and writers.”

The study provides a crucial first step in mitigating the risks associated with machine-generated text. As AI continues to evolve, so too must our ability to detect and navigate its impact. By training ourselves to recognize the difference between human-written and machine-generated text, we can harness the power of AI to support our creative processes while mitigating its risks.

Alex McFarland is a tech writer who covers the latest developments in artificial intelligence. He has worked with AI startups and publications across the globe.