
Researchers Develop Set of Questions Able to Confuse the Best Computers

Researchers from the University of Maryland have created a set of questions that are easy for people to answer but stump some of the best computer question-answering systems that exist today. The team generated the questions through a human-computer collaboration, producing a dataset of more than 1,200 questions. If a computer system can learn to master these questions, it will have a better grasp of human language than any system that currently exists.

The work was published in an article in the journal Transactions of the Association for Computational Linguistics.

Jordan Boyd-Graber, an associate professor of computer science at UMD and senior author of the paper, spoke about the new developments.

“Most question-answering computer systems don’t explain why they answer the way they do, but our work helps us see what computers actually understand,” he said. “In addition, we have produced a dataset to test on computers that will reveal if a computer language system is actually reading and doing the same sorts of processing that humans are able to do.” 

Currently, questions for these programs and systems are generated either by human authors or by computers. The problem is that human question writers aren't aware of all the elements of a question that confuse computers. Computer-generated questions, on the other hand, rely on formulas, produce fill-in-the-blank questions, or make mistakes, all of which can result in nonsense.

To enable the human-computer cooperation that generated the questions, Boyd-Graber and his team of researchers created a special computer interface. According to them, it can reveal what a computer is “thinking” while a human types out a question. The writer can then edit the question to exploit the computer's weaknesses, generating confusion for the computer.

As the writer types the question, the computer's guesses are displayed in ranked order, and the words responsible for those guesses are highlighted.

When the system answers a question correctly, the interface highlights the words or phrases that led to the answer. With that information, the author can edit the question to make it harder for the computer while keeping its meaning intact. Eventually the computer is confused, but expert humans can still answer.
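The idea of highlighting the words that drive a model's guess can be illustrated with a simple leave-one-out importance check: remove each word in turn and see how much the model's best score drops. The toy scoring model, candidate answers, and keyword sets below are all illustrative assumptions, not the authors' actual system.

```python
# Minimal sketch (not the authors' interface) of ranked guesses plus
# leave-one-out word importance for a toy question-answering model.

def score(question_words, answer_keywords):
    """Toy relevance score: how many question words appear in the answer's keyword set."""
    return sum(1 for w in question_words if w in answer_keywords)

def rank_answers(question, candidates):
    """Return candidate answers ranked by the toy model's score (best guess first)."""
    words = question.lower().split()
    return sorted(candidates, key=lambda a: score(words, candidates[a]), reverse=True)

def word_importance(question, candidates):
    """Leave-one-out: a word is important if dropping it lowers the best score."""
    words = question.lower().split()
    best = max(score(words, kw) for kw in candidates.values())
    importance = {}
    for i, w in enumerate(words):
        rest = words[:i] + words[i + 1:]
        importance[w] = best - max(score(rest, kw) for kw in candidates.values())
    return importance

# Hypothetical candidates with keyword sets the toy model matches against.
candidates = {
    "George Washington": {"first", "president", "revolutionary"},
    "Abraham Lincoln": {"civil", "war", "emancipation"},
}

question = "who was the first president"
print(rank_answers(question, candidates))     # guesses in ranked order
print(word_importance(question, candidates))  # words driving the top guess
```

A question writer using such a display would see that "first" and "president" carry all the weight, and could rephrase those cue words (for instance, as a paraphrase) to probe whether the model still answers correctly.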

Working together, the humans and computers developed 1,213 questions that the computers were unable to answer. The researchers tested the questions in a competition between human players and the computers. The human players included high school trivia teams and “Jeopardy!” champions. The weakest human team was able to defeat the strongest computer system.

Shi Feng, a computer science graduate student at UMD and co-author of the paper, spoke about the new research.

“For three or four years, people have been aware that computer question-answering systems are very brittle and can be fooled very easily,” she said. “But this is the first paper we are aware of that actually uses a machine to help humans break the model itself.” 

The questions revealed six language phenomena that confuse computers, falling into two categories. The first is linguistic phenomena, which includes paraphrasing, distracting language, and unexpected contexts. The second is reasoning skills, which includes logic and calculation, mental triangulation of elements in a question, and putting together multiple steps to form a conclusion.

“Humans are able to generalize more and to see deeper connections,” Boyd-Graber said. “They don’t have the limitless memory of computers, but they still have an advantage in being able to see the forest for the trees. Cataloguing the problems computers have helps us understand the issues we need to address, so that we can actually get computers to begin to see the forest through the trees and answer questions in the way humans do.”

This research lays the foundation for computer systems that may eventually master human language, and the dataset will continue to be developed and improved.

“This paper is laying out a research agenda for the next several years so that we can actually get computers to answer questions well,” Boyd-Graber said.


Alex McFarland is an AI journalist and writer exploring the latest developments in artificial intelligence. He has collaborated with numerous AI startups and publications worldwide.