02 Aug 2021
Release
5 minute read

AI, you have a lot of explaining to do

For machines to develop capabilities like common sense, they must be able to do more than pick correct answers—they must be able to justify their decisions. IBM researchers have developed ways for AI to explain the reasoning behind common-sense decisions.

Imagine a scenario where a child and an AI system are asked the same common-sense question:

If you are hungry, what should you do?

  • Eat?
  • Or play?

Given the advances made by today’s AI systems, both the child and the AI are likely to conclude that a hungry person should eat.

When asked to justify their answers—which are correct—the child would have little difficulty explaining why she should eat when hungry. The AI, however, would be hard pressed to rationalize its response.

At this week’s Association for Computational Linguistics (ACL) conference, we’re presenting a new dataset to help bridge the AI explainability gap in common-sense question answering,1 as illustrated by the example above. We have created and publicly released a first-of-its-kind dataset, Explanations for CommonsenseQA (ECQA), to teach AI systems how to reason about the correct and incorrect answers to everyday common-sense questions.

The idea is to improve AI’s trustworthiness by giving it the ability to explain its answers, correct as well as incorrect.

Introducing AI to the concept of an explanation

One of our greatest challenges was clearly defining what we mean when we ask an AI system to explain why one answer is correct while another is incorrect.

Working closely with the Indian Institute of Technology (IIT) Delhi as part of IBM’s AI Horizons Network, we developed the notion of positive and negative properties as the defining framework for an “explanation.” In this setup, a property is a common-sense fact. For example, here is a set of possible common-sense facts that explain the correct answer, “eat,” to our question about hunger:

  • Eating food gives the body energy.
  • Energy is what the body needs to soothe its hunger.

These properties can either be supporting facts in favor of the correct answer choice (positive properties) or justifications refuting the incorrect answer choices (negative properties).
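To make the framework concrete, here is a minimal sketch in Python of how one such annotated example might be represented. The class and field names are our own illustration, not the dataset’s actual schema.

```python
# A minimal, illustrative representation of an ECQA-style example.
# Class and field names are hypothetical, not the dataset's schema.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class ECQAExample:
    question: str                       # the common-sense question
    correct_choice: str                 # the annotated correct answer
    incorrect_choices: List[str]        # the remaining answer options
    positive_properties: List[str]      # facts supporting the correct choice
    negative_properties: Dict[str, List[str]]  # facts refuting each incorrect choice
    free_flow_explanation: str          # natural-language explanation

example = ECQAExample(
    question="If you are hungry, what should you do?",
    correct_choice="eat",
    incorrect_choices=["play"],
    positive_properties=[
        "Eating food gives the body energy.",
        "Energy is what the body needs to soothe its hunger.",
    ],
    negative_properties={
        "play": ["Playing consumes energy rather than providing it."],
    },
    free_flow_explanation=(
        "Eating provides the energy the body needs when hungry, "
        "whereas playing would only use more energy."
    ),
)
```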

The science behind common sense

The emergence of large QA datasets, combined with powerful pre-trained language models, has helped rapidly advance the field of automated question answering in the past few years. The CommonsenseQA (CQA) dataset created in 2019 by researchers at Tel Aviv University and Allen Institute for Artificial Intelligence, for example, lists a series of common-sense questions and the human-annotated answers for them.

For all its strengths, CQA doesn’t enable AI to explain what makes a given answer correct or incorrect.2

Our aim is to retrieve and generate explanations for a given set of triplets, each comprising a question, its correct answer choice, and its incorrect answer choices.

To create the ECQA dataset, we used crowdsourcing to annotate more than 11,000 QA pairs from the CQA dataset with positive and negative properties, as well as free-flow explanations written in natural language. This crowdsourced data enabled us to build AI models that offer explanations for the correct as well as incorrect answer choices for a given common-sense question.
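As a rough sketch of this triplet formulation, each annotated example can be expanded into one record per answer choice, pairing the question and choice with a correctness flag and the corresponding gold properties. The record layout below is hypothetical, not the released dataset’s format.

```python
# Illustrative only: expand one annotated example into per-choice records
# of the form (question, answer choice, correctness flag, gold properties).
def make_records(example):
    records = [{
        "question": example["question"],
        "choice": example["correct_choice"],
        "is_correct": True,
        "gold_properties": example["positive_properties"],
    }]
    for choice in example["incorrect_choices"]:
        records.append({
            "question": example["question"],
            "choice": choice,
            "is_correct": False,
            "gold_properties": example["negative_properties"].get(choice, []),
        })
    return records

records = make_records({
    "question": "If you are hungry, what should you do?",
    "correct_choice": "eat",
    "incorrect_choices": ["play"],
    "positive_properties": ["Eating food gives the body energy."],
    "negative_properties": {"play": ["Playing consumes energy rather than providing it."]},
})
print(len(records))  # 2: one record per answer choice
```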

Figure 1, below, illustrates an example from the CQA dataset, along with our human-annotated explanation, containing positive properties supporting the correct answer choice (in green), negative properties refuting the incorrect choices (in red), and a free-flow natural-language explanation (in blue). The figure also shows the CoS (common sense) explanation from prior work (Rajani et al., 2019) for the same example.3

Figure 1: An illustrative example of ECQA annotation.

We also created a retrieval system, eXplanation Retriever (XR), shown in Figure 2, that represents properties in a latent space (a representation into which data is transformed using deep learning techniques) and retrieves facts for a given CQA example from a common-sense knowledge corpus.

Figure 2: An illustrative diagram of the XR system architecture.
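For intuition about latent-space retrieval, here is a minimal sketch using an off-the-shelf sentence encoder from the sentence-transformers library. The actual XR architecture differs; the toy corpus below merely stands in for a real common-sense knowledge corpus.

```python
# A toy embed-and-rank retriever in the spirit of XR (not its actual
# architecture): encode facts and query into a shared latent space,
# then rank facts by cosine similarity.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")

corpus = [
    "Eating food gives the body energy.",
    "Energy is what the body needs to soothe its hunger.",
    "Playing consumes energy rather than providing it.",
    "Water quenches thirst.",
]
corpus_emb = encoder.encode(corpus, convert_to_tensor=True)

# Query built from a CQA example: the question plus an answer choice.
query = "If you are hungry, what should you do? eat"
query_emb = encoder.encode(query, convert_to_tensor=True)

# Retrieve the two most similar facts.
hits = util.semantic_search(query_emb, corpus_emb, top_k=2)[0]
for hit in hits:
    print(corpus[hit["corpus_id"]], round(hit["score"], 3))
```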

Our generation system, eXplanation Generator (XG), is based on OpenAI’s GPT-2 model. XG can generate common-sense properties for a given question, answer choice, and (in)correctness flag for that choice (shown in the upper part of Figure 3). XG also includes a free-flow model that generates explanations in natural language (shown in the lower part of Figure 3).

Figure 3: An illustrative diagram of the XG system architecture.
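The sketch below shows the general shape of conditional generation with GPT-2 via the Hugging Face transformers library. The prompt format is invented for illustration, and an off-the-shelf GPT-2, unlike the fine-tuned XG, will not produce sensible properties.

```python
# Conditional generation with vanilla GPT-2, for illustration only.
# XG is fine-tuned on ECQA; this pretrained model is not, so its
# output will be rough. The prompt format here is hypothetical.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Condition on the question, an answer choice, and a correctness flag.
prompt = (
    "question: If you are hungry, what should you do? "
    "choice: eat flag: correct properties:"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```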

Evaluating the common sense verdict

XR outperformed a popular information retrieval baseline, BM25, by a relative gain of 100% when retrieving explanations for the correct answers from our ECQA corpus of annotated properties.
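For readers unfamiliar with the baseline, BM25 is a classic term-overlap ranking function. A toy version with the rank-bm25 package might look like this; the corpus and query are stand-ins, not our evaluation setup.

```python
# A toy BM25 retriever using the rank-bm25 package (pip install rank-bm25).
from rank_bm25 import BM25Okapi

corpus = [
    "Eating food gives the body energy.",
    "Energy is what the body needs to soothe its hunger.",
    "Playing consumes energy rather than providing it.",
]
bm25 = BM25Okapi([doc.lower().split() for doc in corpus])

query = "if you are hungry, what should you do? eat".split()
print(bm25.get_top_n(query, corpus, n=2))  # two highest-scoring facts
```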

Our “property generation model” earned a respectable alignment F1 score of 36.4 between generated and gold properties. Here, the F1 score captures the quality of the overlap between the generated set of properties and the gold set, averaged over the entire test data. Further, our “free-flow explanation generation model” achieved an STS-BERT score (a semantic similarity score) of 61.9 against the gold free-flow explanations.
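As a simplified illustration of alignment F1, the sketch below scores the overlap between generated and gold property sets. Note that the actual evaluation aligns properties semantically; exact string matching is used here only to keep the example short.

```python
# Set-overlap F1 between generated and gold properties (simplified:
# the real metric aligns properties semantically, not by exact match).
def property_f1(generated, gold):
    if not generated or not gold:
        return 0.0
    matched = len(set(generated) & set(gold))
    precision = matched / len(generated)
    recall = matched / len(gold)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

gold = [
    "Eating food gives the body energy.",
    "Energy is what the body needs to soothe its hunger.",
]
generated = [
    "Eating food gives the body energy.",
    "Food tastes good.",
]
print(round(property_f1(generated, gold), 3))  # 0.5
```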

To ensure that the performance numbers for both of our generative models capture human perception of the quality of generated explanations, we chose the semantic similarity metrics (from among STS-BERT, SPICE, CIDEr, METEOR, and ROUGE) that we found to be most strongly correlated with human judgement.
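This selection step boils down to correlating each candidate metric with human ratings over the same explanations, then keeping the metric with the strongest correlation. A sketch using Spearman correlation from SciPy, with made-up scores:

```python
# Pick the automatic metric most correlated with human judgement.
# All scores below are made up for illustration.
from scipy.stats import spearmanr

human_ratings = [4.5, 2.0, 3.5, 1.0, 5.0]  # one rating per explanation
metric_scores = {
    "STS-BERT": [0.82, 0.41, 0.66, 0.20, 0.90],
    "ROUGE":    [0.55, 0.50, 0.48, 0.30, 0.60],
}
for name, scores in metric_scores.items():
    rho, _ = spearmanr(scores, human_ratings)
    print(f"{name}: Spearman rho = {rho:.2f}")
```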

Our research opens up several avenues for researchers and practitioners in the field. One prominent real-world application is primary education, where the techniques developed in this work could be used to build novel AI apps that converse with children to boost their understanding of the world around them, using common-sense explanations. This could include helping them better understand why routine phenomena in their lives behave as they do, e.g., why one should give way to a speeding ambulance.

Further, the underlying approach to introducing explainability into AI could be applied beyond common-sense questions. With the right domain expertise added to the ECQA dataset, AI could be made to explain right and wrong answers in any number of areas, including science, medicine, and finance.

Learn more about:

Knowledge and Reasoning: At IBM Research, we’re working on systems to help AI better reason with the tasks it’s presented, such as understanding context and analogies, comprehension, and planning through scenarios.

Conversational AI: The demand for virtual agents that can handle customer needs has continued to increase dramatically. At IBM Research, we’re building the next generation of artificial intelligence systems.

References

  1. Shourya Aggarwal, Divyanshu Mandowara, Vishwajeet Agrawal, Dinesh Khandelwal, Parag Singla, and Dinesh Garg. 2021. Explanations for CommonsenseQA: New Dataset and Models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers).

  2. Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019. CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge. In Proceedings of NAACL-HLT, pages 4149–4158.

  3. Nazneen Fatema Rajani, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Explain Yourself! Leveraging Language Models for Commonsense Reasoning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics.