Noam Slonim Principal Investigator, Project Debater team, IBM Research - Haifa
Project Debater Datasets
The development of an automatic debating system naturally involves advancing research in a range of artificial intelligence fields. This page presents several annotated datasets developed as part of Project Debater to facilitate this research. It is organized by the research sub-fields explained below.
Argument Mining is a prominent research frontier. Within this field, we distinguish between Argument Detection – the detection and segmentation of argument components such as claims and evidence; and Argument Stance Classification – determining the polarity of an argument component with respect to a given topic.
Beyond argument mining, a debating system should face the challenge of interactivity, i.e., the ability to understand and rebut the text of the opponent’s speech. Debate Speech Analysis is a new research field that focuses on this challenge.
Another important aspect of a debating system is the ability to interact with its surroundings in a human-like manner. Namely, it should be able to articulate arguments and listen to arguments made by others. Regarding the former, the text-to-speech system must demonstrate human-like expressiveness to keep human listeners engaged. The latter may call for speech-to-text systems that are especially designed for a debating scenario.
Finally, a debating system should naturally rely on more fundamental NLP capabilities. One example is the ability to assess the semantic relatedness of various pieces of text and glue them into a coherent narrative. The system should also be able to identify the basic concepts mentioned in the text. The corresponding benchmark data we released thus far in this context are described in the section on Basic NLP.
The various argument detection datasets differ in size (e.g., number of topics), type of element detected (claims, claim sentences, or evidence), and method used for detection (pre-selected articles vs. automatic retrieval). The table below lists the different datasets and provides information on their characteristics:
Argument Stance Classification and Sentiment Analysis
A debating system must distinguish between arguments that support its side in the debate and those supporting the opponent’s side. The following datasets were developed as part of the work on Project Debater’s stance classification engine.
The claim stance dataset includes stance annotations for claims, as well as auxiliary annotations for intermediate stance classification subtasks.
Manually identified and annotated claims from Wikipedia
Sentiment analysis is an important sub-component of our stance classification engine. The following two resources address sentiment analysis of complex expressions, which goes beyond simple aggregation of word-level sentiments. The first resource is a sentiment lexicon of idiomatic expressions, like “on cloud nine” and “under fire”. The second resource addresses sentiment composition – predicting the sentiment of a phrase from the interaction between its constituents. For example, in the phrases “reduced bureaucracy” and “fresh injury”, both “reduced” and “fresh” are followed by a negative word. However, “reduced” flips the negative polarity, resulting in a positive phrase, while “fresh” propagates the negative polarity to the phrase level, resulting in a negative phrase. Accordingly, “reduced” is part of our “reversers” lexicon, and “fresh” is part of the “propagators” lexicon.
5,000 frequently occurring idioms with sentiment annotation
Manually annotated idioms from Wiktionary
Sentiment composition lexicons containing 2,783 words and sentiment lexicons containing 66K unigrams and 262K bigrams.
Automatically learned from a large proprietary English corpus
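The composition rules described above can be illustrated with a minimal sketch. The lexicon entries and polarity scores below are invented for illustration; they are not taken from the released lexicons.

```python
# A toy sketch of phrase-level sentiment composition using
# illustrative "reversers" and "propagators" lexicons.
REVERSERS = {"reduced", "eliminated"}
PROPAGATORS = {"fresh", "ongoing"}
WORD_SENTIMENT = {"bureaucracy": -1, "injury": -1, "growth": +1}

def phrase_sentiment(modifier: str, head: str) -> int:
    """Compose the sentiment of a two-word phrase from its parts."""
    head_polarity = WORD_SENTIMENT.get(head, 0)
    if modifier in REVERSERS:
        return -head_polarity   # flip the head word's polarity
    if modifier in PROPAGATORS:
        return head_polarity    # propagate the head's polarity upward
    # fall back to simple word-level aggregation
    return WORD_SENTIMENT.get(modifier, 0) + head_polarity

print(phrase_sentiment("reduced", "bureaucracy"))  # 1  (positive)
print(phrase_sentiment("fresh", "injury"))         # -1 (negative)
```

Note how the same negative head word yields opposite phrase-level polarities depending on whether the modifier is a reverser or a propagator.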
Expert evidence (premise) is a commonly used type of argumentation scheme. Prior knowledge about the expert’s stance towards the debate topic can help predict the polarity of such arguments. For example, an argument made by Richard Dawkins about atheism is likely to have a PRO stance, since Dawkins is a well-known atheist. Such information can be extracted from Wikipedia categories: Dawkins, for instance, is listed under “Antitheists”, ”Atheism activists”, “Atheist feminists” and “Critics of religions”. The Wikipedia Category Stance dataset contains stance annotations of Wikipedia categories towards Wikipedia concepts representing controversial topics.
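The idea of using category stance to predict argument polarity can be sketched as a simple majority vote over stance-annotated categories. The lookup structure and category/topic pairs below are illustrative, not the released annotation schema.

```python
# Toy sketch: infer an expert's stance toward a topic from
# stance-annotated Wikipedia categories via a majority vote.
CATEGORY_STANCE = {  # (category, topic) -> +1 for PRO, -1 for CON
    ("Atheism activists", "atheism"): +1,
    ("Atheist feminists", "atheism"): +1,
    ("Critics of religions", "religion"): -1,
}

def expert_stance(categories, topic):
    """Return +1 (PRO), -1 (CON), or 0 (unknown) for the expert."""
    votes = [CATEGORY_STANCE[(c, topic)]
             for c in categories if (c, topic) in CATEGORY_STANCE]
    if not votes:
        return 0  # no annotated category matches this topic
    return 1 if sum(votes) > 0 else -1

print(expert_stance(["Atheism activists", "Atheist feminists"], "atheism"))  # 1
```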
In order to respond to an opponent’s speech, the system must process the opponent’s voice and “understand” its content. The provided dataset focuses on the Automatic Speech Recognition (ASR) component.
The following datasets relate to basic NLP tasks, addressed as part of Project Debater.
Predicting semantic relatedness between texts is a basic NLP problem with a wide variety of applications. Relatedness can be measured between several types of texts, ranging from words to documents. The relatedness datasets listed below differ in the type of elements considered (words, multi-word-terms, and concepts), number of topics from which the pairs were extracted, and number of annotated pairs.
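One common way to score relatedness between two terms is the cosine similarity of their vector representations. The sketch below uses small made-up vectors; it only illustrates the scoring mechanism, not any particular model behind the released datasets.

```python
import math

# Toy vectors standing in for learned term representations.
VEC = {
    "bank":  [0.9, 0.1, 0.3],
    "money": [0.8, 0.2, 0.4],
    "tree":  [0.1, 0.9, 0.2],
}

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def relatedness(term1, term2):
    return cosine(VEC[term1], VEC[term2])

# A related pair should score higher than an unrelated one.
assert relatedness("bank", "money") > relatedness("bank", "tree")
```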
The goal of Mention Detection is to map entities and concepts mentioned in text to the correct concept in a knowledge base. This process involves segmenting the text (as some concepts span multiple words) and disambiguating terms with more than one meaning.
3000 (500 train and 500 test for each of the three text sources)
Mix of Wikipedia articles and ASR/manual transcripts of speeches by expert debaters
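The two steps described above – segmentation and disambiguation – can be sketched with a greedy longest-match over a toy knowledge base, followed by picking the candidate concept whose profile best overlaps the surrounding context. The knowledge base here is invented for illustration.

```python
# Toy KB: surface form -> {candidate concept: context-word profile}.
KB = {
    "new york": {"New York City": {"city", "borough"},
                 "New York (state)": {"state", "albany"}},
    "jaguar": {"Jaguar (animal)": {"cat", "wild"},
               "Jaguar Cars": {"car", "vehicle"}},
}

def detect_mentions(tokens):
    """Greedy longest-match segmentation against the KB."""
    mentions, i = [], 0
    while i < len(tokens):
        for j in range(len(tokens), i, -1):  # longest span first
            surface = " ".join(tokens[i:j]).lower()
            if surface in KB:
                mentions.append((i, j, surface))
                i = j
                break
        else:
            i += 1
    return mentions

def disambiguate(surface, context_tokens):
    """Pick the candidate whose profile overlaps the context most."""
    context = {t.lower() for t in context_tokens}
    return max(KB[surface], key=lambda c: len(KB[surface][c] & context))

tokens = "the jaguar is a wild cat".split()
(i, j, surface), = detect_mentions(tokens)
print(disambiguate(surface, tokens))  # Jaguar (animal)
```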
Text clustering is a widely studied NLP problem. Clustering can be applied to texts at different levels, from single words to full documents, and can vary with respect to the clustering goal. In thematic clustering, the aim is to cluster texts based on thematic similarity between them, namely grouping together texts that discuss the same theme.
Thematic clustering of sentences is important for various use cases. For example, in multi-document summarization, one often extracts sentences from multiple documents that should be organized into meaningful sections and paragraphs. Similarly, within the emerging field of computational argumentation, arguments may be spread across a wide set of articles and require thematic organization to generate a compelling argumentative narrative.
Evaluation of thematic clustering methods requires a ground truth dataset of sentence clustering. Unfortunately, sentence clustering is considered a very difficult task for humans. As a result, there is no standard human annotated sentence clustering dataset.
In the “Thematic Clustering of Sentences” dataset, sentences are annotated with their thematic clusters, enabling the evaluation of thematic clustering methods. The dataset was generated automatically by leveraging the partition of Wikipedia articles into sections, under the assumption that the section structure of a Wikipedia article can serve as ground truth for the thematic clustering of its sentences. Details on how the dataset was generated can be found in the article.
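The generation idea described above can be sketched as follows: each section of an article becomes one ground-truth cluster containing its sentences. The article content below is a stand-in, not data from the released dataset.

```python
# Stand-in for a sectioned Wikipedia article: section title -> sentences.
article = {
    "History":   ["The city was founded in 1850.",
                  "It grew rapidly after the railway arrived."],
    "Geography": ["The city lies on a river delta.",
                  "Summers are hot and dry."],
}

def sections_to_clusters(article):
    """Map each sentence to a cluster id derived from its section."""
    gold = {}
    for cluster_id, (section, sentences) in enumerate(article.items()):
        for sentence in sentences:
            gold[sentence] = cluster_id
    return gold

gold = sections_to_clusters(article)
# Sentences from the same section share a cluster id; others do not.
assert gold["The city was founded in 1850."] == gold["It grew rapidly after the railway arrived."]
assert gold["The city was founded in 1850."] != gold["Summers are hot and dry."]
```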
IBM Unraveling Language Patterns is an algorithm for automatically extracting patterns that characterize subtle linguistic phenomena. To that end, IBM Unraveling Language Patterns augments each term of input text with multiple layers of linguistic information. These different facets of the text terms are systematically combined to reveal rich patterns.
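The layered-representation idea can be illustrated with a minimal sketch: each token carries several linguistic facets (here just the surface form and a coarse part-of-speech tag; the real system uses richer layers), and candidate patterns are generated by combining one facet per token.

```python
from itertools import product

# Each token is augmented with multiple facets (illustrative layers).
tokens = [
    {"surface": "under", "pos": "PREP"},
    {"surface": "fire",  "pos": "NOUN"},
]

def candidate_patterns(tokens, layers=("surface", "pos")):
    """Enumerate patterns by choosing one facet per token."""
    facets = [[tok[layer] for layer in layers] for tok in tokens]
    return [" ".join(combo) for combo in product(*facets)]

print(candidate_patterns(tokens))
# ['under fire', 'under NOUN', 'PREP fire', 'PREP NOUN']
```

Mixed patterns such as “under NOUN” are the kind of abstraction that lets a system generalize from a single idiom to a family of related expressions.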