Nutcracker is a system for recognising textual entailment, developed by Johan Bos. It is distributed with the C&C tools and is essentially a wrapper around the CCG parser, Boxer, WordNet, and an external theorem prover and finite model builder.
Nutcracker is distributed with the C&C tools, currently only with the SVN development version. You first need to install the C&C tools (and download the models) and install Boxer; see the Installation pages. Building Nutcracker and the tokeniser (which both require SWI-Prolog) is done with the following commands (this also downloads WordNet):
make bin/tokkie
make bin/nc
Nutcracker employs a FOL theorem prover and a FOL model builder. The current version of Nutcracker comes with interfaces for the theorem prover Bliksem (by Hans de Nivelle) and the model builders Mace (by Bill McCune) and Paradox (by Koen Claessen). The binaries of these external programs need to go in the ext/bin directory.
The distribution makefile attempts to download and install Bliksem, Mace and Paradox (versions 1.0, 2.3 and 3.0) via the following commands (this currently only works reliably on Linux platforms):
make ext/bin/bliksem
make ext/bin/mace
make ext/bin/paradox1
make ext/bin/paradox2
make ext/bin/paradox3
If this fails, or if you are not working on a Linux platform, you can try to download and install the theorem prover and model builders manually. In that case, follow the links for the provers and model builders given above for instructions, and put the binaries in the directory ext/bin.
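As a sketch, a manual installation amounts to copying the binary into place; the source path below is a placeholder for wherever you built or downloaded the prover, not an official location:

```shell
# Create the directory Nutcracker searches for external provers and model builders.
mkdir -p ext/bin

# Copy in the binary you built or downloaded (placeholder path) and make it executable:
# cp "$HOME/downloads/bliksem" ext/bin/bliksem
# chmod +x ext/bin/bliksem
```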
Create a directory, say working/rte, containing two files named t and h (following the RTE terminology introduced at the PASCAL challenges). The file t should contain a text, and the file h a hypothesis. Then run Nutcracker with:
bin/nc --dir working/rte
Nutcracker will report a prediction on the relationship between t and h: entailed (t entails h, in other words, h contains no new information), informative (t does not entail h), or inconsistent (t and h are inconsistent).
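Putting the steps above together, a minimal session looks like the following; the example sentences are illustrative, and the final command is guarded in case the C&C tools have not been built yet:

```shell
# Create a working directory with a text (t) and a hypothesis (h).
mkdir -p working/rte
echo "A man is riding a bicycle in the park." > working/rte/t
echo "A man is riding a bicycle." > working/rte/h

# Run Nutcracker on it (only if bin/nc has been built).
[ -x bin/nc ] && bin/nc --dir working/rte || echo "bin/nc not built yet"

# The prediction (entailed / informative / inconsistent) is written to
# working/rte along with the DRSs, models, and background knowledge.
```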
Nutcracker writes all output to the specified directory: the DRSs, the models, the background knowledge, and the prediction. The following options can be used when running Nutcracker:
Searches directory <dir> and subdirectories thereof for files named t and h. If a directory does not contain these files, a warning is given.
Selects a theorem prover <arg>. Possible values: bliksem, otter, vampire. Default: bliksem. Best results are obtained with the Vampire theorem prover, but it is hard to obtain.
Selects a model builder <arg>. Possible values: mace, paradox1, paradox2, paradox3. Default: mace. The model builder Mace (in fact Mace2) generally produces minimal models, but the Paradox model builders are usually faster. Some problems are handled better by Paradox 1.0 than by Paradox 2.3.
This is the maximum size of the domain for which a model is constructed. Model builders attempt to construct models iteratively, increasing the domain size until a model is found.
This is the time (in seconds) given to the inference engines to find a proof or a model. This only works if the program CPULimitedRun, distributed with the Bliksem theorem prover, is installed in ext/bin.
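The iterative search over domain sizes mentioned above can be sketched as follows; has_model is a hypothetical stub that stands in for the model builder's satisfiability check, "finding" a model once the domain is large enough in this toy setup:

```shell
# Hypothetical stub: pretend a model exists once the domain size reaches 3.
has_model() { [ "$1" -ge 3 ]; }

# Try domain sizes 1, 2, ... up to the maximum, stopping at the first
# size that admits a model -- the strategy described in the text.
search_model() {
  max="$1"
  size=1
  while [ "$size" -le "$max" ]; do
    if has_model "$size"; then
      echo "model found at domain size $size"
      return 0
    fi
    size=$((size + 1))
  done
  echo "no model up to domain size $max"
  return 1
}

search_model 10   # prints: model found at domain size 3
```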
How Nutcracker works
Nutcracker employs the CCG parser and Boxer to produce a DRS for the text (T) and a DRS for the text together with the hypothesis (H). The latter ensures correct dependencies between anaphoric expressions in H referring to antecedents in T. Background knowledge (BK) is created on the basis of the information found in T and H, and computed via WordNet relations. The DRSs for T and for T&H, as well as the BK, are then translated to first-order logic (FOL). An external theorem prover and model builder for first-order logic are used to find out whether (T&BK -> T&H) is a theorem. If a proof is found, Nutcracker predicts that T entails H. Furthermore, it is tested whether -(T&BK&H) is a theorem. If a proof is found, Nutcracker predicts that T and H are inconsistent. Otherwise, Nutcracker predicts that H is informative with respect to T.
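The three-way decision just described can be summarised in a small sketch; prove here is a hypothetical stub standing in for a call to the external theorem prover, and the formula strings are placeholders:

```shell
# Hypothetical stub for the theorem prover: "succeeds" (exit 0) when the
# formula given as $1 is marked provable in this toy setup.
prove() {
  case "$1" in *PROVABLE*) return 0 ;; *) return 1 ;; esac
}

# Nutcracker's decision logic (sketch):
# 1. If (T & BK) -> (T & H) is a theorem, predict "entailed".
# 2. Else if -(T & BK & H) is a theorem, predict "inconsistent".
# 3. Otherwise predict "informative".
classify() {
  entailment_formula="$1"
  inconsistency_formula="$2"
  if prove "$entailment_formula"; then
    echo entailed
  elif prove "$inconsistency_formula"; then
    echo inconsistent
  else
    echo informative
  fi
}

classify "PROVABLE" "x"   # prints: entailed
classify "x" "PROVABLE"   # prints: inconsistent
classify "x" "y"          # prints: informative
```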
- Johan Bos & Katja Markert (2006): When logical inference helps determining textual entailment (and when it doesn't). In: Pascal, Proceedings of the Second Challenge Workshop, Recognizing Textual Entailment. PDF
- Johan Bos & Katja Markert (2006): Recognising Textual Entailment with Robust Logical Inference. In: J. Quinonero et al. (Eds.): MLCW 2005, LNAI 3944. Pages 404-426. PDF
- Johan Bos & Katja Markert (2005): Recognising Textual Entailment with Logical Inference. Proceedings of EMNLP 2005. PDF
- Johan Bos & Katja Markert (2005): Combining Shallow and Deep NLP Methods for Recognizing Textual Entailment. In: Pascal, Proceedings of the First Challenge Workshop, Recognizing Textual Entailment. Pages 65-68. PDF
- Marcello Balduccini, Chitta Baral & Yuliya Lierler (2007): Knowledge Representation and Question Answering. Chapter 21 of Van Harmelen, Lifschitz & Porter (Eds.): Handbook of Knowledge Representation. PDF