KNEXT download site
KNEXT factoid browsing site: http://www.cs.rochester.edu/research/knext/browse/
Participants:
Len Schubert
Department of Computer Science
Room 733, Computer Studies Building
University of Rochester
Benjamin Van Durme
Department of Computer Science
University of Rochester
Web home page: http://www.cs.rochester.edu/~vandurme/
Jonathan Gordon
Department of Computer Science
University of Rochester
Web home page: http://www.cs.rochester.edu/~jgordon/
Elif Eyigöz
Department of Computer Science
University of Rochester
Web home page: http://www.cs.rochester.edu/~eyigoz/
Karl Jang Sun Lee
Department of Computer Science
University of Rochester
Web home page: http://www.csug.rochester.edu/users/ugrads/jlee164/
Ting Qian
Department of Brain and Cognitive Sciences
University of Rochester
Past participants:
Greg Carlson
Department of Linguistics
University of Rochester
Web home page: http://www.ling.rochester.edu/people/carlson/carlson.html
Henry Kyburg
Departments of Computer Science and Philosophy
University of Rochester
Web home page: http://www.cs.rochester.edu/dept/news/kyburg_obituary.shtml
Alok Kothari
Department of Computer Science
University of Rochester (visiting from IIT Kharagpur)
Jivko Sinapov
Department of Computer Science
University of Iowa
Saurabh Deshpande
Department of Computer Science
University of Rochester
Web home page: http://www.cs.rochester.edu/~saurabh/
Phil Michalak
Department of Computer Science
University of Rochester
Web home page: http://www.cs.rochester.edu/~michalak/
Matthew Tong
Department of Computer Science
University of California, San Diego
Web home page: http://www.cs.ucsd.edu/csepeople/graduatestudenthomepages.html
David Ahn
Now at Powerset; formerly at
Informatics Institute
University of Amsterdam
Kruislaan 403
1098 SJ Amsterdam
The Netherlands
Web home page: http://staff.science.uva.nl/~ahn/
Aaron Kaplan
Xerox Research Centre Europe (XRCE)
6, chemin de Maupertuis
38240 Meylan, France
Web home page: http://www.xrce.xerox.com/people/browse_staff.php
Summary
We think that there is a largely untapped source of general knowledge in texts, lying at a level beneath the explicit assertional content. This knowledge consists of relationships implied to be possible in the world, or, under certain conditions, implied to be normal or commonplace in the world. For instance, the sentence "He entered the house through its open door" suggests that it is possible for a person (or at least a male) to enter a house, that houses have doors, that doors can be open, etc. The goal of the present work is to derive such general world knowledge (initially, from Penn Treebank corpora, subsequently from statistically parsed large text corpora such as the British National Corpus and web documents).
Our initiative differs from standard knowledge extraction work in both its aims and its methodology. We are attempting to derive a broad range of general relationships from texts, rather than some predetermined specific kinds of facts; and we are using general phrase structure coupled with compositional interpretive rules to obtain general propositional information, rather than employing specialized extraction patterns targeted at specific relationships. Our long-range goal is to use the derived knowledge as part of a KB supporting language understanding and commonsense reasoning in a self-aware, self-motivated communicative agent.
An example of a complete and unedited output from KNEXT for a sentence from the Brown corpus is the following (omitting the input bracketing):
(BLANCHE KNEW 0 SOMETHING MUST BE CAUSING STANLEY 'S NEW , STRANGE BEHAVIOR BUT SHE NEVER ONCE CONNECTED IT WITH KITTI WALKER .)

OUTPUT (IN ENGLISH, FOLLOWED BY UNDERLYING LOGICAL FORMS):

A FEMALE-INDIVIDUAL MAY KNOW A PROPOSITION.
SOMETHING MAY CAUSE A BEHAVIOR.
A MALE-INDIVIDUAL MAY HAVE A BEHAVIOR.
A BEHAVIOR CAN BE NEW.
A BEHAVIOR CAN BE STRANGE.
A FEMALE-INDIVIDUAL MAY CONNECT A THING-REFERRED-TO WITH A FEMALE-INDIVIDUAL.

((:I (:Q DET FEMALE-INDIVIDUAL) KNOW.V (:Q DET PROPOS))
 (:I (:F K SOMETHING.N) CAUSE.V (:Q THE BEHAVIOR.N))
 (:I (:Q DET MALE*.N) HAVE.V (:Q DET BEHAVIOR.N))
 (:I (:Q DET BEHAVIOR.N) NEW.A)
 (:I (:Q DET BEHAVIOR.N) STRANGE.A)
 (:I (:Q DET FEMALE*.N) CONNECT.V (:Q DET THING-REFERRED-TO.N) (:P WITH.P (:Q DET FEMALE-INDIVIDUAL*.N))))
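To give a concrete feel for this kind of processing, the following is a minimal sketch in Python of the underlying idea (it is not the actual KNEXT implementation): traverse a Treebank-style bracketing, abstract the noun-phrase arguments of the verb to generic types, and emit a possibilistic factoid. The tree encoding, head finding, type map, and lemma map below are simplifying assumptions made only for illustration.

# Hedged sketch (not KNEXT itself): abstract the NP arguments of a verb to
# generic types and emit a "may"-claim.  All lexical resources here are toy
# placeholders.

TYPE_MAP = {"blanche": "FEMALE-INDIVIDUAL", "stanley": "MALE-INDIVIDUAL",
            "something": "THING"}          # hypothetical name/pronoun abstraction
LEMMAS = {"knew": "know"}                  # toy lemmatizer

def head_noun(np):
    """Naive head finder: the last word-leaf of the NP."""
    _, children = np
    words = [c for c in children if isinstance(c, str)]
    return words[-1].lower()

def abstract(word):
    """Map a head word to an abstracted type (the word itself by default)."""
    return TYPE_MAP.get(word, word.upper())

def extract_factoid(tree):
    """Build one factoid from a toy (S (NP ...) (VP verb (NP ...))) tree."""
    _, (subj_np, vp) = tree
    verb, obj_np = vp[1][0], vp[1][1]
    verb = LEMMAS.get(verb, verb).upper()
    subj, obj = abstract(head_noun(subj_np)), abstract(head_noun(obj_np))
    return "A %s MAY %s A %s." % (subj, verb, obj)

tree = ("S", [("NP", ["Blanche"]),
              ("VP", ["knew", ("NP", ["something"])])])
print(extract_factoid(tree))   # -> A FEMALE-INDIVIDUAL MAY KNOW A THING.

The real system, of course, handles full Treebank parse structures and produces the episodic logical forms shown above rather than English strings alone.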
In this way, tens of thousands of propositions have been extracted from the Brown corpus (a 1,000,000-word segment of the Penn Treebank), and of these, approximately two-thirds are judged by human judges to be "reasonable claims about the world" (the rest being judged too vague, ambiguous, incomplete, or otherwise faulty to count as reasonable claims). For the much larger (200,000,000-word) British National Corpus (BNC), we have extracted several million propositions. These are planned to become browsable at the website referenced above (http://www.cs.rochester.edu/research/knext/browse/), which is currently restricted to the Brown corpus. We have also been abstracting strengthened propositions from sets of related extracted propositions, and from individual factoids (such as that a tree may grow) whose constituents suggest that they probably express characterizing properties, true of an entire class rather than just occasional instances (a rough illustration of this strengthening step follows this paragraph). We are also working on new versions of KNEXT based on linguistically better-founded, more informative parses and on more detailed logical interpretations of the parses obtained. In particular, one goal is to extract causal relations from sentences involving adverbial modifiers, as in "The wounded man died because of blood loss" or "Applying the defibrillator, the paramedic revived the heart attack victim".
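The following is a hedged sketch of the strengthening idea mentioned above, not the project's actual abstraction procedure: factoids that recur across many sentences are promoted from mere "may"-claims to tentative characterizing claims. The counting threshold and the rewording are illustrative assumptions.

from collections import Counter

def strengthen(factoids, threshold=25):
    """Promote factoids observed at least `threshold` times from a 'MAY'
    (mere possibility) claim to a tentative characterizing claim.
    Threshold and wording are illustrative assumptions only."""
    out = []
    for claim, n in Counter(factoids).items():
        if n >= threshold and " MAY " in claim:
            out.append(claim.replace(" MAY ", " WILL TYPICALLY ", 1))
    return out

corpus_factoids = ["A TREE MAY GROW."] * 40 + ["A TREE MAY EXPLODE."] * 2
print(strengthen(corpus_factoids))   # -> ['A TREE WILL TYPICALLY GROW.']

In practice the abstraction also draws on the constituents of individual factoids (as in the "a tree may grow" example), not just on raw repetition counts.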
A related effort is concerned with the development of a large lexical semantics knowledge base, by manual and automated means, as another essential component of a general knowledge base for language understanding and commonsense inference, and as a tool for disambiguating propositions extracted by KNEXT. For instance, we would like to distinguish the different senses of HAVE in propositions such as "A person may have an arm", "A person may have an accident", and "A person may have a car". The methods we are currently exploring include the use of the "upper ontology" formed by the top three or four levels of the WordNet taxonomy of noun senses. We have also investigated the use of Google queries to determine semantic properties of lexical nominals, such as whether they are mass or count nouns ("gold" vs. "ring"), whether they typically refer to parts or wholes ("steering-wheel" vs. "car"), and whether they refer to events or non-events ("accident" vs. "president"). For example, the numerous Google hits for "much gold" (418,600) and the far fewer hits for "much ring" (8,580), when normalized by the overall occurrence frequencies of "gold" and "ring", suggest that "gold" is a mass noun while "ring" is a count noun.
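The sketch below shows the kind of normalized comparison involved. The "much gold" and "much ring" hit counts are the ones cited above, while the overall frequencies assumed for "gold" and "ring" are hypothetical placeholders chosen purely for illustration.

def mass_noun_score(much_hits, noun_hits):
    """Fraction of a noun's occurrences that appear in the pattern 'much <noun>'."""
    return much_hits / noun_hits

# "much X" counts are from the text; the totals for "gold" and "ring" are assumed.
gold_score = mass_noun_score(418_600, noun_hits=50_000_000)   # ~8.4e-3 (assumed total)
ring_score = mass_noun_score(8_580, noun_hits=30_000_000)     # ~2.9e-4 (assumed total)

# A much higher normalized score suggests mass-noun behavior.
print("gold patterns like a mass noun" if gold_score > 10 * ring_score
      else "inconclusive")

The same query-and-normalize pattern applies to the part/whole and event/non-event distinctions, with different probe phrases in place of "much".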
In past work, we also investigated the potential usefulness of the WordNet taxonomy as a direct source of logical knowledge. In particular, we attempted to determine how reliably hyponym-hypernym links between noun synsets can be viewed as providing subtype relations between predicates (for instance, that a marimba is a percussion instrument, or that a scythe is an edge tool), and how reliably hyponyms of the same synset can be viewed as mutually exclusive (for instance, that no percussion instrument is both a marimba and a triangle, or that no edge tool is both a scythe and a shear). We found that roughly two out of three hypernym links for nominals correspond to a true subtype relation, and that a slightly higher percentage of "sibling" hyponyms are truly exclusive. After noting some apparent reasons for the failure of these properties, we attempted to improve the precision of the extraction process, but found at the time that the causes of error were too diverse to allow much improvement by any automated means. Our new work on lexical semantics (above) suggests, however, that better lexical semantic feature annotations (especially a reliable mass/count distinction and distinctions between deverbal and deadjectival nominals) will allow identification of many potentially erroneous supertype-subtype relations. Even so, a significant residual error rate for such extracted information is probably unavoidable. Hence any practical application making use of this information would have to be error-tolerant, perhaps using probabilistic weights for subtype and exclusion relations.
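For readers who want to reproduce the raw material of such an evaluation, the sketch below (using NLTK's WordNet interface, and assuming the WordNet data have been downloaded) enumerates the two kinds of candidate statements whose reliability was judged; the sampling and human-judgment steps themselves are not shown.

from nltk.corpus import wordnet as wn   # requires: nltk.download('wordnet')

def subtype_candidates(word):
    """Candidate 'every X is a Y' statements from hypernym links."""
    for s in wn.synsets(word, pos=wn.NOUN):
        for h in s.hypernyms():
            yield "Every %s is a %s." % (s.lemma_names()[0], h.lemma_names()[0])

def exclusion_candidates(word):
    """Candidate 'no X is both a Y and a Z' statements from sibling hyponyms."""
    for s in wn.synsets(word, pos=wn.NOUN):
        kids = [h.lemma_names()[0] for h in s.hyponyms()]
        for i, a in enumerate(kids):
            for b in kids[i + 1:]:
                yield "No %s is both a %s and a %s." % (s.lemma_names()[0], a, b)

print(next(subtype_candidates("marimba")))
# e.g. 'Every marimba is a percussion_instrument.'
print(next(exclusion_candidates("edge_tool")))
# e.g. 'No edge_tool is both a scythe and a shear.' (exact pair depends on WordNet)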
The fact that knowledge extracted from texts or from lexicons is bound to remain somewhat "risky" points to the need for inference methods that make effective use of such knowledge. A number of our theoretical investigations are aimed in that direction. These include the development of a new algebraic characterization of probabilities in Bayesian networks, as a step towards a quantified logic allowing Bayes network-like causal inference, and the development of an evidence-based nonmonotonic logic of risky knowledge.
Finally, another aspect of the work concerns the semantics and pragmatics of generic sentences, such as "Elementary school children in Rochester are usually bused to school". Generic sentences express general facts, and it is just such facts that we are attempting to derive from texts. A key problem is that of filling in implicit information about the situations, events, or entities quantified over. For example, the preceding sentence seems to quantify (via the adverbial quantifier "usually") over situations in which elementary school children in Rochester are going to school. We have developed theories (and some preliminary algorithms) for fleshing out the descriptions of such situations by accommodating the presuppositions of the matrix clause.
Acknowledgement
The work on this project was previously supported by the National Science Foundation under grants IIS-0082928, "ITR: Mining Text for General World Knowledge", 9/1/2000 - 8/31/2003, and IIS-0328849, "IIS: Deriving General World Knowledge from Texts by Abstraction of Logical Forms", 9/1/2003 - 8/31/2006, and is currently supported by NSF grant IIS-0535105, "Knowledge Representation and Reasoning Mechanisms for Explicitly Self-Aware Agents", 7/1/2006 - 6/30/2009.
Some relevant publications