The parser is the first major part of the system. It is a bottom-up chart parser, as detailed in chapter 3 of the first edition of Allen's Natural Language Understanding, with added selectional restrictions and some transformational capabilities, loosely based on the approaches in chapter 4. The architecture of the parser is shown below.
In the diagram above, the direct lines show actual output of Java methods. The boxes loosely group related structures. Dashed lines represent 'inherits', except the line that shows the relationship between Prolog and the other components, which means 'consults'. Dotted lines mean 'reads/writes file'.
The parser is a complex program which is written in two and a half languages (Java, Prolog, and the simple markup of the files themselves), so read all the files carefully before making any changes.
The first thing that happens to a string going into the parser is preprocessing. The preprocessor class is an inner class of the parser (which is probably poor design-- go ahead and change it). The preprocessor is responsible for, at minimum, the following:
The preprocessor carries out some tasks (like case alteration) with calls to String, but mostly it is a simple set of regular expressions, each of which is tested in sequence, once and only once, on every line of input. In other words, defining two preprocessor lines that alter the same thing is not recommended, and if you do define such lines, order is important. Preprocessor rules are defined in a file.
Currently, the regular expressions are not compiled. I will remedy this (and alter this notice in the docs, if I remember) when I have time, but if I don't, please take the five minutes to do it for me. It will be useful, I believe.
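If you (or I) do get around to it, the change amounts to something like the sketch below. PreprocessorRule and PreprocessorSketch are made-up names for illustration, and the real inner class reads its patterns and replacements from the rule file rather than from addRule calls; the point is simply that Pattern.compile happens once, at load time, rather than on every line.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.regex.Pattern;

    // Illustration only: compile each pattern once when the rule file is read,
    // instead of re-parsing the regex on every line of input.
    class PreprocessorRule {
        private final Pattern pattern;      // compiled once, reused on every line
        private final String replacement;

        PreprocessorRule(String regex, String replacement) {
            this.pattern = Pattern.compile(regex);
            this.replacement = replacement;
        }

        // Apply this rule exactly once to a line of input.
        String apply(String line) {
            return pattern.matcher(line).replaceAll(replacement);
        }
    }

    class PreprocessorSketch {
        private final List<PreprocessorRule> rules = new ArrayList<>();

        // The real preprocessor reads these from the rule file.
        void addRule(String regex, String replacement) {
            rules.add(new PreprocessorRule(regex, replacement));
        }

        // Each rule is tried in sequence, once and only once, on the line.
        String preprocess(String line) {
            String result = line.toLowerCase();   // e.g. case alteration via String calls
            for (PreprocessorRule rule : rules) {
                result = rule.apply(result);
            }
            return result;
        }
    }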
X-bar theory is a phrase structure theory that seems to be popular among syntacticians. The discussion below loosely follows Andrew Carnie's Syntax: A Generative Introduction, with added advice from Professor Runner of the linguistics department.
The basic generalization of X-bar theory is that natural language grammars are built from three major rules-- a specifier rule, a complement rule and an adjunct rule-- written with variables (X, Y and so on) standing for arbitrary syntactic categories, and with parenthesized elements optional:
XP -> (YP) X'
X' -> X (YP) (or, with the complement on the left, X' -> (YP) X)
X' -> X' (YP) (or X' -> (YP) X')
One possible instantiation of the specifier rule would be NP -> N' (removing the optional YP), and another would be NP -> NP N' (replacing Y with N), and yet another would be NP -> AP N' (replacing Y with A). For historical reasons, X' is referred to as X-bar (it used to be represented as a line above the X, but this is difficult to type), hence the name of the theory.
For our purposes, not all the rules are required. English, for instance, has only one complement rule (normally-- see further work) in which the complement is on the right and the head, the X element at the base of an XP, is on the left. We also don't need versions of the adjunct rule which have no adjunct, that is, the rule X' -> X'. In fact, defining such rules would break the parser.
In addition, the specifier rules are somewhat different. While standard X-bar grammar allows any syntactic type at all to fill a specifier, our rule is restricted:
XP -> XSpec X' / X'.
That is, if a category has a specifier type defined via some other rule, it can have a specifier of that type, but otherwise, no phrases have specifiers.
The only two phrases with specifiers in our grammar are NPs (noun phrases) and AuxPs (Carnie calls these TPs; they are sentences). NPs have determiners as specifiers, though this approach is not accepted in modern syntax (see Carnie, chapter 6). AuxPs have NPs as specifiers; specifically the subject of a sentence is in the specifier node above the auxiliary.
The slight differences between my approach and Carnie's on simple sentences are shown below. Note that, where there is no overt auxiliary (or tense) word like 'would' or perhaps 'am', the Aux and T nodes have nothing in them. In Carnie's representation, this 'nothing' is the unpronounced word 'null [present tense]'. In mine, the node doesn't exist.
Writing a grammar for this system is simple enough. The system has three hard-coded letters it considers 'replaceable' with the names of syntactic categories: X, Y and Z. (This would be a good thing to put in the file format.) The first line of the file declares which types you would like them to be replaced with. For instance, to have AuxPs, VPs, NPs, APs and PPs, you write Aux V N A P
Then you can start writing rules. These contain the following parts:
syntactic category @ parts @ binding list @ selectional code @ interpretation code (newline)
It's important to remember that the only newline in a rule must come at the very end, directly after the interpretation code. This makes the file much easier to read using a BufferedReader, though a bit messy for humans.
For instance, an example rule is:
<X'> @ X <YP> @ H X @ assert( ( type(&, Var):- type(H, Var))). compOf(H, X). @ comp(H, X). <EQ & H>
Don't worry too much about the interpretation code (or see the interpreter documents). We want to concentrate on the first four parts. The @ symbol, by the way, is just a convenient separator that isn't meaningful in Java or Prolog.
The syntactic category of the rule tells what the type of the phrases produced by the rule will be. So an <X'> rule produces <N'>, <V'> and so forth.
The parts tell what constituents can be added to the phrase, and in what order. This rule wants one constituent in the X category. Since X doesn't have angle brackets, by convention it's a single word. It is also the same type as the phrase as a whole, since X gets replaced the same way every time. So in an NP, this is an N. The next part is a <YP>, some random phrase of any type. In other words, this is a complement rule.
The binding list tells the parser how to set up the code for checking and interpreting the phrase, by giving each constituent in the phrase a name. This name can be anything, but it shouldn't appear in the code on the right unless you want it to be replaced by the name of the phrase filling the corresponding slot. Here we have H, standing for Head, and X, standing for Unknown.
When the parser constructs a phrase with this rule, it will make a copy of the selectional code (and the interpretation code) and replace the H with the name of the constituent filling the H slot. Each constituent has a unique name. Words have names corresponding to their text plus the index of their meaning, so the first definition of 'robot' has the name 'robot0'. Phrases have the name 'phraseXXX' where XXX counts how many phrases have existed before. So if this rule was instantiated as a VP rule, making a constituent 'go to the store', the bindings would be H = go0, X = phraseFOO, where phraseFOO was the name of 'to the store'.
There is also one special replacement in the code sections: &. & stands for the name of the current phrase itself, which is also a phraseXXX name.
Rule parsing is handled in Rule.java.
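Concretely, the substitution Rule.java and the Phrase constructor perform amounts to something like the sketch below. The BindingSketch class, the regex-based replacement and the names phrase17 and phrase18 are all invented for illustration; the real code may substitute token by token rather than with regular expressions.

    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Hypothetical illustration of the binding step.
    class BindingSketch {

        // Replace each binding name (H, X, ...) and the special & symbol
        // with the names of the constituents filling those slots.
        static String bind(String codeTemplate, Map<String, String> bindings, String phraseName) {
            String code = codeTemplate;
            for (Map.Entry<String, String> e : bindings.entrySet()) {
                // \b keeps us from rewriting letters inside longer identifiers.
                code = code.replaceAll("\\b" + Pattern.quote(e.getKey()) + "\\b",
                                       Matcher.quoteReplacement(e.getValue()));
            }
            return code.replace("&", phraseName);
        }

        public static void main(String[] args) {
            Map<String, String> bindings = new LinkedHashMap<>();
            bindings.put("H", "go0");        // head word 'go', first sense
            bindings.put("X", "phrase17");   // hypothetical name of 'to the store'
            String template = "assert( ( type(&, Var):- type(H, Var))). compOf(H, X).";
            System.out.println(bind(template, bindings, "phrase18"));
            // -> assert( ( type(phrase18, Var):- type(go0, Var))). compOf(go0, phrase17).
        }
    }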
After the grammar file, the lexicon files should look fairly familiar. A line in the lexicon file has the form:
(! @) word @ category @ selectional code @ interpretation code (newline)
Again, you only get to put a newline right at the end. And again, the interpretation code is explained on the interpreter page.
So an example lex line looks like:
speaker @ N @ type(&, speaker). @ predicate(&, [speaker(X)], []).
The word is just what it looks like-- a direct match to the English text of the token, as returned from the preprocessor. It matches 'speaker', and only 'speaker'-- not speakers, speaking, Speaker, sp3ak3r or whatever.
The category is the category the word fits in, and should string-match something in the grammar file. Anything in the grammar file is acceptable-- a word can be an <N'>, or an <NSpec> or whatever you want. However, the convention used here is that only phrases get angle brackets-- not single words. Words that function as phrases are listed as, e.g., NP, and there is another grammar rule, <NP> @ NP @ ..., that performs the conversion.
The selectional code is treated a lot like the grammar rule selectional code, but there is no binding list; the only binding performed is on &, which is the name of the word.
Words can be multiply defined, either by making two identical words in different categories, or with different code attached, or both. When scanning the file, the Lexicon class hashes all the words into a table of Word objects, which are responsible for returning the LexItem objects that represent them. That is, a Word is the full dictionary entry for, e.g., walk, containing:
Walk, 1: N, a short journey on foot. 2: V, to go on foot.
There are two LexItems named walk0 and walk1, each representing one such sense.
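Very roughly, the relationship between the Lexicon, Word and LexItem classes is the one sketched below. The fields and method names are guesses made up for illustration-- check the actual source for the real signatures.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Rough sketch only; the real classes differ in detail.
    class LexItem {
        final String name;       // e.g. "walk0", "walk1"
        final String category;   // e.g. "N" or "V"
        final String selectionalCode;

        LexItem(String name, String category, String selectionalCode) {
            this.name = name;
            this.category = category;
            this.selectionalCode = selectionalCode;
        }
    }

    // A Word is the whole dictionary entry for one spelling, e.g. "walk".
    class Word {
        private final String text;
        private final List<LexItem> senses = new ArrayList<>();

        Word(String text) { this.text = text; }

        // Each new definition becomes a LexItem named text + sense index.
        void addSense(String category, String selectionalCode) {
            senses.add(new LexItem(text + senses.size(), category, selectionalCode));
        }

        List<LexItem> getLexItems() { return senses; }
    }

    // The Lexicon hashes every word from the lexicon files into a table of Words.
    class LexiconSketch {
        private final Map<String, Word> table = new HashMap<>();

        void define(String text, String category, String selectionalCode) {
            table.computeIfAbsent(text, Word::new).addSense(category, selectionalCode);
        }

        List<LexItem> lookup(String text) {
            Word w = table.get(text);
            return (w == null) ? new ArrayList<>() : w.getLexItems();
        }
    }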
So what does the optional ! @ at the beginning of the line do? The Lexicon class expands words in certain categories morphologically, according to several hard-coded rules. For instance, speaker @ N generates speaker and speakers. These rules are pretty stupid-- they apply only to Ns and Vs, and they stick on endings with no thought to phonology. So if you had say @ V, you'd get things like sayed and sayen. If you wanted to define the correct form, said, you might write said @ V, but then the morph generator would give you morphs of that, too. To avoid this, you use the no-morph code ! @ before your rule. It's not usually important to do this, but in some cases improper morphing might produce a bad definition of a legitimate word, and it always expands the Prolog dictionary needlessly. Note, by the way, that all morphs produced have the same category and definition as their parent word, so if you want a different definition or category for a morph, you must turn off morph generation for the parent, or you'll get both copies, producing ambiguity.
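The expansion behaves roughly like the sketch below. Only the -s, -ed and -en endings mentioned above are shown, and the MorphSketch class is invented for illustration; the real list of endings is hard-coded in the Lexicon class and may well be longer.

    import java.util.ArrayList;
    import java.util.List;

    // Illustrative only: naive suffix sticking, no phonology, Ns and Vs only.
    class MorphSketch {

        static List<String> expand(String word, String category, boolean noMorph) {
            List<String> forms = new ArrayList<>();
            forms.add(word);                 // the word itself is always kept
            if (noMorph) {                   // the "! @" prefix turns expansion off
                return forms;
            }
            if (category.equals("N")) {
                forms.add(word + "s");       // speaker -> speakers
            } else if (category.equals("V")) {
                forms.add(word + "ed");      // say -> sayed (no phonology!)
                forms.add(word + "en");      // say -> sayen
            }
            return forms;
        }

        public static void main(String[] args) {
            System.out.println(expand("speaker", "N", false));  // [speaker, speakers]
            System.out.println(expand("say", "V", false));      // [say, sayed, sayen]
            System.out.println(expand("said", "V", true));      // [said]  (defined with ! @)
        }
    }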
So now the burning question on everyone's mind must be, "How does the parser work?". I urge everyone to get a life, and also read Prof. Allen's book cited above. But here's a short tour, with special attention to the more idiosyncratic features.
The basic algorithm is simple. Read a token from the string. Consult the Lexicon, getting back all the LexItems associated with the word. Push all these LexItems onto a stack, the key list. Pop the first Constituent off the key list, and handle it as follows:
For all rules in the grammar, make a new Arc, and see if the Arc will accept the Constituent (that is, can the rule the Arc is based on use the Constituent as its first part?).
For all existing Arcs, check if they will accept the Constituent (that is, can their rule use the Constituent as their next part?).
When the Constituent is finished cycling through, it is dumped off the key list onto the chart, another list which will eventually contain all the analyses of the sentence and its component parts.
If, during the cycling process, an Arc can accept the Constituent, the Arc is used to make a new Arc, which contains the Constituent as its next part. If the new Arc is incomplete, and needs more parts, it goes to the arc list, where it can audit new Constituents as they appear. If the new Arc is complete, however, it produces a Phrase, which is pushed onto the key list.
When the key list is empty, the parser reads the next token and begins again. If there are no more tokens, the parse is finished and the parser can select a complete parse from the chart.
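Putting those steps together, the algorithm has roughly the shape sketched below. The interface and method names are stand-ins derived from the description above; the real Arc, Constituent and Phrase classes in the parser source have different internals and signatures.

    import java.util.ArrayDeque;
    import java.util.ArrayList;
    import java.util.Deque;
    import java.util.List;

    // All of these names are stand-ins for the real classes in the parser source.
    interface Constituent {}

    interface Arc {
        boolean accepts(Constituent c);   // can my rule use c as its next part?
        Arc extendWith(Constituent c);    // returns a NEW arc containing c
        boolean isComplete();
        Constituent toPhrase();           // a complete arc yields a Phrase
    }

    interface Grammar {
        List<Arc> freshArcs();            // one empty arc per grammar rule
    }

    interface Lexicon {
        List<Constituent> lookup(String token);   // every LexItem for the word
    }

    class ChartParserSketch {
        private final Grammar grammar;
        private final Lexicon lexicon;
        private final List<Arc> arcList = new ArrayList<>();
        private final List<Constituent> chart = new ArrayList<>();

        ChartParserSketch(Grammar grammar, Lexicon lexicon) {
            this.grammar = grammar;
            this.lexicon = lexicon;
        }

        List<Constituent> parse(List<String> tokens) {
            Deque<Constituent> keyList = new ArrayDeque<>();
            for (String token : tokens) {
                lexicon.lookup(token).forEach(keyList::push);   // stack of keys
                while (!keyList.isEmpty()) {
                    Constituent key = keyList.pop();
                    // 1. try to start a new arc from every grammar rule
                    for (Arc fresh : grammar.freshArcs()) {
                        if (fresh.accepts(key)) advance(fresh, key, keyList);
                    }
                    // 2. try to extend every existing arc
                    for (Arc arc : new ArrayList<>(arcList)) {
                        if (arc.accepts(key)) advance(arc, key, keyList);
                    }
                    chart.add(key);   // done cycling: dump the key onto the chart
                }
            }
            return chart;             // the selector later picks a complete parse
        }

        private void advance(Arc arc, Constituent key, Deque<Constituent> keyList) {
            Arc extended = arc.extendWith(key);
            if (extended.isComplete()) {
                keyList.push(extended.toPhrase());   // complete arcs produce a Phrase
            } else {
                arcList.add(extended);               // incomplete arcs audit later keys
            }
        }
    }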
There are some filters going on, however, that make this process more complex. The first filters are acceptance filters, which are used to decide if a Constituent can be the next part in an Arc. This is permitted if:
The next filtering system is the selectional filter. Linguists distinguish two types of restrictions on phrase structures that X-bar theory does not discuss. The first is subcategorization, which is a word's requirement that its neighbors (and usually, specifically its complements) have various syntactic features. For instance, one can eat a sandwich or dine on a sandwich. Though these two phrases mean about the same thing, it isn't grammatical to * eat on a sandwich or * dine a sandwich, because this violates the subcategorization requirements of eat and dine. (By the way, linguists use the * to indicate ungrammaticality.)
Selectional requirements have to do with the roles various things can assume. An act of going, for instance, requires (usually) two participants, an agent and a place. A grammatical sentence with go need not make any sense-- we can have Santa Claus goes to Hell, say-- but it needs the right kinds of arguments. * I go to two pm. isn't grammatical (under our normal assumptions about the universe, viz, I don't have a time machine), even though both sentences have the word go with a PP[to], because two pm isn't a place. * The room goes to lunch has a similar problem, where the room isn't capable of being an agent (if you don't live in Prof. Murphy's future house project, anyway).
Our program doesn't distinguish between these types of filters-- they're both handled in Prolog. Whenever an Arc completes, it returns a Phrase, which immediately (at construction time) looks up the selectional code associated with it. It makes all the necessary bindings (actually, the binding list is resolved before creation, while & is resolved just after). Then it calls, in sequence, every line (lines in Prolog are terminated with periods) of its selectional code. If the last line has any solutions, the phrase is consistent. (The first lines are just setup--they can have any number of solutions, but apart from side-effects on the database, like assertions, they are ignored.) Otherwise, the phrase sets an inconsistent flag, and the parser throws it away.
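In outline, the check a Phrase performs looks something like the sketch below. PrologEngine and hasSolution are hypothetical stand-ins for whatever Java-Prolog bridge the parser actually uses, and the naive split on periods is only good enough for illustration.

    // PrologEngine is a hypothetical stand-in for the real Java-Prolog bridge.
    interface PrologEngine {
        boolean hasSolution(String goal);   // true if the goal succeeds at least once
    }

    class ConsistencyCheckSketch {

        // selectionalCode has already had its binding list and & resolved.
        // Prolog lines end in periods, so split on them and call each in turn.
        static boolean isConsistent(String selectionalCode, PrologEngine prolog) {
            String[] lines = selectionalCode.trim().split("\\.\\s*");   // naive split
            boolean lastSucceeded = false;
            for (String line : lines) {
                if (line.isEmpty()) continue;
                // earlier lines are setup: only their side effects (assertions) matter
                lastSucceeded = prolog.hasSolution(line + ".");
            }
            return lastSucceeded;   // consistent only if the last line has a solution
        }
    }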
The mainstay of the selectional filtering process is the type predicate. If you look at the lexical entries, you'll see that most words have a type. The few that don't have code listings of none, which means no code at all. Some words have multiple types, but most have only one. Grammar rules tend to assert some things about type as well. The typical assertion is assert( ( type(&, Var):- type(H, Var))). This means: my type is X if, and only if, my head's type is also X.
To make sense of all the type assertions, you'll need to read the lexical-hierarchy file. The first thing you'll see is the predicate htype(X, Y). htype stands for Hierarchical Type. This is because types are arranged in an inheritance hierarchy, and if a word or phrase has a type anywhere in the hierarchy, it inherits all the higher htypes as well.
As you can see, the hierarchy involved is fairly insane. Dashed lines represent upward inheritance, so that locative prepositional phrases are, in some sense, locations (and not vice versa). However, there are no actual loops, which would break everything, since that would allow Prolog to prove certain statements in infinite ways (taking infinite time in the process). The parenthesized parts of the names show the actual words involved, while the regular parts show their prolog names. There is a less complex hierarchy of verbs, not shown here, in the same file.
Grammar rules typically do checks like compOf(H, X) or modOf(H, X), also defined in the lexical-hierarchy file. Each type, or htype, has various permitted comps and mods, and this testing allows the program to determine whether the proposed phrase is consistent both in terms of subcategorization and selectional typing.
Unfortunately, not all sentences fall into the nice structures we define with X-bar theory. Two examples that occur often in the infoserver domain are inverted yes/no questions and wh-questions. The former produces sentences like:
Are you a robot?
while the latter produces:
I know where you live
They also occur together as in:
What are you?
We want to associate these with the deep structures You are a robot, I know you live where and You are what. This will make it easier to interpret them, make our parser more general and make it easier to do selectional checking.
Following an approach outlined in chapter 4 of Allen's book, the parser holds the Constituents we suspect may have moved by placing them on a hold list. The Constituents on the hold list are never taken off, and they keep coming up for audit by new arcs until the end of the parse.
Prolog determines which constituents are held by checking the hold predicate. The hold predicate is asserted in the selectional code of whatever structure is supposed to be held. In some cases, that is the word itself (is is an example), but in others, it is the first phrase above the word (which book). Prolog keeps track of this with a special type, wh, which is asserted on words of this type (they tend, obviously, to begin with wh, though this isn't always the case). The grammar code then checks when creating an XP whether it contains a wh part, and if so, asserts hold on itself.
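One way to picture the hold list, reusing the stand-in Arc and Constituent interfaces from the parsing sketch above: every newly created Arc audits every constituent held so far, and nothing is ever taken off the list. The HoldListSketch class and the exact audit timing are invented for illustration.

    import java.util.ArrayDeque;
    import java.util.ArrayList;
    import java.util.Deque;
    import java.util.List;

    // Reuses the stand-in Arc and Constituent interfaces from the parsing sketch above.
    class HoldListSketch {
        private final List<Constituent> holdList = new ArrayList<>();
        private final List<Arc> arcList = new ArrayList<>();
        private final Deque<Constituent> keyList = new ArrayDeque<>();

        // Prolog (via the hold predicate) decides that this constituent may have moved.
        void hold(Constituent c) {
            holdList.add(c);   // nothing is ever removed from the hold list
        }

        // Every newly created arc also audits every constituent held so far, so a
        // "moved" phrase can fill a slot far from where it appeared in the input.
        void addArc(Arc arc) {
            arcList.add(arc);
            for (Constituent held : holdList) {
                if (arc.accepts(held)) {
                    Arc extended = arc.extendWith(held);
                    if (extended.isComplete()) {
                        keyList.push(extended.toPhrase());  // complete: a new Phrase to audit
                    } else {
                        arcList.add(extended);              // incomplete: waits for more parts
                    }
                }
            }
        }
    }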
Held constituents cannot be checked quite as easily as normal ones, since they can come from strange places in the areas around the Arc. Arcs that begin with held constituents have no beginnings at all, and are called virtual. The one really tricky case is that of Do you know I love robots?, where the parser must determine that do belongs in the first Aux slot, You do know I love robots, and not the second, You know I do love robots. This is important in deciding which phrase is a question. The parser has some hacks to handle this case-- they are easy to find because the functions containing them are deprecated.
The parser finishes with a collection of Constituents on the chart. Some are parses, and some aren't. However, the parser currently chooses only one (this could be changed). The selector does the following:
The algorithm outlined here is exponential. This sucks. Parsing can be done deterministically and completely in N^3 time. Below are some reasons why, aka a list of my mistakes.