Clocksin and Mellish Chapter 9 (e-Reserves).
Prolog's built-in matching and searching make it a natural for implementing a CFG parser.
S → NP VP
sentence(Z) :- append(X,Y,Z),
               noun_phrase(X), verb_phrase(Y).
...
determiner([the]).
noun([cat]).
BUT we know enough about nondeterministic programming in Prolog to
be quite wary of that append(X,Y,Z) goal, which is going to
do a lot of searching. Pure DFS is a weak search method, and
we'd like to add some smarts so obviously dumb search paths aren't
followed. Note here that the append goal itself puts no restriction on
the lengths of NP or VP; only the grammar does, so every possible split
of the input gets proposed and tested.
Our ``heuristic'', or rule of thumb, is that individual grammatical
components will decide how much input to consume. So NP will take
what it needs and leave the rest for VP to work on. This simple idea
leads to
sentence(S0,S) :-
    noun_phrase(S0,S1), verb_phrase(S1,S).
...
verb_phrase(S0,S) :- verb(S0,S).
verb_phrase(S0,S) :- verb(S0,S1),
                     noun_phrase(S1,S).
...
verb([sings | S],S).
In the last line, sings sucks up the first word, leaving
the tail in S, which is returned as the 2nd argument.
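Filling in the elided pieces, a minimal runnable sketch (the toy lexicon here is invented for illustration):

sentence(S0, S) :- noun_phrase(S0, S1), verb_phrase(S1, S).

noun_phrase(S0, S) :- determiner(S0, S1), noun(S1, S).

verb_phrase(S0, S) :- verb(S0, S).
verb_phrase(S0, S) :- verb(S0, S1), noun_phrase(S1, S).

determiner([the | S], S).
noun([man | S], S).
noun([song | S], S).
verb([sings | S], S).

% ?- sentence([the, man, sings], []).             % succeeds
% ?- sentence([the, man, sings, the, song], []).  % succeeds

Each predicate consumes its own words from the front of the first list and hands the remainder back in the second: no append/3, no blind splitting.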
Still, it looks messy...
The DCG (Definite Clause Grammar) notation cleans up the previous
verbosity: a very thin grammatical veneer, or sprinkling of syntactic
sugar, that translates exactly into the clauses we've seen. And (once
again) we have to be aware of that, since we may well have to go down
to that level explicitly.
sentence --> noun_phrase, verb_phrase.
...
verb_phrase --> verb.
verb_phrase --> verb, noun_phrase.
...
verb --> [sings].
The above DCG rules get translated into exactly the (S0,S)-style clauses we've just seen.
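For instance, the first rule and the [sings] rule above expand (modulo variable naming; a sketch of the standard translation) into the clauses we wrote by hand:

sentence(S0, S) :- noun_phrase(S0, S1), verb_phrase(S1, S).
verb([sings | S], S).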
As with the cut, we are dragged into having to know more than we want to!
Even to invoke these rules we have to descend into knowing what
they translate to: the phrase/2 (or phrase/3) predicate is defined
so that
phrase(P,L) is true if the entire list L can be parsed as a
phrase of type P.
?- sentence([the,man,eats,the,apple],[]).
?- phrase(sentence, [the, man, eats, the, apple]).
?- phrase(sentence, [the, man, eats, the, apple], []).
Grammars of natural language need to cope elegantly with transformations and relations that demand at least context sensitivity. Natural language rules are not arbitrary, and may be governed by "linguistic universals".
English has number agreement, gender agreement, tense agreement, some case dependence (who, whom), also constructions like passive where chunks of sentence are moved or transformed (Amy shot Bob, Bob was shot by Amy).
Particular sorts of transformations and relations have been developed to make describing and parsing NLs more efficient. Among these are Augmented and Attribute Grammars, which rely on features of the language to which rules apply (number or gender agreement, for example).
Augmenting the DCG as described here and in Chapter 9 gives it the power of a CSG.
For NLU, a powerful idea (necessary, really) is to link semantics with the syntax: use input features to get semantic clues that help write a better grammar and parser. In programming languages, we link semantics with syntax when we generate code, evaluate expressions, or optimize code.
In English a singular subject needs a singular verb form. Or we can remember constituents not in their ``normal'' position, and so deal with movement, as in ``The apple was eaten by the man''. In an expression grammar the attribute might be the numerical value of the (sub) expression; or it could be a function to generate code for that node in the parse tree.
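As a taste of the attribute idea in the expression setting, here is a sketch of a DCG whose extra argument carries the value of the (sub)expression; the {...} escape for calling plain Prolog goals is explained further below:

expr(V) --> term(T), [+], expr(R), { V is T + R }.
expr(V) --> term(V).
term(N) --> [N], { number(N) }.

% ?- phrase(expr(V), [1, +, 2, +, 3]).
% V = 6.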
Augmenting productions with indices lets us produce the famous
Type 1, "can't do THIS with your PDA" language a^n b^n c^n:
S(n) → A(n)B(n)C(n)
A(1) → a
B(1) → b
C(1) → c
A(n+1) → aA(n)
B(n+1) → bB(n)
C(n+1) → cC(n)
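In DCG terms we can realize this indexed grammar directly, with the index as an extra argument (a minimal sketch, using successor terms s(N) for the count):

abc --> a(N), b(N), c(N).

a(1)    --> [a].
a(s(N)) --> [a], a(N).
b(1)    --> [b].
b(s(N)) --> [b], b(N).
c(1)    --> [c].
c(s(N)) --> [c], c(N).

% ?- phrase(abc, [a, a, b, b, c, c]).   % succeeds: n = 2
% ?- phrase(abc, [a, a, b, c, c]).      % fails: counts disagree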
So far we've only used the two list arguments (S0,S) on functors
like sentence, but we could use more. Just add extra arguments to our
rules and they get pasted in, ahead of the list arguments, as we'd expect:
sentence --> sentence(X).
/* e.g. sentence is singular or plural */
sentence(X) --> noun_phrase(X), verb_phrase(X).
/* NP, VP must agree in number. */
verb_phrase(X) --> verb(X), noun_phrase(Y).
/* eg. The man (sing.) ate the apples (plu.)*/
determiner(_) --> [the].
/* 'the' OK with singular or plural nouns*/
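Filling in a noun_phrase rule and a small lexicon (invented here for illustration), the agreement check is already enough to reject bad number combinations:

noun_phrase(X) --> determiner(X), noun(X).

noun(singular) --> [man].
noun(plural)   --> [men].
noun(plural)   --> [apples].
verb(singular) --> [eats].
verb(plural)   --> [eat].

% ?- phrase(sentence(N), [the, man, eats, the, apples]).
% N = singular.      % subject and verb agree; object's number is free
% ?- phrase(sentence(N), [the, men, eats, the, apples]).
% false.             % plural subject, singular verb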
Add one more extra argument (keeping the argument for plurality, why not) that generates the parse tree. Rather like our expression parser's production functions, which had an argument so they could return a parse tree.
A Prolog tree is a structure: here we might have
sentence(
noun_phrase(
determiner(the),
noun(boy)),
verb_phrase(
verb(flies),
noun_phrase(
determiner(a),
noun(kite))
)
)
So if you find a sequence that's a noun
phrase and has parse tree NP, followed by a sequence that makes a verb
phrase with parse tree VP, then you've found a sequence that makes a
complete
sentence. Thus we can replace the
sentence(X) --> noun_phrase(X), verb_phrase(X).
rule above by
sentence(X, sentence(NP, VP)) -->
    noun_phrase(X, NP), verb_phrase(X, VP).
Is it an issue that this rule looks recursive? If those two sentences are different, how does Prolog know?
sentence(X, sentence(NP, VP)) -->
    noun_phrase(X, NP), verb_phrase(X, VP).
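Put together with a small lexicon (a minimal runnable sketch; the words come from the tree above), the whole thing:

sentence(X, sentence(NP, VP)) -->
    noun_phrase(X, NP), verb_phrase(X, VP).
noun_phrase(X, noun_phrase(D, N)) -->
    determiner(X, D), noun(X, N).
verb_phrase(X, verb_phrase(V, NP)) -->
    verb(X, V), noun_phrase(_, NP).
determiner(_, determiner(the)) --> [the].
determiner(_, determiner(a))   --> [a].
noun(singular, noun(boy))   --> [boy].
noun(singular, noun(kite))  --> [kite].
verb(singular, verb(flies)) --> [flies].

% ?- phrase(sentence(N, T), [the, boy, flies, a, kite]).
% N = singular,
% T = sentence(noun_phrase(determiner(the), noun(boy)),
%              verb_phrase(verb(flies),
%                          noun_phrase(determiner(a), noun(kite)))).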
Clauses read in thru consult, as in
?- [my_rules, other_rules].
are in a database that sticks around. One can
query it several times, as in
?- a(b, X).
Answers are returned and reported on the
terminal thru variables in the query, but EVEN THEY are not
remembered at top level, a fortiori none of the variables bound
during query-answering are remembered -- you could say they are
effectively in a different name space.
?- append([1,2], [3,4], X).
X = [1,2,3,4].

?- var(X).   % is X a free variable now??
true.        % bye-bye X!
So what's the difference between the query
sentence(blahdy, blah)
and the created parse-tree structure
sentence(fooey, blooey)? Why do they look the
same if they ain't?
There IS a difference, and it IS confusing to use the same term for both!! It's the sort of stuff these Prologers seem to do (old punch line) "because they can".
If it were my code, I would have written not
sentence(X, sentence(NP, VP)) -->
    noun_phrase(X, NP), verb_phrase(X, VP).
but
sentence(X, pt_sentence(NP, VP)) -->
    noun_phrase(X, NP), verb_phrase(X, VP).
and thus given the parse-tree nodes a different look than the
rules that created them.
Continuing from a few oheads back, the rules ARE just structures in
their own consultable, different (data base) name space.
Thus they look like trees: a rule A :- b, X could be written
:-(A, (b, X))
A query creates another structure that is used in a theorem-prover
(the main Prolog interpreter) that tries to find the null clause
(as we'll see in the FOPC inference segment) given the negation of the
query and the data base.
Convert a rule or fact (clause) in the data base to an access- or update-able structure with clause/2.
Convert a structure in the form of a clause and insert it into the data base with asserta/1 or assertz/1. And you can retract/1 (remove) data base clauses as well. So it's all rather elegant. In LISP, "everything's a list", and in Prolog, "everything's a structure".
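A quick sketch of these three in action (the likes/2 predicate is invented for the example, assuming a fresh data base):

?- assertz(likes(mary, wine)).   % add a fact at the end of the data base
true.

?- clause(likes(X, Y), Body).    % fetch it back as a structure
X = mary, Y = wine,
Body = true.                     % facts have the body 'true'

?- retract(likes(mary, wine)).   % and remove it again
true.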
Clocksin and Mellish 9.5.
The idea is that, while parsing, we may want rules that don't consume input, which all the --> ones do (every goal in a --> rule gets the two list arguments). To have Prolog code NOT translated by the --> functor, put it in {curly brackets}.
For example, to add a noun with our current code we need:
noun(singular, noun_pt(banana)) --> [banana].
Which translates to
noun(singular, noun_pt(banana), [banana | S], S).
Neater is:
noun(X, noun_pt(N)) --> [N], {is_noun(N, X)}.
...
is_noun(banana, singular).
is_noun(bananas, plural).
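The {} goal is passed through untranslated, so the last rule expands (modulo variable names) to

noun(X, noun_pt(N), [N | S], S) :- is_noun(N, X).

% ?- phrase(noun(Number, Tree), [bananas]).
% Number = plural,
% Tree = noun_pt(bananas).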