The Mabel information kiosk project is designed to provide conference information to conference attendees. Mabel has several input and output modes available, both visual and audio. The software structure of the information kiosk involves a Bayesian sentence classifier, a keyword marking system, and a program that correlates sentence types and keywords with database entries (see Schmid, Kollar et al).
The objective of this project was to impose a conversational structure on sequences of utterances, allowing the response to a query to depend on the information asked for or given in earlier queries by the same user. This structure applies to the filtered and tagged strings produced by the classifier and keyword marker, or to similar strings produced by other input modalities (including graphic systems which do not rely on natural language). The system can process input within the conference-schedule domain, determine the appropriate response information, and format it into natural language or other formats for output.
The original tagger had several functions. First, it filtered a sentence to catch key words, like the names of speakers, and stored the information. Then it replaced these names with category words. Finally, it used probabilistic comparison against a corpus to determine the sentence's general type (Sweetser).
For instance:
when does henry kyburg speak : raw input
when does [speaker] speak : speaker = henry kyburg : filtered input
<time>: speaker = henry kyburg : tagged input
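As a rough illustration, the original filter-and-replace step can be sketched as follows; the keyword table and function names here are invented for the example and are not the actual tagger code.

    # Illustrative sketch of the filter/replace step; the keyword table is invented.
    KEYWORDS = {
        'henry kyburg': 'speaker',
        'room four': 'loc',
    }

    def filter_sentence(sentence):
        """Replace known key phrases with category words and record their values."""
        slots = {}
        filtered = sentence
        for phrase, category in KEYWORDS.items():
            if phrase in filtered:
                slots[category] = phrase
                filtered = filtered.replace(phrase, '[%s]' % category)
        return filtered, slots

    # filter_sentence('when does henry kyburg speak')
    # -> ('when does [speaker] speak', {'speaker': 'henry kyburg'})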
Several adjustments have been made to this approach.
First, the filterer catches key words, as before, but the key words now remain inside the category brackets in the filtered representation of the sentence.
when does he speak in room four : raw input
when does [ref: he] speak in [loc: room four] : ref = he, loc = room four : filtered input
The input is passed to a reference filter, which tries to match personal referents to people in the recent context, and impersonal referents to events in the recent context. 'There' and 'one' are ignored, since they occur too often as syntactic markers which do not refer to anything (note 'is there an X', for instance). Resolved referents are placed in the correct data category, their category tags are replaced, and the literal words are removed from the category brackets.
when does [speaker] speak in [loc] : speaker = henry kyburg (ref), loc = room four
The marker 'ref' attached to Henry Kyburg prevents this reference from conflicting with its antecedent mention of Kyburg under Consistency Rule 4 below. Replacing [ref] with [speaker] makes the sentence more similar to the correct class in the training corpus, since [ref] occurs in more contexts than [speaker].
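A minimal sketch of the reference filter described above, assuming the recent context is kept as simple lists of recently mentioned people and events; all names here are illustrative.

    # Illustrative sketch of the reference filter; context lists are assumed.
    PERSONAL_REFS = {'he', 'she', 'they'}
    IMPERSONAL_REFS = {'it', 'that'}
    IGNORED_REFS = {'there', 'one'}          # too often purely syntactic

    def resolve_reference(word, recent_people, recent_events):
        """Map a referent word to a person or event from the recent context."""
        if word in IGNORED_REFS:
            return None, None
        if word in PERSONAL_REFS and recent_people:
            return 'speaker', recent_people[-1]   # most recently mentioned person
        if word in IMPERSONAL_REFS and recent_events:
            return 'event', recent_events[-1]
        return None, None

    # resolve_reference('he', ['henry kyburg'], [])
    # -> ('speaker', 'henry kyburg')   (stored with the 'ref' marker)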
The sentence is then passed to the tagger for probabilistic classification. Any statement which does not match some sentence type with probability greater than a threshold is tagged as <unknown>. This allows the program to detect many sentences which are not in the conversational domain and to avoid reacting to them. The threshold was empirically determined as the lowest probability assigned to a correct judgement in the test database. Statements which do not contain any key words are compared to a higher threshold, since the class of statements without key words is far better defined than the class of statements as a whole, and useful statements of this class tend to have higher probabilities. Finally, the <event> tag, which is the most common in the training corpus and is therefore often applied to inane statements, is required to have keywords attached to it: all '<event> : no keywords' tags are changed to '<unknown> : no keywords'. This is unfortunate, since it eliminates statements like 'what' and 'what's going on', but it is required to prevent statements like 'will you marry me' from eliciting a summary of the full database with respect to events.
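These thresholding heuristics might be summarized in code roughly as follows; the numeric thresholds here are placeholders, since the real values were determined empirically from the test database.

    # Sketch of the rejection heuristics; threshold values are placeholders.
    THRESHOLD = 0.15
    NO_KEYWORD_THRESHOLD = 0.40      # stricter when no key words were found

    def assign_tag(best_tag, probability, slots):
        """Apply the rejection heuristics to the classifier's best guess."""
        threshold = THRESHOLD if slots else NO_KEYWORD_THRESHOLD
        if probability < threshold:
            return '<unknown>'
        if best_tag == '<event>' and not slots:
            # <event> is the most common tag; require at least one key word
            return '<unknown>'
        return best_tag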
Conversations in the scheduling domain consist of pairs of utterances: statements by the user, followed by replies from the system. The system keeps little track of its own statements; although the data informing its replies is part of the conversational context, the actual replies it makes are not. This is a design flaw, since it prevents useful analysis of questions like 'tell me about the first one' as replies to a list of statements. The system does record the user's utterances. Users make two kinds of utterances: queries about things going on at the conference, and marker statements which convey no meaning but mark parts of the dialogue structure (the conversational equivalent of words like 'and' or 'there is'). The system's responses must likewise be either information or marker statements. (The system may also ask disambiguating questions; unlike the other two types of utterances, these are not required, and were not implemented.)
The user's goal in the dialogue is to obtain one or more pieces of information; these pieces might be lists (things at two pm) or descriptions of single events. The user does not necessarily know any descriptive features of the information desired. The system's goal is to give the user all the information desired.
Sample Conversation: (From actual system transcript; input via GUI mode.)
User: hi
System: salutations.
what is marie doing
michael brenner, and marie desjardins organized the planning with and for multiagent systems in westin edmonton today in bc room at nine am.
what else is marie doing
michael brenner, and marie desjardins attended the planning with and for multiagent systems in westin edmonton yesterday in bc room at two pm.
what is in shaw (building)
there are automation as caregiver, the role of intelligent technology in elder care, real-time decision support and diagnosis systems and multiagent modeling and simulation of economic systems and several more
thanks
sure thing.
bye
it's been a pleasure. Bye.
A conversation with the user follows a tree structure, where the root node represents the beginning of the conversation, and all the other nodes represent utterances.
(In drawings throughout this document, the previous user queries are shown in oval boxes. In later diagrams, the current query is shown in a rectangle, and its attachment point in the tree is a solid arrow. The proposed attachment point(s) are rectangles explaining the application of the rule, and dashed arrows connect them in sequence to the final insertion.)
(Tree of the conversation above: the system's responses are not stored as part of the tree.)
Conversation proceeds as a depth-first search of the tree, where the child relationship between nodes is modeled as an 'and' relationship in the information desired. Such a situation could be modeled with a context-free dialogue grammar, but that representation is of little practical use here. The grammar is highly ambiguous, and since the system must respond at every step, it cannot 'look ahead' to decide what to do. Although a probabilistic parser could resolve these difficulties, it would require a large, complex dialogue corpus, which would be difficult to generate. This suggests a rule-based structure which is looser than a parser, since it can check for specific features of a query's ancestors rather than requiring a grammatical rule for every combination of features.
The algorithm used keeps track of the path from the root to the most recent query, as well as some other data discussed below. The path is stored as a stack; the whole tree, while it remains in memory for debugging purposes, is not used for decision making. This path is sufficient to operate the following rules, which are heuristics to determine which previous queries a given utterance is intended to extend. All these rules are based on the idea that a query can extend any of its ancestors. In fact, this is not necessarily the case since most humans cannot remember back more than a few conversational layers, but it is still a useful assumption.
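A minimal sketch of this path data structure, assuming each query is stored with its tag and keyword slots; the class and function names are illustrative, not the actual code.

    # Sketch of the path kept as a stack of queries (names are illustrative).
    class Query:
        def __init__(self, tag, slots):
            self.tag = tag          # e.g. '<time>'
            self.slots = slots      # e.g. {'speaker': 'henry kyburg'}

    path = []                       # root ... most recent query (root itself omitted)

    def attach(query, depth):
        """Make the new query a child of path[depth-1], discarding deeper context."""
        del path[depth:]
        path.append(query)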
1. The current query must be as low in the tree as possible; this simple disambiguation strategy is easy to implement, and when offered a choice between finding general or specific information, always assumes the user wants specific information. Denecke's system asks disambiguation questions or uses probabilistic disambiguation to solve this problem, but his questioning protocol seems to require a keyboard, which is best avoided on a robot, and it seems very difficult to 'weigh' probabilities mentally or construct a training corpus.
2. The current query must be placed into the tree so that the information specified by the path from the root to it is non-empty, if possible. If this information is empty, the query is inconsistent (with respect to the database) with its ancestors: the 'and' of the query and its ancestors specifies data that does not exist. Though this is occasionally the user's aim, these cases cannot be distinguished using tagged representations, so the system always gives the user data if possible. Kellner similarly reasons that since the user intends to get information, s/he will rarely be inconsistent on purpose. The 'database inconsistency' of specifying no objects must be distinguished from 'logical inconsistency': database inconsistency occurs when the specific objects the user mentions do not occur together; logical inconsistency occurs when the user specifies multiple objects of the same general type.
3. If the current query contains an object of type X, and the current path contains a query N containing an object of type Y such that X determines some Y (a room, for instance, determines a building), then N may be an ancestor of the current query only if the Y determined by X and the Y mentioned in N are identical. This rule enforces logical consistency, and is a weaker form of Denecke's 'type inference', which requires a separate database of object types and their general attributes.
4. If the current query contains an object of type X, and the current path contains a query N containing an object of type X, then N should not be an ancestor of the current query. As Levinson states, speakers do not mention anything without some purpose (see Grice's maxims of conversation), and since the user knows the current path contains an X, the purpose of mentioning another X must be to specify another path, which also contains an X.
5. If the current query contains an object of type X, and the current path contains N which mentions an object of type Y, such that Y determines some X, then N should not be an ancestor of the current query. This is an extended version of the previous rule.
Rule 2 is somewhat difficult to implement, since it potentially requires a database search at every level between the bottom of the path and the root. Since this process is somewhat time-consuming, Rules 3, 4 and 5 operate first to find points of guaranteed inconsistency in the path, so that Rule 2 may begin its search closer to the root. Rules 3 and 4 also find inconsistencies between queries which do specify data; Rule 5 does not, and could be removed without affecting the results. Rule 1 is implemented as a default: if none of the other rules operate, the node is attached to the bottom of the current path. (A sketch of how the rules cooperate follows.)
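This cooperation might be sketched roughly as follows, assuming a helper conflicts() that implements Rules 3 to 5 and a helper specifies_data() that performs the database lookup for Rule 2; both helpers and all names are assumptions for illustration.

    # Sketch of rule application (helper functions are assumed, not actual code).
    def choose_attachment(path, query, conflicts, specifies_data):
        """Return the depth at which the new query should be attached."""
        # Rules 3-5: a conflicting node cannot be an ancestor, so the query
        # must attach above the shallowest conflict in the path.
        limit = len(path)
        for i, node in enumerate(path):
            if conflicts(node, query):
                limit = i
                break
        # Rules 1 and 2: prefer the deepest allowed point whose combined
        # constraints still match something in the database.
        for depth in range(limit, -1, -1):
            if specifies_data(path[:depth] + [query]):
                return depth
        return limit                # Rule 1 default: as low as the other rules allow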
The previous rules are designed to ensure that the tree is consistent. The user can also use words which explicitly affect the tree structure; the following rules determine how these are handled. Some of these rules use 'filter' queries, which do not correspond to any single utterance by themselves. Instead, a real query and its filter act as a single unit which corresponds to one utterance. Operations like 'aunt' and 'move upward one' also treat the real query and the filter as a unit.
1. An utterance which is <greet> or <goodbye> is a child of the root, and no utterance is a child of <greet> or <goodbye>: users begin and end conversations with <greet> and <goodbye> but do not intersperse them throughout at random (Levinson).
2. <thank> is a child of the current node; it does not change the subject of conversation.
3. A query containing a synonym of 'else' is first located in the tree as if it were a normal query. Then the path to that location is negated (or, if the path is empty, the last query is negated, even though it is not part of the context), and the query is attached to this 'filter'. The filter is attached as an aunt of the current location, and then moved up by rule 4 until data is found or the root is reached. (A sketch of this filter follows the list.)
(The user's query is <event>: ordinal = else, no other data, which corresponds to natural language input like 'What else?')
4. A request for a suggestion is first treated as a normal query; a location is found for it and data is requested. If the data exists, a random tuple is selected. This tuple is described with reference to a combination of features assumed to be unique (currently date, time, and location, assumed invariant). The suggestion query is transformed into a 'filter', this description is attached to it, and the pair is inserted into the tree.
(The user's query is '<suggest>: time = two pm', which corresponds to natural language input like 'Recommend something at two pm.' The system makes a random selection of an event in the current context (i.e. with John Smith at two pm), then specifies it by time, date, and room.)
5. Any query containing a synonym of 'all' is a child of the root.
6. All queries which are made children of the root, and which do not include synonyms of 'all', are first attached as children of a list of default queries. Currently, there is only one default query, which finds information on 'date = today'. If this query is inconsistent, the current query is attached directly to the root.
7. The <endTime> query type, assigned to questions like 'When is the talk over', retrieves 'end time' information from the database rather than 'start time' information.
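The negation filter from rule 3 might be sketched roughly as follows; the Filter class and the slot handling are illustrative assumptions, reusing the Query sketch from above.

    # Sketch of the 'else' filter from rule 3 (names and structure are assumed).
    class Filter:
        """A synthetic query; it and the real query act as one unit in the tree."""
        def __init__(self, negated_slots):
            self.negated = negated_slots    # data the results must NOT match

    def make_else_filter(path, last_query):
        """Negate the constraints on the path (or the last query, if the path is empty)."""
        source = path if path else [last_query]
        negated = {}
        for node in source:
            negated.update(node.slots)
        return Filter(negated)
    # The filter is attached as an aunt of the current location and moved upward
    # until data is found or the root is reached.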
The first task in formatting output is to decide which data is important to the user and which is not. Each query type is considered to request some specific kinds of data, which are more important; a <time> request wants a time. If the query already includes some data ('What is going on at five' includes a time), then this data is less important, since the user already has it. Other data falls between these two categories in a default order, producing a list of desirable types of data, ordered by importance.
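A sketch of this ordering step, assuming a fixed default order and a small table mapping query types to the data they request; both tables are invented for the example.

    # Sketch of the importance ordering (tables are invented for illustration).
    DEFAULT_ORDER = ['event', 'time', 'loc', 'date', 'building', 'speaker']
    REQUESTED = {'<time>': ['time'], '<loc>': ['loc', 'building']}

    def importance_order(query_tag, given_slots):
        """Requested data first, defaults in the middle, already-known data last."""
        requested = [c for c in REQUESTED.get(query_tag, []) if c not in given_slots]
        given = [c for c in DEFAULT_ORDER if c in given_slots]
        middle = [c for c in DEFAULT_ORDER
                  if c not in requested and c not in given]
        return requested + middle + given

    # importance_order('<time>', {'speaker': 'henry kyburg'})
    # -> ['time', 'event', 'loc', 'date', 'building', 'speaker']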
The system supports multiple output modes. Currently, there are two modes available, GUI and natural language. See Ward (unavailable) for details of the GUI output mode.
Various heuristics may be used to assign data to the appropriate devices; these are mostly motivated by Cohen and Oviatt's discussion of natural language output versus graphic output. Generally, spatial or temporal information is displayed graphically by default, while other information is spoken. If there is very little output, it is sent to both modes. To format data for natural language output, the system starts with the least important data category. It groups all the data items that are identical with respect to this category, heading them with strings of natural language describing the category. This process is repeated on the groups using the next data category.
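A rough sketch of the mode-assignment heuristic; the category set and the 'very little output' cutoff are assumptions made for the example.

    # Sketch of output routing; categories and the cutoff are assumed.
    SPATIAL_OR_TEMPORAL = {'loc', 'building', 'time', 'date'}

    def route_output(items):
        """items: list of (category, value) pairs; returns (gui_items, spoken_items)."""
        gui = [item for item in items if item[0] in SPATIAL_OR_TEMPORAL]
        spoken = [item for item in items if item[0] not in SPATIAL_OR_TEMPORAL]
        if len(items) <= 2:          # very little output: send it to both modes
            return items, items
        return gui, spoken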
(Gui Input Window--Preferred input for natural language is via speech recognition, which is not yet implemented.)
(Gui Spreadsheet output mode: The GUI also supports map output mode [not shown]. Images courtesy of Ward.)
The natural language formatting algorithm implicitly supposes that the last position in the sentence is most emphatic; the order of traversal in the list of desirable output types could be reversed if the first position were more emphatic.
Speakers, events and verbs are grouped together based on the position of the 'speaker' datatype in the original category list, since the program relies on the syntax 'speaker verb event' as the core of all output sentences. The verb is based on a database entry describing the type of event; all speeches, for instance, use the verb 'gave' or 'will give'. (The event time is used to determine the verb tense.) If there is no speaker, the program mimics an active construction using an expletive subject ('there took place / there will be'). All verbs used have the same form for singular and plural subjects (or in the expletive construction, objects) so the program does not need to guess the pluralities.
The program starts with the least important type, converts it to a phrase, and stores it. It splits the data into groups, all of which have the same entry of that type. The process is repeated on each group, using the next least important type, and then on the subgroups using the next type, etc. When this process is complete, the phrases are merged using the format 'x, y and z', or for only two items 'x and y', or for only one, 'x'.
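This group-and-merge process might be sketched as follows, assuming a helper describe(category, value) that turns a single value into a phrase such as 'in westin'; the worked example below traces the same process by hand.

    # Sketch of recursive grouping; describe() is an assumed phrase generator.
    def merge(phrases):
        """Join phrases as 'x', 'x and y', or 'x, y and z'."""
        if not phrases:
            return ''
        if len(phrases) == 1:
            return phrases[0]
        return ', '.join(phrases[:-1]) + ' and ' + phrases[-1]

    def format_group(rows, categories, describe):
        """categories is ordered least important first; rows are dicts of values."""
        if not rows or not categories:
            return ''
        category, rest = categories[0], categories[1:]
        groups = {}
        for row in rows:
            groups.setdefault(row[category], []).append(row)
        phrases = [(describe(category, value) + ' '
                    + format_group(members, rest, describe)).strip()
                   for value, members in groups.items()]
        return merge(phrases)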
For instance:
Importance of data types: event, time, loc, date, building, speaker
Revised order with event-verb placed next to speaker: time, loc, date, building, event-verb, speaker

Data:
  building: westin edmonton | room: bc room | speaker: michael brenner, marie desjardins | date: 28 jul | time: 1400 | event: planning with and for multiagent systems | type: articled active
  building: westin edmonton | room: bc room | speaker: michael brenner, marie desjardins | date: 29 jul | time: 0900 | event: planning with and for multiagent systems | type: articled workshop

speaker phrase: michael brenner and marie desjardins
verb-event phrases:
  michael brenner ... attended the planning with and for multiagent systems
  michael brenner ... organized the planning with and for multiagent systems
building phrases:
  michael brenner ... attended the planning with and for multiagent systems in westin
  michael brenner ... organized the planning with and for multiagent systems in westin
(date, location and time phrases added) ...
(phrases merged with the 'and' schema) ...
Final output:
michael brenner, and marie desjardins attended the planning with and for multiagent systems in westin edmonton on last friday in bc room at two pm, and organized the planning with and for multiagent systems in westin edmonton yesterday in bc room at nine a.m.
The database used as a knowledge base is the same as the one used by the tagger-only system, with two exceptions: a field has been added for event type (it also describes whether the event requires an article such as 'the'), and the summary field which held canned output has been removed. The type field can actually be generated automatically with reasonable accuracy by scanning the canned output string. In other words, the program is capable of more complex interaction while using less data than the previous system.
The strength of this system in a wider context is its adaptability and lack of requirements or assumptions. It operates on a fairly simple input format, which is easily adapted to multiple input modes and flexible multi-mode output. It requires only a database and a few small lists of facts or semantic rules integrated into the code. There is no additional knowledge base, nor is there a training corpus (the tagger does require a training corpus). It runs very quickly, and does not require any parsing, theorem-proving or planning modules.
Unfortunately, the stripped-down nature of the system leaves it unable to be extended much beyond its present capabilities.
Unlike Kellner's PADIS, this system operates in a wider domain, and without the sheltering assumption that the user always has a single goal; the two are therefore difficult to compare. Both Denecke's system and Sadek's ARTIMIS require extensive databases of knowledge about the world. Sadek's is designed around a theorem-proving approach, which is very effective at detecting logical inconsistencies. Both systems are therefore capable of resolving ambiguities in conversation more effectively than this system, and they are less dependent on database consistency.
The TRIPS system (Allen) is much more powerful than this system; it is not only an informer but also a planner and assistant. However, TRIPS is harder to extend and modify.
The lowest-level problems have to do with the inadequacy of the tagger: words like 'not' cannot be interpreted using a filter-tag system, since they apply to constituents of the sentence rather than to the sentence as a whole (to some extent, this is also true of words like 'other'). This could be fixed by using a parser instead of a tagger.
Slightly higher-level problems result from the difficulty of detecting statements which are not in the scheduling domain. As described above, questions like 'what's going on' are mistakenly eliminated by the filtering heuristics. A much wider knowledge base would be needed to make better judgments; the whole rule-based structure would probably be inadequate for the task. The system is also incapable of dealing with ambiguity: it never asks disambiguating questions, nor does it disambiguate probabilistically. Generally, if the user repeats or rephrases a question often enough, they will find the information they want. A probabilistic disambiguation module would require a dialogue corpus (which seems very hard to construct) or some kind of machine learning. Disambiguation questions might be more feasible.
The system has no knowledge of the linguistic content of its utterances, as outlined above. The price of an arbitrary group of independent output modes is extreme difficulty in tracking where any particular fact is going to go: whether it finds a graphic device, is wrapped into a speakable sentence, or is discarded as irrelevant. A more integrated system like TRIPS (Allen) has all these capabilities, but is probably 'locked in' to a few well-defined output systems. Finally, the system has no knowledge outside the database. This is what leads to the difference between database inconsistency and logical inconsistency. For instance, 'Tuesday' is logically consistent with '3 pm', but unless something is actually going on at 3 pm on Tuesday, it is not database consistent. Since the system's definition of logical consistency is based only on a small list of 'determines' relationships, database consistency must be used as a substitute in these cases, leading the system to answer that 'Tuesday' may be inconsistent with '3 pm'. A knowledge base and a theorem prover as in ARTIMIS (Sadek) might fix this deficiency, but it would be difficult to integrate.
Apart from the problem fixes detailed above, there are several possible extensions of the project. This software does not require either a robotic body or a scheduling domain; it might also act as a natural-language front end for any single database with a relatively restricted set of queries.
Additional query types like 'before' and 'after' might be added to the tagger and handled in the program. This would require a few more semantic rules. Another rule could handle indefinite times like 'now', 'this evening', etc.
Information on the user might be used to find more useful output directed toward the user's interests or needs. This could affect the ordering of output strings, or act as a default in some database searches.
Better heuristics might be used to determine what information from the results of the database searches should be output, and to what device. Currently, there are two devices, the GUI and the speakers, and a fairly limited heuristic determines which output goes to each.
The Bayesian tagger and filterer use Perl 5.0. The rest of the program is written in Python 2.2. The GUI system, which should have a separate readme, is also written in Python 2.2 with Tcl and Tk extensions. The database system uses the sqlite database and the pysqlite interface. The text-to-speech system is loosely integrated, and can be any system with a 'speak sentence' command. Speech recognition is not yet integrated; the documentation may be updated in the next few weeks when integration is complete. All software should be Unix and Windows compatible; Python extensions should all be buildable with gcc/cygwin, if not by some other method. So far, the program has been tested on Redhat Linux 7.3, on a Dell Inspiron 4000 using typed input, and on Windows XP, on a desktop using GUI input; under these conditions, it runs in real time.
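For illustration, a database lookup for the keyword slots might look roughly like this; it uses Python's standard sqlite3 module rather than the older pysqlite interface, and the table and column names are assumptions, not the actual schema.

    # Illustrative lookup against an assumed 'events' table (sqlite3, not pysqlite).
    import sqlite3

    def find_events(db_path, slots):
        """Return all events matching every keyword slot; column names come from
        the fixed internal category list, and values are passed as parameters."""
        conn = sqlite3.connect(db_path)
        columns = list(slots)
        sql = 'select * from events'
        if columns:
            sql += ' where ' + ' and '.join('%s = ?' % c for c in columns)
        rows = conn.execute(sql, [slots[c] for c in columns]).fetchall()
        conn.close()
        return rows

    # find_events('conference.db', {'speaker': 'marie desjardins', 'date': 'today'})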
Sources:
Tagger/filterer:
Tori Sweetser, or see docs in nlu/docs.
GUI:
Andrew Ward, or see readme.
Mabel architecture:
Schmid, Kollar, Meisner, Sweetser, Feil-Seifer, Brown: Mabel, Building a Robot Designed for Human Interaction
Dialogue Structure:
Levinson: Pragmatics.
Cambridge University Press, New York.
Other dialogue systems:
Denecke: An Information-based Approach for Guiding Multi-Modal Human-Computer Interaction.
Proceedings of IJCAI-97, 1997.
Sadek, Bretier, Panaget: ARTIMIS: Natural Dialogue Meets Rational Agency.
Proceedings of IJCAI-97, 1997.
Allen, Ferguson, Stent. An Architecture for More Realistic Conversational Systems.
IUI'01, January 2001.
Kellner, Rueber, Seide, Tran:
PADIS -- an automatic telephone switchboard and directory information system.
Speech Communication, 23:95-111, Oct 1997.
Output formatting:
Cohen, Oviatt: The Role of Voice Input for Human-Machine Communications
George Ferguson