Participants in the group on forward- and backward-looking functions will complete a homework assignment to prepare for the meeting at the 3rd Discourse Resource Initiative (DRI) workshop at Chiba Univ./Kazusa Academic Park, Japan, in May 1998.
DAMSL (Dialog Act Markup in Several Layers) is an annotation scheme for forward- and backward-looking communicative functions being developed by the DRI. The homework involves annotating a few dialogs with DAMSL. To perform this annotation, you will need the DAMSL annotation manual, the dialog annotation tool (dat), the dialogs to be annotated, and descriptions of the domains of those dialogs. The dialogs to be annotated are TRIPS dialog 971202.1238, Verbmobil dialog r148c, and Map Task dialog d204. Annotated versions of the TRIPS and Verbmobil dialogs are DUE APRIL 27; the Map Task dialog is DUE MAY 1. Please try to finish close to those dates. Dialogs sent later will still be useful but may not be included in the data taken to Chiba Univ./Kazusa Academic Park. Please email each annotated dat file separately to mcore@cs.rochester.edu when it is ready.
A Note on Segmentation: the utterance units are prosodically motivated and may not comprise full sentences. For example, you may see a segmentation such as:
utt1 take the people
utt2 to Delta

It might be hard to label "to Delta" by itself. In that case, you may consider utt1 and utt2 ("take the people to Delta") as one unit. To label "take the people to Delta" as, say, an Action Directive, tag both utt1 and utt2 as Action Directives. It would also be helpful to include a comment saying that you think utt1 and utt2 should be joined.
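For concreteness, here is a minimal sketch of that convention in Python. The dict-based representation of utterance units is a hypothetical stand-in (dat's actual file format is not described here); the point is only that every fragment gets the same tag plus a comment suggesting the join:

    # Hypothetical representation of two prosodic utterance units.
    units = [
        {"id": "utt1", "text": "take the people"},
        {"id": "utt2", "text": "to Delta"},
    ]

    def tag_as_joined(fragments, label):
        """Give every fragment the same tag and note that they belong together."""
        ids = " and ".join(f["id"] for f in fragments)
        for f in fragments:
            f["tag"] = label
            f["comment"] = ids + " should be joined"
        return fragments

    for f in tag_as_joined(units, "Action Directive"):
        print(f["id"], "->", f["tag"], "|", f["comment"])
    # utt1 -> Action Directive | utt1 and utt2 should be joined
    # utt2 -> Action Directive | utt1 and utt2 should be joined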
If the ftp locations above are slow, try selecting another ftp site from the list of Perl mirror sites and downloading the ROADMAP file, which should help you find what you are looking for. See the Perl Home Page for more details.
Note that at the end of the dialog there is an interesting exchange:

utt32 A: see you then
utt33 B: roger over and out
utt34 A: thought it was roger wilcom
utt35 B: oh no it is what we always say when we are talking on screen

utt34 and utt35 are meta-comments about the task: how do we end the dialog/say goodbye? As for what "when we are talking on screen" means, B must be referring to being filmed; it seems unrelated to this face-to-face conversation.
JPEG and HTML versions of the maps were provided by David Traum.
A Note on Overlapping Speech: the convention in the DAMSL manual is used here. Words of overlapping speech are marked with numbered square brackets, with the number in parentheses next to the right bracket. You can match up overlapping sections by finding all the bracketed text with the same index. Consider the two utterance units below; here, ``assuming'' and ``okay'' are marked as overlapping since both are bracketed with number 1 brackets.
T1 utt1: s: uh would take two hours [ assuming ](1) you have
            an engine at Bath
T2 utt2: u: [ okay ](1)
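The matching step can also be done mechanically. The short Python sketch below (not part of dat; the utterance strings are just the example above) collects every "[ words ](n)" span and groups the spans by their index n:

    import re
    from collections import defaultdict

    utterances = {
        "utt1": "s: uh would take two hours [ assuming ](1) you have an engine at Bath",
        "utt2": "u: [ okay ](1)",
    }

    # Matches "[ words ](n)": bracketed words followed by a parenthesized index.
    OVERLAP = re.compile(r"\[\s*(.+?)\s*\]\((\d+)\)")

    overlaps = defaultdict(list)
    for utt_id, text in utterances.items():
        for words, index in OVERLAP.findall(text):
            overlaps[int(index)].append((utt_id, words))

    for index, spans in sorted(overlaps.items()):
        print("overlap", index, "->", spans)
    # overlap 1 -> [('utt1', 'assuming'), ('utt2', 'okay')]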
The user first settles on a plan to get the people at Calypso to Delta and the people at Exodus to Delta. In utterance 29, truck one refers to the truck that has not moved yet; this truck picks up the people in Barnacle and Abyss. Truck two is the one getting the people in Calypso. In utterance 39, the user alerts the system that the bridge to Delta is out and cannot be used in the plan. The system responds by highlighting the paths using the bridge. Part of the map at that point is shown below: both trucks are using the bridge, as shown by the light brown and pink paths, and the red highlight flashes to alert the user. The user gets around the problem of the bridge and selects "simulate plan" from one of the menus (you should treat this as if the user had said "simulate the plan").
Three kinds of errors appear in the output:

confusions:  CLEAR [CLEAR should be: WHERE ARE] THE PEOPLE
omissions:   USE A TRUCK [missed: TO] GET THE PEOPLE ...
commissions: IT [IT mistakenly inserted] FORGET IT

Label based on the speech recognition output, not on what was actually said. For the most part, the errors do not change the character of the utterances. However, if you would label the speech recognition output differently from the actual utterance, go by the speech recognition output. You are allowed to use "partial parsing"; i.e., if the speech recognizer output is "TAKE THE DFAFDA TO DFAF", you could label it an Action Directive at the Task level. (The sketch after the error table below makes the three error categories concrete.)
A summary of the speech recognition errors is provided below. Note that in utterance 54 I am not sure what the user said after the word "people". It is more of a noise than a word and is certainly not the word "left".
        HYPOTHESIS                          ACTUAL UTTERANCE
utt7    clear the people                    where are the people
--------------------------------------------------------------------------
utt13   use a truck get the people          use a truck to get the people
        from calypso to delta               from calypso to delta
--------------------------------------------------------------------------
utt15   it how_long will a take             how long will that take
--------------------------------------------------------------------------
utt19   what if we went along coast         what if we went along the coast
        instead                             instead
--------------------------------------------------------------------------
utt23   i forget it                         forget it
--------------------------------------------------------------------------
utt25   use the other truck get the         use the other truck to get the
        people at exodus to delta           people at exodus to delta
--------------------------------------------------------------------------
utt42   send_a truck two along the          send truck two along the
        coast instead                       coast instead
--------------------------------------------------------------------------
utt48   unload a people                     unload the people
--------------------------------------------------------------------------
utt52   use the helicopter get the          use the helicopter to get the
        people from south_delta             people from south_delta
        to delta                            to delta
--------------------------------------------------------------------------
utt54   where are the people left           where are the people ???
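To make the three error categories concrete, here is a rough Python sketch that derives them from a word-level alignment. This is only an illustration, not part of the homework tools, and difflib's alignment is approximate rather than the alignment a dedicated recognizer-scoring tool would produce:

    import difflib

    def classify_errors(hypothesis, actual):
        """Label word-level differences as confusions, omissions, or commissions."""
        hyp, act = hypothesis.split(), actual.split()
        matcher = difflib.SequenceMatcher(a=act, b=hyp)
        errors = []
        for op, a1, a2, b1, b2 in matcher.get_opcodes():
            if op == "replace":   # confusion: recognizer substituted words
                errors.append(("confusion", act[a1:a2], hyp[b1:b2]))
            elif op == "delete":  # omission: words in actual missing from hypothesis
                errors.append(("omission", act[a1:a2], []))
            elif op == "insert":  # commission: words the recognizer inserted
                errors.append(("commission", [], hyp[b1:b2]))
        return errors

    # utt13 from the table above:
    for err in classify_errors("use a truck get the people from calypso to delta",
                               "use a truck to get the people from calypso to delta"):
        print(err)
    # ('omission', ['to'], [])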
Send comments or questions to mcore@cs.rochester.edu.