- Eta Dialogue Manager: Large language models (LLMs) have recently been demonstrated to be competent conversational agents in various domains, but they remain prone to hallucination and are often costly in complex simulations. What if an agent could "think fast and slow", using a flexible, user-configured combination of LLMs and symbolic pattern transduction? This has been the focus of my work on the Eta Dialogue Manager (originally a pun on "Etaoin shrdlu" and SHRDLU, Terry Winograd's famous Blocks World agent): a dialogue manager that uses "transducers" to interpret the user, reason, update its conversation plan, and generate responses, all based on a core "event schema" representation encoding knowledge about expected dialogue flows. Several of the subsequent projects described below were developed using this dialogue framework.
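
  To illustrate the "fast and slow" idea concretely, here is a minimal Python sketch (the class and names are hypothetical, not Eta's actual API): a transducer first tries cheap symbolic pattern matching, and falls back to an LLM only when no rule fires.

  ```python
  # A minimal sketch of a hybrid "fast and slow" transducer. All names here
  # are hypothetical illustrations, not Eta's actual interface.

  import re
  from typing import Callable, Optional

  class HybridTransducer:
      """Maps an input utterance to an output, trying rules before an LLM."""

      def __init__(self, rules: list[tuple[str, str]],
                   llm: Optional[Callable[[str], str]] = None):
          self.rules = [(re.compile(pat, re.I), out) for pat, out in rules]
          self.llm = llm  # slow path: any callable from str to str

      def __call__(self, utterance: str) -> str:
          for pattern, output in self.rules:  # fast path: symbolic rules
              if pattern.search(utterance):
                  return output
          if self.llm is not None:            # slow path: LLM fallback
              return self.llm(utterance)
          return "(no interpretation)"

  # Hypothetical usage: interpreting a user turn within one schema step.
  interpret = HybridTransducer(
      rules=[(r"\b(hello|hi)\b", "(greet user)"),
             (r"\bbye\b", "(end conversation)")],
      llm=lambda u: f"(paraphrase-with-llm {u!r})")

  print(interpret("Hi there!"))  # fast path fires -> "(greet user)"
  ```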
- SOPHIE agent (end-of-life communication): A significant challenge that healthcare workers face is delivering difficult news to patients in a manner that is explicit yet empathetic, and that empowers the patient. Continually practicing these skills through simulated medical scenarios is critical for optimal patient outcomes, yet this practice is made difficult by the limited supply and relative lack of demographic diversity among human standardized patient actors. The SOPHIE project is an interdisciplinary effort (in collaboration with the ROC HCI lab and the URMC Center for Communication and Disparities Research) to create a multimodal virtual standardized patient using AI, with the goal of helping doctors improve their communication in end-of-life dialogue scenarios through automated feedback. I helped develop the dialogue capabilities of SOPHIE, resulting in a successful pilot study of the system with medical students from the URMC.
- Blocks World QA/tutoring agent: The Blocks World domain (several named/colored blocks on a flat 3D surface) is a classic testbed for spatial reasoning and planning capabilities. Despite its relative simplicity, tackling this domain in its full complexity requires an agent to exhibit a rich set of functional capabilities, ranging from vision to natural language understanding. I created a conversational agent capable of spatial/temporal question answering and collaborative structure-building in a physically situated blocks world, given a "spatial reasoning specialist" module that calculates spatial relations in terms of low-level primitives, as sketched below. This work involved integrating several capabilities: a semantic parser that maps user inputs to structured logical forms (ULFs), a spatial reasoner and planner, and an episodic memory for answering questions about the history of the interaction. It was supported by the DARPA Communicating with Computers grant and developed in collaboration with Georgiy Platonov, who created the vision and spatial reasoning/planning modules.
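
  As a toy illustration of how higher-level spatial relations can be defined in terms of low-level primitives, here is a Python sketch; the geometry is invented for illustration and is much simpler than the actual specialist module.

  ```python
  # Toy illustration: a higher-level "on" relation built from two
  # low-level primitives over block coordinates. Simplified invented
  # geometry, not the project's actual spatial reasoning module.

  from dataclasses import dataclass

  BLOCK_SIZE = 1.0  # assume unit cubes for simplicity

  @dataclass
  class Block:
      name: str
      x: float  # left-right
      y: float  # front-back
      z: float  # vertical (0 = table surface)

  def touching_vertically(a: Block, b: Block, eps: float = 0.05) -> bool:
      """Low-level primitive: a's bottom face meets b's top face."""
      return abs((a.z - BLOCK_SIZE / 2) - (b.z + BLOCK_SIZE / 2)) < eps

  def horizontally_aligned(a: Block, b: Block, tol: float = 0.5) -> bool:
      """Low-level primitive: centroids roughly coincide in the x-y plane."""
      return abs(a.x - b.x) < tol and abs(a.y - b.y) < tol

  def on(a: Block, b: Block) -> bool:
      """Higher-level relation defined in terms of the primitives above."""
      return touching_vertically(a, b) and horizontally_aligned(a, b)

  red = Block("red", x=0.0, y=0.0, z=1.5)
  blue = Block("blue", x=0.1, y=0.0, z=0.5)
  print(on(red, blue))  # True: the red block sits on the blue block
  ```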
- MegaIntensionality: Certain English sentences give rise to inferences about the belief, desire, and intention states of the agents in those sentences. These inferences matter not only to linguists (they are putatively related to patterns in the syntactic distributions of verbs, and provide evidence for or against certain generalizations), but also to dialogue systems, since the (inferred) beliefs and desires of the user can be used to steer a conversation. The focus of the MegaIntensionality project (a collaboration with Will Gantt and Aaron Steven White) is to model and analyze these inferences given lexicon-scale human annotations. We collected a dataset of such annotations across a variety of verbs and syntactic contexts, and analyzed the data using Bayesian mixed-effects mixture models. We also demonstrated that mixed-effects models can be integrated with standard supervised natural language inference (NLI) models to capture differences between individual annotators at training time.
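
  As a schematic of the mixed-effects integration (a simplified sketch, not the exact model from the project): the NLI model supplies a shared "fixed effect" score for each item, and each annotator receives a random intercept, shrunk toward zero, that absorbs that annotator's systematic bias during training.

  ```python
  # Simplified sketch of an NLI model with annotator random effects:
  # predicted log-odds = shared item score + per-annotator intercept.
  # Hypothetical names; not the exact architecture from the project.

  import torch
  import torch.nn as nn

  class AnnotatorNLI(nn.Module):
      def __init__(self, encoder: nn.Module, n_annotators: int):
          super().__init__()
          self.encoder = encoder  # maps input features to a scalar logit
          # random-effect intercepts, one per annotator
          self.annotator_bias = nn.Embedding(n_annotators, 1)
          nn.init.zeros_(self.annotator_bias.weight)

      def forward(self, x: torch.Tensor, annotator_ids: torch.Tensor):
          fixed = self.encoder(x).squeeze(-1)             # shared fixed effect
          random = self.annotator_bias(annotator_ids).squeeze(-1)
          return fixed + random                           # per-annotator logit

  # Hypothetical usage with a stand-in linear encoder over features:
  model = AnnotatorNLI(encoder=nn.Linear(16, 1), n_annotators=100)
  x = torch.randn(8, 16)                 # batch of 8 item feature vectors
  ann = torch.randint(0, 100, (8,))      # which annotator labeled each item
  y = torch.randint(0, 2, (8,)).float()  # binary inference judgments
  loss = nn.functional.binary_cross_entropy_with_logits(model(x, ann), y)
  # An L2 penalty on the intercepts plays the role of the random-effect
  # prior, shrinking each annotator's bias toward zero.
  loss = loss + 1e-2 * model.annotator_bias.weight.pow(2).sum()
  loss.backward()
  ```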