Events

December 20, 2018, 12:00 PM
Georgiy Platonov: Structures and Spatial Relations in the Blocks World

[Thursday, December 20, 2018 at 12:00 PM in Wegmans Hall 2506]

Building a general intelligence requires integrating multiple diverse capabilities in one tightly connected system. The blocks world is a classic toy domain that has long been used to build such intelligent systems. Despite its relative simplicity, tackling this domain in its full complexity requires the agent to exhibit a rich set of skills, ranging from vision to natural language understanding. There is currently a resurgence of interest in solving problems in such limited domains using modern techniques. Some of these studies focus on specific aspects of the domain, while others tackle the blocks world in a more holistic way. In our work, we address several crucial problems related to building a blocks world agent. First, our agent should be capable of representing and reasoning about spatial relations and composite constructions. Second, the agent should be able to learn new concepts on the fly in a dialog setting, from natural language descriptions, from very few examples, or both. To tackle this second aspect, we consider the mechanism of schemas, an old approach to explaining human understanding and learning in terms of naturally arising stable patterns of objects and events.
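As a rough illustration of the kind of spatial reasoning involved (not the speaker's actual formalism), a relation such as "on" between two blocks can be computed directly from block geometry. The Block class, coordinates, and tolerance below are hypothetical assumptions for the sketch:

    from dataclasses import dataclass

    @dataclass
    class Block:
        name: str
        x: float  # center coordinates
        y: float
        z: float
        size: float = 1.0  # edge length of a cubic block

    def on(a: Block, b: Block, tol: float = 0.1) -> bool:
        """True if block a rests directly on top of block b."""
        vertically_adjacent = abs((a.z - a.size / 2) - (b.z + b.size / 2)) < tol
        horizontally_aligned = abs(a.x - b.x) < b.size / 2 and abs(a.y - b.y) < b.size / 2
        return vertically_adjacent and horizontally_aligned

    # A two-block stack: B sits on the table, A sits (slightly off-center) on B.
    b = Block("B", x=0.0, y=0.0, z=0.5)
    a = Block("A", x=0.05, y=0.0, z=1.5)
    print(on(a, b))  # True

Composite constructions (rows, stacks, staircases) can then be characterized as stable patterns over such pairwise relations, which is the intuition behind the schema mechanism mentioned above.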

Advisor: Prof. Lenhart Schubert (Computer Science)

Committee: Prof. James Allen (Computer Science), Prof. Daniel Gildea (Computer Science), Prof. Aaron White (Linguistics)


December 21, 2018, 09:30 AM
Yang Feng: Localizing Content in Videos via Textual and Visual Queries

[Friday, December 21, 2018 at 9:30 AM in Wegmans Hall 2506]

The applications of video in our lives have increased dramatically in the past few years. On one hand, surveillance cameras that continuously record video are widely deployed all over the world. On the other hand, it is more popular than ever for ordinary users to share videos on social media. With more and more videos generated every day, exploring them becomes increasingly challenging. Although it is possible to store surveillance videos over a long period, the information in those stored videos cannot be fully exploited. Surveillance videos are usually viewed manually: a human viewer first determines when to watch and then finds what happened at that time. It is very time-consuming for a human viewer to search for something or somebody over a long time range. Similar issues arise when a user wants to find content of interest in a very long video. To explore this huge amount of video, it is necessary to build tools that help users find the content they want efficiently.

This thesis will develop methods for efficiently localizing certain content in videos. For surveillance videos, we are particularly interested in detecting the appearance of a specific person, which is needed in many security applications. For user-generated or entertainment videos, we design localization methods for different types of queries, which can be either video samples or natural-language sentences. Using a natural sentence as a query is a straightforward way to indicate what a user wants to find; however, some video content is difficult to describe clearly in a few sentences. When a video sample is available, it can serve as the query instead and convey the intended meaning more accurately.

In this proposal, we present our preliminary work. Existing methods for recognizing people's identities mainly depend on appearance, and these appearance-based methods usually fail when a person changes clothes. We instead focus on recognizing people by their walking patterns, i.e., gait; it is also possible to combine walking/movement patterns with appearance to obtain improved results. For localizing generic video content, we design a matching framework that thoroughly compares a given query with a reference video to discover their semantic coherence. When no query video sample is available, a sentence can serve as the query; in this case, paired training data are usually needed, which limits generalization. To address this issue, we attempt to align sentences with visual content without using any paired data. A timeline for the future thesis work is included at the end of this proposal.
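As a minimal sketch of the query-by-video-sample idea (generic sliding-window baseline, not the proposal's actual matching framework), localization can be framed as scoring each temporal window of the reference video by embedding similarity to the query clip. The embed function here is a stand-in for a learned video encoder:

    import numpy as np

    def embed(clip: np.ndarray) -> np.ndarray:
        """Stand-in for a learned video encoder: mean-pool frame features."""
        v = clip.mean(axis=0)
        return v / (np.linalg.norm(v) + 1e-8)

    def localize(query: np.ndarray, reference: np.ndarray, window: int):
        """Slide a window over the reference video and return the start index
        of the segment whose embedding best matches the query clip."""
        q = embed(query)
        scores = [
            float(np.dot(q, embed(reference[t:t + window])))
            for t in range(len(reference) - window + 1)
        ]
        return int(np.argmax(scores)), max(scores)

    # Toy example: 100 "frames" of 64-d features, query hidden at frames 40-49.
    rng = np.random.default_rng(0)
    reference = rng.normal(size=(100, 64))
    query = reference[40:50] + 0.05 * rng.normal(size=(10, 64))
    print(localize(query, reference, window=10))  # -> (40, score near 1.0)

The sentence-query setting replaces embed(query) with a text encoder mapping into the same space, which is where the paired-data requirement discussed above arises.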

Advisor: Prof. Jiebo Luo (Computer Science)

Committee members: Prof. Daniel Gildea (Computer Science), Prof. Chenliang Xu (Computer Science), Dr. Lin Ma (Tencent AI Lab, Shenzhen)


January 18, 2019, 01:00 PM
Yu Kong: TBD

[Friday, January 18, 2019 at 1:00 PM in Wegmans Hall 2506] TBD