Chenliang Xu


Learning Dynamics and Evolution towards
Cognitive Understanding of Videos


PI: Chenliang Xu, Co-PI: Jiebo Luo
Graduate Students: Jing Shi, Hao Huang, Wei Zhang, Shaojie Wang, Wentian Zhao, Tushar Kumar, Jie Chen, Songyang Zhang, Weijian Li, Yang Feng
Undergraduate Students: Ariel Tello
Award Number: NSF IIS 1813709
Award Title: RI: Small: Learning Dynamics and Evolution towards Cognitive Understanding of Videos
Award Amount: $465,990.00
Duration: September 1, 2018 to August 31, 2021 (Estimated)

Overview of Goals and Challenges:

A fundamental capability of human intelligence is the ability to learn to act by watching instructional videos. This capability is reflected in the abstraction and summarization of instructional procedures, as well as in answering questions such as "why" and "how" something happened in the video. This project aims to build computational models that perform well on the above tasks, which require, beyond the conventional recognition of objects, actions, and attributes in the scene, higher-order inference about the relations among them. Here, higher-order inference refers to inference that cannot be answered immediately from direct observations and thus requires stronger semantics. The developed technology will enable many applications in other fields, e.g., multimedia (video indexing and retrieval), robotics (reasoning about why and how questions), and healthcare (assistive devices for visually impaired people). In addition, the project will contribute to education and diversity by involving underrepresented groups in research activities, integrating research results into the teaching curriculum, and conducting outreach activities in local K-12 communities.

The research will develop a framework to perform higher-order inference in understanding web instructional videos, such that models devised in this framework are capable not only of discovering and captioning the procedures that constitute an instructional event but also of answering questions such as why and how something happened. The framework is built on a video story graph that models the dynamics (the composition of actions at different scales) and the evolution (the change in object states and attributes) of the video, and it supports higher-order inference upon deep learning units and the incorporation of an external knowledge graph in a unified framework. Methodologies to extract such video story graphs and to use them to discover and caption procedures and to perform question answering will be explored. Expected outcomes of this project include: a software package for constructing and performing inference on video story graphs and incorporating external knowledge; a web-deployed system to process user-uploaded instructional videos; and a large video dataset with procedure and question-answering annotations.
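
To make the notion of a video story graph concrete, below is a minimal illustrative sketch in Python. It is only an assumption about how such a graph might be organized, not the project's released software: segment nodes carry captions and object states, and typed edges encode temporal composition (dynamics) and object-state change (evolution). All class and method names here (SegmentNode, StoryGraph, why) are hypothetical.

```python
# Hypothetical sketch of a video story graph; not the project's actual implementation.
# Nodes are procedure segments; typed edges encode temporal composition ("next")
# and object-state change ("state_change").
from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class SegmentNode:
    segment_id: str
    start: float                 # segment start time (seconds)
    end: float                   # segment end time (seconds)
    caption: str                 # e.g., "toast the bread"
    object_states: Dict[str, str] = field(default_factory=dict)  # object -> state


@dataclass
class StoryGraph:
    nodes: Dict[str, SegmentNode] = field(default_factory=dict)
    edges: List[Tuple[str, str, str]] = field(default_factory=list)  # (src, dst, edge_type)

    def add_node(self, node: SegmentNode) -> None:
        self.nodes[node.segment_id] = node

    def add_edge(self, src: str, dst: str, edge_type: str) -> None:
        self.edges.append((src, dst, edge_type))

    def why(self, segment_id: str) -> List[str]:
        """Toy 'why' query: captions of segments whose outcome this segment depends on."""
        preds = {s for s, d, _ in self.edges if d == segment_id}
        return [self.nodes[p].caption for p in preds]


# Usage: two steps of a sandwich-making video linked by dynamics and evolution edges.
g = StoryGraph()
g.add_node(SegmentNode("s1", 5.0, 12.0, "toast the bread", {"bread": "toasted"}))
g.add_node(SegmentNode("s2", 12.0, 20.0, "spread mayo on the bread", {"bread": "with mayo"}))
g.add_edge("s1", "s2", "next")          # temporal order (dynamics)
g.add_edge("s1", "s2", "state_change")  # toasted bread feeds the next step (evolution)
print(g.why("s2"))                      # -> ['toast the bread']
```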

Current Results:

Data, demos, and software are available for download by following the individual tasks below.


Broader Impacts:

The proposed project will make a breakthrough in understanding web instructional videos through computational modeling of the higher-order inference capability of human intelligence. In particular, it aims to create a sound framework that combines the structured reasoning capability of graph theory with the superior feature-learning capability of deep neural networks, e.g., the preliminary dynamic graph module we developed. Furthermore, the project is expected not only to achieve better performance on video captioning and QA tasks but also to open up the black box of the decision-making process in deep learning. For example, our work on interpretable and controllable audio-visual captioning raises many interesting scientific inquiries. Last but not least, the optimization problems associated with video story graph learning and inference are non-convex and high-dimensional, requiring novel advances in the associated mathematics.

The proposed project is also of keen interest to other research communities. Mission-critical applications such as autonomous driving and security surveillance need robust reasoning capabilities to operate in real-world scenarios. Researchers in multimedia are interested in indexing and retrieving information from large-scale Internet videos. Researchers in data mining and social media analysis are interested in analyzing user behaviors from images and videos uploaded by users to social media sites. In all of these cases, the proposed work provides a fundamental enabling technique. Methodologies and techniques developed in this project can also be applied to human-computer interaction and collaborative robots.

Publications from the Team:

  1. How to make a BLT sandwich? Learning VQA towards understanding web instructional videos. S. Wang, W. Zhao, Z. Kou, J. Shi, and C. Xu. IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2021.
  2. Unified multisensory perception: Weakly-supervised audio-visual video parsing. Y. Tian, D. Li, and C. Xu. European Conference on Computer Vision (ECCV), 2020.
  3. Deep grouping model for unified perceptual parsing. Z. Li, W. Bao, J. Zheng, and C. Xu. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
  4. Learning a weakly-supervised video actor-action segmentation model with a wise selection. J. Chen, Z. Li, J. Luo, and C. Xu. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
  5. Weakly-supervised audio-visual video parsing toward unified multisensory perception. Y. Tian, D. Li, and C. Xu. IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2020.
  6. Learning from interventions with hierarchical policies for safe learning. J. Bi, V. Dhiman, T. Xiao, and C. Xu. AAAI Conference on Artificial Intelligence (AAAI), 2020.
  7. Learning 2D temporal adjacent networks for moment localization with natural language. S. Zhang, H. Peng, J. Fu, and J. Luo. AAAI Conference on Artificial Intelligence (AAAI), 2020.
  8. Dynamic graph modules for modeling object-object interactions in activity recognition. H. Huang, L. Zhou, W. Zhang, J. J. Corso, and C. Xu. British Machine Vision Conference (BMVC), 2019.
  9. Not all frames are equal: Weakly-supervised video grounding with contextual similarity and visual clustering losses. J. Shi, J. Xu, B. Gong, and C. Xu. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
  10. GAN-EM: GAN based EM learning framework. W. Zhao, S. Wang, Z. Xie, J. Shi, and C. Xu. International Joint Conference on Artificial Intelligence (IJCAI), 2019.
  11. Audio-visual interpretable and controllable video captioning. Y. Tian, C. Guan, J. Goodman, M. Moore, and C. Xu. IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2019.
  12. Audio-visual event localization in the wild. Y. Tian, J. Shi, B. Li, Z. Duan, and C. Xu. IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2019.
  13. A fast and accurate one-stage approach to visual grounding. Z. Yang, B. Gong, L. Wang, W. Huang, D. Yu, and J. Luo. IEEE/CVF International Conference on Computer Vision (ICCV), 2019.
  14. Exploiting temporal relationships in video moment localization with natural language. S. Zhang, J. Su, and J. Luo. ACM Multimedia Conference (ACMMM), 2019.
  15. Unsupervised image captioning. Y. Feng, L. Ma, W. Liu, and J. Luo. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
  16. Spatio-temporal video re-localization by warp LSTM. Y. Feng, L. Ma, W. Liu, and J. Luo. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
  17. Attentive relational networks for mapping images to scene graphs. M. Qi, W. Li, Z. Yang, Y. Wang, and J. Luo. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
  18. Weakly-supervised action segmentation with iterative soft boundary assignment. L. Ding and C. Xu. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
  19. Towards automatic learning of procedures from web instructional videos. L. Zhou, C. Xu and J. J. Corso. AAAI Conference on Artificial Intelligence (AAAI), 2018.
Acknowledgements: This material is based upon work supported by the National Science Foundation under Grant No. 1813709.
Disclaimer: Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
Point of Contact: Chenliang Xu
Date of Last Update: August 2019