Jinfa Huang
Currently, I am a first-year Ph.D. student in the Department of Computer Science at the University of Rochester (UR), advised by Prof. Jiebo Luo.
I aim to build multimodal interactive AI systems that not only ground and reason over external-world signals to understand human language, but also assist humans in decision-making and in efficiently addressing societal concerns, e.g., through robotics.
As steps toward this goal, my research interests include, but are not limited to, Multimodal Large Language Models and Video-Language Alignment.
Prior to that, I received my master's degree from Peking University (PKU) in 2023, advised by Prof. Li Yuan and Prof. Jie Chen.
I obtained my bachelor's degree with honors from the University of Electronic Science and Technology of China (UESTC) in 2020.
(Note: I am actively seeking a summer internship in the U.S. for 2024. Feel free to connect with me! :)
Email / Google Scholar / GitHub
News
[2023/09] Joined the VIStA Lab as a Ph.D. student working on vision and language.
[2023/07] One paper accepted by ACM MM 2023.
[2023/05] Awarded the 2023 Peking University Excellent Graduation Thesis.
[2023/04] One paper accepted by TIP 2023.
[2023/04] One paper accepted by IJCAI 2023.
[2023/02] One paper (Top 10% Highlight) accepted by CVPR 2023.
[2022/09] One paper accepted by ICRA 2023.
[2022/09] One paper (Spotlight) accepted by NeurIPS 2022.
Education
University of Rochester (UR), USA
Ph.D. Student in Computer Science • Sep. 2023 - Present
Advisor: Prof. Jiebo Luo
Peking University (PKU), China
MPhil Student in Computer Science • Sep. 2020 - Jun. 2023
Advisors: Prof. Li Yuan and Prof. Jie Chen
University of Electronic Science and Technology of China (UESTC), China
Bachelor's Degree in Software Engineering • Sep. 2016 - Jun. 2020
Selected Publications [Google Scholar]
My current research mainly focuses on vision+language. *Equal Contribution.
Representative works are highlighted.
Video-Text as Game Players: Hierarchical Banzhaf Interaction for Cross-Modal Representation Learning
Peng Jin,
Jinfa Huang,
Pengfei Xiong,
Shangxuan Tian,
Chang Liu,
Xiangyang Ji,
Li Yuan,
Jie Chen
IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023
(Highlight Presentation, Top 2.5%)
[Paperlink], [Code]
Area: Video-and-Language Representation, Game Theory, Video-Text Retrieval
In this paper, we model video and text as game players and use multivariate cooperative game theory to handle the uncertainty in fine-grained semantic interaction, which comes with diverse granularity, flexible combination, and vague intensity.
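For intuition, here is a minimal, self-contained sketch (not the paper's implementation) of the Banzhaf interaction index the method builds on: the synergy between two players i and j, averaged over all coalitions of the remaining players. In the paper the players would be video frames and text words with a payoff derived from cross-modal similarity; the value function v below is a hypothetical toy payoff.

```python
from itertools import combinations

def banzhaf_interaction(players, i, j, v):
    """Banzhaf interaction index I([i, j]): the average over all
    coalitions T (excluding i and j) of the pair's joint synergy
    v(T+{i,j}) - v(T+{i}) - v(T+{j}) + v(T)."""
    others = [p for p in players if p not in (i, j)]
    total = 0.0
    for r in range(len(others) + 1):            # enumerate every coalition T
        for T in combinations(others, r):
            T = set(T)
            total += v(T | {i, j}) - v(T | {i}) - v(T | {j}) + v(T)
    return total / 2 ** len(others)

# Toy payoff (an assumption for illustration): players 0 and 1 earn a
# bonus only when they cooperate, so their interaction should be positive.
def v(S):
    return len(S) + (2.0 if {0, 1} <= S else 0.0)

players = [0, 1, 2, 3]
print(banzhaf_interaction(players, 0, 1, v))    # 2.0: synergistic pair
print(banzhaf_interaction(players, 0, 2, v))    # 0.0: no extra synergy
```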
Expectation-Maximization Contrastive Learning for Compact Video-and-Language Representations
Peng Jin*,
Jinfa Huang*,
Fenglin Liu,
Xian Wu,
Shen Ge,
Guoli Song,
David A. Clifton,
Jie Chen
Conference on Neural Information Processing Systems, NeurIPS 2022
(Spotlight Presentation, Top 5%)
[Paperlink], [Code]
Area: Video-and-Language Representation, Machine Learning, Video-Text Retrieval, Video Captioning
To address the modality gap in the video-text feature space, we propose Expectation-Maximization Contrastive Learning (EMCL) to learn compact video-and-language representations. We use the Expectation-Maximization algorithm to find a compact set of bases for the latent space, so that features can be concisely represented as linear combinations of these bases.
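As a rough illustration of this idea, here is a simplified NumPy sketch under my own assumptions (soft assignments as the E-step, responsibility-weighted basis updates as the M-step); the actual EMCL operates on video and text features jointly inside a contrastive objective, which is omitted here.

```python
import numpy as np

def em_bases(X, K=8, iters=10, tau=0.05):
    """EM-style search for K bases of feature matrix X (N x D); returns
    each feature re-expressed as a linear combination of the bases."""
    rng = np.random.default_rng(0)
    mu = X[rng.choice(len(X), K, replace=False)]       # init bases from data
    for _ in range(iters):
        # E-step: soft responsibility of each basis for each feature
        logits = X @ mu.T / tau                        # (N, K) similarities
        r = np.exp(logits - logits.max(1, keepdims=True))
        r /= r.sum(1, keepdims=True)
        # M-step: move each basis to its responsibility-weighted mean
        mu = (r.T @ X) / (r.sum(0)[:, None] + 1e-8)
        mu /= np.linalg.norm(mu, axis=1, keepdims=True) + 1e-8
    return r @ mu, mu                                  # compact features, bases

# Usage: compress 256 unit-norm 64-dim features onto 8 shared bases.
X = np.random.default_rng(1).standard_normal((256, 64))
X /= np.linalg.norm(X, axis=1, keepdims=True)
X_compact, bases = em_bases(X)
print(X_compact.shape, bases.shape)                    # (256, 64) (8, 64)
```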
Weakly-Supervised 3D Spatial Reasoning for Text-based Visual Question Answering
Hao Li,
Jinfa Huang,
Peng Jin,
Guoli Song,
Qi Wu,
Jie Chen
IEEE Transactions on Image Processing, TIP 2023
[Paperlink]
Area: 3D Spatial Reasoning, Text-based Visual Question Answering
Existing approaches are constrained to the 2D spatial information learned from input images and rely on transformer-based architectures to reason implicitly during the fusion process.
Since spatial reasoning between texts and objects is crucial in TextVQA, we introduce 3D geometric information into a human-like spatial reasoning process to capture the contextual knowledge of key objects step by step.
Cross-Modality Time-Variant Relation Learning for Generating Dynamic Scene Graphs
Jingyi Wang,
Jinfa Huang,
Can Zhang,
Zhidong Deng
IEEE International Conference on Robotics and Automation, ICRA 2023
[Paperlink], [Code]
Area: Cross-Modal Representation, Scene Graph Generation
In dynamic scene graph generation, jointly modeling temporal and spatial structure makes it particularly hard to learn the time-variant relations among frames.
In this paper, we propose a Time-variant Relation-aware TRansformer, which models the temporal change of relations in dynamic scene graphs.
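As a toy stand-in (my own minimal sketch, not the paper's actual architecture), the temporal change of a subject-object pair's relation can be modeled by a transformer encoder over its per-frame relation features with learned temporal embeddings:

```python
import torch
import torch.nn as nn

class TemporalRelationEncoder(nn.Module):
    """Toy stand-in: refine per-frame relation features across time
    with a transformer encoder plus learned temporal embeddings."""
    def __init__(self, dim=256, heads=4, layers=2, max_t=64):
        super().__init__()
        self.time_emb = nn.Embedding(max_t, dim)
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, layers)

    def forward(self, rel_feats):                # (pairs, T, dim)
        T = rel_feats.size(1)
        t = torch.arange(T, device=rel_feats.device)
        x = rel_feats + self.time_emb(t)         # add temporal position
        return self.encoder(x)                   # time-aware relation feats

# Three subject-object pairs tracked over 16 frames, 256-dim features.
feats = torch.randn(3, 16, 256)
out = TemporalRelationEncoder()(feats)
print(out.shape)                                 # torch.Size([3, 16, 256])
```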
Selected Honors & Scholarships
Peking University Excellent Graduation Thesis (Top 10%), PKU 2023
Outstanding Graduate of University of Electronic Science and Technology of China (UESTC), 2020
Selected entrant for Deepcamp 2020 (200 people worldwide), 2020
Outstanding Camper of Tencent Rhino Bird Elite Research Camp (24 people worldwide), 2020
Selected entrant for Google Machine Learning Winter Camp 2019 (100 people worldwide), 2019
China Collegiate Programming Contest (ACM-CCPC), Jilin, Bronze, 2018
National Inspirational Scholarship, 2018
Outstanding Student Scholarship (Top 10% of students), UESTC 2017~2019
Academic Service
PC Member: CVPR'23, NeurIPS'23, ICLR'23, ICCV'23
Journal Reviewer: IEEE TCSVT
Personal Interests
Anime: In my spare time, I watch a lot of Japanese anime about romance, sports, and sci-fi.
Literature: My favorite writer is Xiaobo Wang, whose wisdom about life inspires me. My favorite philosopher is Friedrich Wilhelm Nietzsche; I am grateful that his philosophy has accompanied me through many difficult times in my life.
Last updated in Oct. 2023
This awesome template is borrowed from this good man~