about image

Yong-Lu Li

Ph.D.

Email: yonglu_li[at]126[dot]com, yonglu_li[at]sjtu[dot]edu[dot]cn

[Google Scholar] [Github] [LinkedIn]

[ResearchGate] [dblp] [Semantic Scholar]

Hong Kong University of Science and Technology

Shanghai Jiao Tong University

About

I am currently working closely with Prof. Chi Keung Tang and Yu-Wing Tai at the Hong Kong University of Science and Technology (HKUST) as a postdoctoral fellow (2021-present). I received a Ph.D. degree (2017-2021) in Computer Science from Shanghai Jiao Tong University (SJTU), under the supervision of Prof. Cewu Lu, in the Machine Vision and Intelligence Group (MVIG). Prior to that, I worked and studied at the Institute of Automation, Chinese Academy of Sciences (CASIA) under the supervision of Prof. Yiping Yang and A/Prof. Yinghao Cai. My primary research interests are Machine Learning, Computer Vision, and Intelligent Robotics. We are now building HAKE, a knowledge-driven system that enables intelligent agents to perceive human activities, reason about human behavior logic, learn skills from human activities, and interact with objects and environments. Check out the HAKE site for more information.

Research interests:

(1) Embodied AI: how to enable agents to learn skills from humans and interact with them.

(2) Human Activity Understanding: how to learn and ground complex/ambiguous human activity concepts (body motion, human-object/human/scene interaction) and object concepts from multi-modal information (2D-3D-4D).

(3) Visual Reasoning: how to mine, capture, and embed the logic and causal relations underlying human activities.

(4) Activity Understanding from a Cognitive Perspective: working with multidisciplinary researchers to study how the brain perceives activities.

(5) General Visual Foundation Models: especially for human-centric perception tasks.

Recruitment: I am actively looking for self-motivated interns, researchers, and engineers (with a CV/ML/ROB/NLP background) to join our team (onsite or remote). If you share the same or similar interests, feel free to drop me an email with your resume.



News and Olds



2022.03: Five papers on HOI detection/prediction, trajectory prediction, and 3D detection/keypoints are accepted by CVPR'22; papers and code are coming soon.

2022.02: We release the human body part state labels based on AVA: HAKE-AVA and HAKE 2.0.

2021.12: Our work on HOI generalization will appear at AAAI'22.

2021.10: Received the Outstanding Reviewer Award from NeurIPS'21.

2021.10: Learning Single/Multi-Attribute of Object with Symmetry and Group is accepted by TPAMI!

2021.09: Our work Localization with Sampling-Argmax will appear at NeurIPS'21!

2021.05: Selected for the Chinese AI New Star Top-100 (Machine Learning).

2021.02: Upgraded HAKE-Activity2Vec is released! Images/Videos --> human box + ID + skeleton + part states + action + representation. [Description]

2021.01: TIN (Transferable Interactiveness Network) is accepted by TPAMI!

2021.01: Received the Baidu Scholarship (10 recipients globally).

2020.12: DecAug is accepted by AAAI'21.

2020.09: Our work HOI Analysis will appear at NeurIPS'20.

2020.07: Fortunate to receive the WAIC YunFan Award and to be part of the 2nd A-Class Project.

2020.06: The larger HAKE-Large (>120K images with activity and part state labels) is released!

2020.02: Three papers (Image-based HAKE: PaSta-Net, 2D-3D Joint HOI Learning, and Symmetry-based Attribute-Object Learning) are accepted by CVPR'20! Papers and corresponding resources (code, data) will be released soon.

2019.07: Our paper InstaBoost is accepted by ICCV'19.

2019.06: Part I of our HAKE, HAKE-HICO, which contains the image-level part-state annotations, is released!

2019.04: Our project HAKE: Human Activity Knowledge Engine begins trial operation!

2019.02: Our paper on Interactiveness is accepted by CVPR'19.

2018.07: Our paper on GAN & Annotation Generation is accepted by ECCV'18.

2018.05: Presentation (Kaibot Team) at the TIDY UP MY ROOM CHALLENGE, ICRA'18.

2018.02: Our paper on Object Part States is accepted by CVPR'18.




Publications





HAKE: A Knowledge Engine Foundation for Human Activity Understanding

Yong-Lu Li, Xinpeng Liu, Xiaoqian Wu, Yizhuo Li, Zuoyu Qiu, Liang Xu, Yue Xu, Hao-Shu Fang, Cewu Lu.

Preprint  HAKE 2.0 [arXiv] [PDF] [Project] [Press]

HAKE: Human Activity Knowledge Engine

Yong-Lu Li, Liang Xu, Xinpeng Liu, Xijie Huang, Yue Xu, Mingyang Chen, Ze Ma, Shiyi Wang, Hao-Shu Fang, Cewu Lu.

Preprint  HAKE 1.0 [arXiv] [PDF] [Project] [Code]

Main Repo

Sub-repos: Torch, TF, Halpe, List

Interactiveness Field of Human-Object Interactions

Xinpeng Liu*, Yong-Lu Li* (*=equal contribution), Xiaoqian Wu, Yu-Wing Tai, Cewu Lu, Chi Keung Tang.

CVPR 2022  [arXiv] [PDF] [Code]

Human Trajectory Prediction with Momentary Observation

Jianhua Sun, Yuxuan Li, Liang Chai, Hao-Shu Fang, Yong-Lu Li, Cewu Lu.

CVPR 2022  [arXiv] [PDF] [Code]

Learning to Anticipate Future with Dynamic Context Removal

Xinyu Xu, Yong-Lu Li, Cewu Lu.

CVPR 2022  [arXiv] [PDF] [Code]

Canonical Voting: Towards Robust Oriented Bounding Box Detection in 3D Scenes

Yang You, Zelin Ye, Yujing Lou, Chengkun Li, Yong-Lu Li, Lizhuang Ma, Weiming Wang, Cewu Lu.

CVPR 2022  [arXiv] [PDF] [Code]

UKPGAN: Unsupervised KeyPoint GANeration

Yang You, Wenhai Liu, Yong-Lu Li, Weiming Wang, Cewu Lu.

CVPR 2022  [arXiv] [PDF] [Code]

Highlighting Object Category Immunity for the Generalization of Human-Object Interaction Detection

Xinpeng Liu*, Yong-Lu Li*, Cewu Lu (*=equal contribution).

AAAI 2022  [arXiv] [PDF] [Code]

Learning Single/Multi-Attribute of Object with Symmetry and Group

Yong-Lu Li, Yue Xu, Xinyu Xu, Xiaohan Mao, Cewu Lu.

TPAMI 2021  [arXiv] [PDF] [Code]

An extension of our CVPR 2020 work (Symmetry and Group in Attribute-Object Compositions, SymNet).

Localization with Sampling-Argmax

Jiefeng Li, Tong Chen, Ruiqi Shi, Yujing Lou, Yong-Lu Li, Cewu Lu.

NeurIPS 2021  [arXiv] [PDF] [Code]

Transferable Interactiveness Knowledge for Human-Object Interaction Detection

Yong-Lu Li, Xinpeng Liu, Xiaoqian Wu, Xijie Huang, Liang Xu, Cewu Lu.

TPAMI 2021  [arXiv] [PDF] [Code]

An extension of our CVPR 2019 work (Transferable Interactiveness Network, TIN).

DecAug: Augmenting HOI Detection via Decomposition

Yichen Xie, Hao-Shu Fang, Dian Shao, Yong-Lu Li, Cewu Lu.

AAAI 2021  [arXiv] [PDF]

HOI Analysis: Integrating and Decomposing Human-Object Interaction

Yong-Lu Li*, Xinpeng Liu*, Xiaoqian Wu, Yizhuo Li, Cewu Lu (*=equal contribution).

NeurIPS 2020  [arXiv] [PDF] [Code] [Project: HAKE-Action-Torch]

PaStaNet: Toward Human Activity Knowledge Engine

Yong-Lu Li, Liang Xu, Xinpeng Liu, Xijie Huang, Yue Xu, Shiyi Wang, Hao-Shu Fang, Ze Ma, Mingyang Chen, Cewu Lu.

CVPR 2020  [arXiv] [PDF] [Video] [Slides] [Data] [Code]

Oral Talk at Compositionality in Computer Vision, CVPR 2020.

Detailed 2D-3D Joint Representation for Human-Object Interaction

Yong-Lu Li, Xinpeng Liu, Han Lu, Shiyi Wang, Junqi Liu, Jiefeng Li, Cewu Lu.

CVPR 2020  [arXiv] [PDF] [Video] [Slides] [Benchmark: Ambiguous-HOI] [Code]

Symmetry and Group in Attribute-Object Compositions

Yong-Lu Li, Yue Xu, Xiaohan Mao, Cewu Lu.

CVPR 2020  [arXiv] [PDF] [Video] [Slides] [Code]

InstaBoost: Boosting Instance Segmentation Via Probability Map Guided Copy-Pasting

Hao-Shu Fang*, Jianhua Sun*, Runzhong Wang*, Minghao Gou, Yong-Lu Li, Cewu Lu.

ICCV 2019  [arXiv] [PDF] [Code]

Transferable Interactiveness Knowledge for Human-Object Interaction Detection

Yong-Lu Li, Siyuan Zhou, Xijie Huang, Liang Xu, Ze Ma, Hao-Shu Fang, Yan-Feng Wang, Cewu Lu.

CVPR 2019  [arXiv] [PDF] [Code]

SRDA: Generating Instance Segmentation Annotation via Scanning, Reasoning and Domain Adaptation

Wenqiang Xu*, Yong-Lu Li*, Cewu Lu (*=equal contribution).

ECCV 2018  [arXiv] [PDF] [Dataset] (Instance-60k & 3D Object Models) [Code]

Beyond Holistic Object Recognition: Enriching Image Understanding with Part States

Cewu Lu, Hao Su, Yong-Lu Li, Yongyi Lu, Li Yi, Chi-Keung Tang, Leonidas J. Guibas.

CVPR 2018  [PDF]

Optimization of Radial Distortion Self-Calibration for Structure from Motion from Uncalibrated UAV Images

Yong-Lu Li, Yinghao Cai, Dayong Wen, Yiping Yang.

ICPR 2016  [PDF]



Projects





Contents:

1) HAKE-Image (CVPR'18/20): Human body part state (PaSta) labels in images. HAKE-HICO, HAKE-HICO-DET, HAKE-Large, Extra-40-verbs.

2) HAKE-AVA: Human body part state (PaSta) labels in videos from the AVA dataset. HAKE-AVA.

3) HAKE-Action-TF, HAKE-Action-Torch (CVPR'18/19/20, NeurIPS'20, TPAMI'21): SOTA action understanding methods and the corresponding HAKE-enhanced versions (TIN, IDN).

4) HAKE-3D (CVPR'20): 3D human-object representation for action understanding (DJ-RN).

5) HAKE-Object (CVPR'20, TPAMI'21): object knowledge learner to advance action understanding (SymNet).

6) HAKE-A2V (CVPR'20): Activity2Vec, a general activity feature extractor based on HAKE data; it converts a human (box) into a fixed-size vector, PaSta, and action scores.

7) Halpe: a joint project under Alphapose and HAKE, providing full-body human keypoints (body, face, hand; 136 points) for 50,000 HOI images.

8) HOI Learning List: a list of recent HOI (Human-Object Interaction) papers, code, datasets and leaderboard on widely-used benchmarks.

Transformer-in-Vision

Survey: recent Transformer-based CV and related works.


Public Services





Reviewer

  • Conference: CVPR'20/21/22, NeurIPS'20/21/22, ICCV'21, ICLR'22, ECCV'22, ICML'21/22, AAAI'21/22, ACCV'20, WACV'21/22.
  • Journal: TPAMI, Neurocomputing, JVCI, Science China Information Sciences.

Program Committee Member

  • Compositionality in Computer Vision, CVPR 2020.


Teaching





    Teaching Assistant

  • Problem Solving and Practice of Artificial Intelligence (the second Artificial Intelligence Class), Shanghai Jiao Tong University, 2020-2021 (Spring)

    Mentor

  • Problem Solving and Practice of Artificial Intelligence (the first Artificial Intelligence Class), Shanghai Jiao Tong University, 2019-2020 (Spring)

  • Guided Paper: Rb-PaStaNet: A Few-Shot Human-Object Interaction Detection Based on Rules and Part States

    Shenyu Zhang, Zichen Zhu, Qingquan Bao (freshmen)

    IMVIP 2020, Press Coverage: SEIEE of SJTU



Talks





  • 2021.12.9: Knowledge-driven Activity Understanding
  • CUMT "Image Analysis and Understanding" Frontier Forum. Thanks to Prof. Zhiwen Shao for the invitation.

  • 2021.8.21: HAKE and Human-Object Interaction (HOI) Detection
  • CoLab. Thanks to Prof. Si Liu for the invitation.

  • 2021.3.5: Human Activity Knowledge Engine (updated)
  • SJTU Computer Science Global Lunch Series. [Video]

  • 2020.8.23: Knowledge Driven Human Activity Understanding
  • The 3rd International Conference on Image, Video Processing and Artificial Intelligence (IVPAI 2020).

  • 2020.7.12: Human Activity Knowledge Engine
  • Student Forum on Frontiers of AI, SFFAI. [Slides]

  • 2020.6.16: PaStaNet: Toward Human Activity Knowledge Engine
  • Compositionality in Computer Vision, CVPR 2020 (virtual). [Video] [Slides]



Honors





  • Outstanding Reviewer Award, NeurIPS 2021, Oct. 2021.
  • Shanghai Outstanding Doctoral Graduate, Aug. 2021.
  • PhD Fellowship, The 85th Computer Department Education Development Fund and Yang Yuanqing Education Fund, Jun. 2021.
  • Chinese AI New Star Top-100 (Machine Learning), May 2021.
  • Baidu Scholarship (Top-10, worldwide), Jan. 2021. Press Coverage.
  • WAIC Outstanding Developer, Dec. 2020. Press Coverage: 机器之心, 上海临港
  • China National Scholarship, Sep. 2020.
  • WAIC YunFan Award, Rising Star, July 2020 (World Artificial Intelligence Conference, Shanghai). Press Coverage: 机器之心
  • The 2nd A-Class Project, July 2020.
  • Annual Scholarship, SJTU, Nov. 2019.


Personal Interests





  • Chinese History Reading
  • Twenty-Four Histories: 5/24

  • Travel around China
  • 34 Provinces: 20/34