Seungjae Lee

I am currently an assistant researcher in the Generalizable Robotics and AI Lab (GRAIL) at NYU, advised by Prof. Lerrel Pinto. I will begin my Ph.D. in the Department of Computer Science at UMD in Fall 2024, co-advised by Prof. Furong Huang and Prof. Jia-Bin Huang.

Prior to UMD, I received my Master's degree from the Department of Aerospace Engineering at SNU, advised by Prof. H. Jin Kim. I worked on enhancing the data efficiency of Reinforcement Learning (RL) and Imitation Learning (IL) systems and applied them to various decision-making scenarios, including real-world robots.

Before that, I received my Bachelor's degree in Mechanical and Aerospace Engineering from SNU.

Email  /  CV  /  Google Scholar  /  Twitter  /  Github

profile photo

News

  • 05/2024: A paper on multi-modal behavior generation for robot agents was accepted to ICML 2024.
  • 12/2023: Presented two papers on curriculum learning for robot agents at NeurIPS 2023.
  • 07/2023: Joined Prof. Lerrel Pinto's lab at NYU as an assistant researcher.
  • 07/2023: Presented a paper on 3D representation learning for robot agents at ICML 2023.
  • 05/2023: Presented a paper on exploration for RL at ICLR 2023 (Spotlight).
  • 02/2023: Graduated from the master's program at Seoul National University (Aerospace Engineering).
  • 12/2022: Presented a paper on hierarchical RL at NeurIPS 2022 (Oral presentation).

Research

My research interests lie in understanding the interaction between agents and environments and in devising data-efficient decision-making (or robot learning) algorithms, especially in the field of reinforcement learning (RL). Representative papers are highlighted.

Behavior Generation with Latent Actions
Seungjae Lee, Yibin Wang, Haritheja Etukuru, H. Jin Kim, Nur Muhammad Mahi Shafiullah, Lerrel Pinto
ICML, 2024
project website / github

In this work, we present Vector-Quantized Behavior Transformer (VQ-BeT), a versatile model for behavior generation that handles multimodal action prediction, conditional generation, and partial observations. Across seven environments including simulated manipulation, autonomous driving, and robotics, VQ-BeT improves on state-of-the-art models such as BeT and Diffusion Policies.

DHRL: A Graph-Based Approach for Long-Horizon and Sparse Hierarchical Reinforcement Learning
Seungjae Lee, Jigang Kim, Inkyu Jang, H. Jin Kim
NeurIPS, 2022   [Oral Presentation]
arXiv / github

DHRL provides a freely stretchable high-level action interval, which facilitates longer temporal abstraction and faster training in complex tasks.

Outcome-directed Reinforcement Learning by Uncertainty & Temporal Distance-Aware Curriculum Goal Generation
Daesol Cho*, Seungjae Lee*, H. Jin Kim (*equal contribution)
ICLR, 2023   [Spotlight]
arXiv / github

We propose an uncertainty and temporal distance-aware curriculum goal generation method for outcome-directed RL by solving a bipartite matching problem. It provides precisely calibrated curriculum guidance toward the desired outcome states.

SNeRL: Semantic-aware Neural Radiance Fields for Reinforcement Learning
Dongseok Shim*, Seungjae Lee*, H. Jin Kim (*equal contribution)
ICML, 2023
arXiv / github

We present Semantic-aware Neural Radiance Fields for Reinforcement Learning (SNeRL), which jointly optimizes semantic-aware neural radiance fields (NeRF) with a convolutional encoder to learn 3D-aware neural implicit representation from multi-view images.

CQM: Curriculum Reinforcement Learning with a Quantized World Model
Seungjae Lee, Daesol Cho, Jonghae Park, H. Jin Kim
NeurIPS, 2023
arXiv

Previous approaches often face challenges when generating curriculum goals in a high-dimensional space. To alleviate this, we propose a novel curriculum method that automatically defines the semantic goal space and suggests curriculum goals over it.

Diversify & Conquer: Outcome-directed Curriculum RL via Out-of-Distribution Disagreement
Daesol Cho, Seungjae Lee, H. Jin Kim
NeurIPS, 2023
arXiv

Unlike previous curriculum learning methods, D2C (Diversify & Conquer) requires only a few examples of desired outcomes and works in any environment, regardless of its geometry or the distribution of the desired outcome examples.

Deep End-to-End Imitation Learning for Missile Guidance with Infrared Images
Seungjae Lee, Jongho Shin, Hyeong-Geun Kim, Daesol Cho, H. Jin Kim
IJCAS, 2023

We propose an end-to-end missile guidance algorithm that operates on raw infrared image pixels by imitating a conventional guidance law that leverages privileged data.

Robust and Recursively Feasible Real-Time Trajectory Planning in Unknown Environments
Inkyu Jang, Dongjae Lee, Seungjae Lee, H. Jin Kim
IROS, 2021
arXiv

We propose a real-time robust planner that recursively guarantees persistent feasibility without any need for braking.

Projects

Training Excavator Virtual Driver based on Inverse RL

Co-work with HD Hyundai Heavy Industries Co., Ltd.
Apr. 2023 - Present

End-to-End Machine Learning Based Guidance Research
Co-work with a Korean national research institute
May 2021 - Apr. 2023

Awards and Achievements

  • [Awards] Graduated Summa Cum Laude, Seoul National University (1st prize in Department of Aerospace Engineering)
  • [Scholarship] Hyundai Motor Chung Mong-Koo Foundation
  • [Awards] NeurIPS Scholar Award
  • [Awards] Global Excellence Scholarship 2022, Hyundai Motor Chung Mong-Koo Foundation
  • [Awards] Best poster competition, SNU Artificial Intelligence Institute Spring Retreat
  • [Awards] Global Excellence Scholarship 2023, Hyundai Motor Chung Mong-Koo Foundation

Academic Services

  • Conference reviewer for ICML'22
  • Conference reviewer for IROS'23
  • Conference reviewer for NeurIPS'23
  • Conference reviewer for ICLR'24

Professional Experience

  • [Internship] Samsung Electronics, Deep Learning Algorithm Team / Device Solutions (DS), 2020.7 - 2020.9
  • [Research Group] Deepest. (SNU Deep Learning Society), 2020.9 - 2022.2
  • [Collaborative Research] Co-work with Generalizable Robotics and AI Lab (Prof. Lerrel Pinto) at New York University, 2023.7 - present

Source code credit to Dr. Jon Barron