I am a first-year Ph.D. student in the Department of Computer Science at UMD, co-advised by Professors Furong Huang and Jia-Bin Huang.
Prior to UMD, I received my Master's degree from the Department of Aerospace Engineering at SNU, advised by Prof. H. Jin Kim.
I also spent time at the Generalizable Robotics and AI Lab (GRAIL) at NYU, working closely with Prof. Lerrel Pinto.
I have worked on enhancing the data efficiency of Reinforcement Learning (RL) and Imitation Learning (IL) systems and on applying them to various decision-making scenarios, including real-world robots.
05/2024: A paper on multi-modal behavior generation for robot agents was accepted to ICML 2024 (Spotlight).
02/2024: Graduated from the master's program at Seoul National University (Aerospace Engineering).
12/2023: Presented two papers on curriculum learning for robot agents at NeurIPS 2023.
07/2023: Presented a paper on 3D representation learning for robot agents at ICML 2023.
05/2023: Presented a paper on exploration for RL at ICLR 2023 (Spotlight).
Research
My research interest is in understanding the interaction between agents and environments, and in devising data-efficient decision-making (or robot learning) algorithms, especially in the field of reinforcement learning (RL).
Robot Utility Models (RUMs) is a simple method for building zero-shot robot policies that can solve useful tasks in completely new homes, without any additional training, often at a 90%+ success rate.
Behavior Generation with Latent Actions
Seungjae Lee, Yibin Wang, Haritheja Etukuru, H. Jin Kim, Nur Muhammad Mahi Shafiullah, Lerrel Pinto
ICML 2024, Spotlight (top 3.5%)
+ RSS 2024 Workshop SemRob, "Oral spotlights"
+ ICML 2024 Workshop MFM-EAI, "Outstanding Paper Award - Winner"
In this work, we present Vector-Quantized Behavior Transformer (VQ-BeT), a versatile model for behavior generation that handles multimodal action prediction, conditional generation, and partial observations. Across seven environments including simulated manipulation, autonomous driving, and robotics, VQ-BeT improves on state-of-the-art models such as BeT and Diffusion Policies.
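For intuition, the core mechanism can be sketched as follows: continuous actions are discretized with a vector quantizer, and the policy then predicts a discrete action code plus a small continuous offset, which is what lets it capture multimodal action distributions. The PyTorch snippet below is a simplified, single-level illustration of that idea, not the released VQ-BeT implementation; names such as ActionVQ and n_codes are placeholders (the actual method uses a residual, multi-level quantizer and a GPT-style trunk).

```python
import torch
import torch.nn as nn

class ActionVQ(nn.Module):
    """Minimal single-level vector quantizer for continuous actions.

    VQ-BeT uses a residual (multi-level) quantizer; this sketch keeps one
    level to show the core idea: snap each action to its nearest codebook entry.
    """
    def __init__(self, action_dim: int = 7, n_codes: int = 32):
        super().__init__()
        self.codebook = nn.Embedding(n_codes, action_dim)

    def forward(self, actions: torch.Tensor):
        # actions: (batch, action_dim)
        dists = torch.cdist(actions, self.codebook.weight)  # (batch, n_codes)
        codes = dists.argmin(dim=-1)                         # discrete action tokens
        quantized = self.codebook(codes)                     # nearest codebook vectors
        offsets = actions - quantized                        # small residual to regress
        return codes, quantized, offsets

# A policy head would then classify the next code from observation features
# and regress the continuous offset, recovering multimodal actions.
vq = ActionVQ()
codes, quantized, offsets = vq(torch.randn(4, 7))
```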
Previous approaches often face challenges when generating curriculum goals in a high-dimensional space. To alleviate this, we propose a novel curriculum method that automatically defines a semantic goal space for the agent and suggests curriculum goals over it.
Unlike previous curriculum learning methods, D2C requires only a few examples of desired outcomes and works in any environment, regardless of its geometry or the distribution of the desired outcome examples.
We present Semantic-aware Neural Radiance Fields for Reinforcement Learning (SNeRL), which jointly optimizes a semantic-aware neural radiance field (NeRF) with a convolutional encoder to learn a 3D-aware neural implicit representation from multi-view images.
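As a rough sketch of the joint objective (my simplified reading, not the paper's exact losses): a convolutional encoder produces a latent that conditions the radiance field, and the field is supervised to render both RGB and semantic labels, so the encoder's latent, later reused as the RL state representation, becomes 3D- and semantics-aware. The names below (ConvEncoder, joint_rendering_loss, sem_weight) are hypothetical placeholders.

```python
import torch.nn as nn
import torch.nn.functional as F

class ConvEncoder(nn.Module):
    """Encoder whose latent conditions the radiance field and later serves
    as the state representation for the downstream RL agent."""
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, latent_dim),
        )

    def forward(self, image):
        return self.net(image)

def joint_rendering_loss(rgb_pred, rgb_gt, sem_logits, sem_gt, sem_weight=0.1):
    """Photometric reconstruction plus semantic-segmentation supervision,
    both rendered from the latent-conditioned radiance field."""
    return F.mse_loss(rgb_pred, rgb_gt) + sem_weight * F.cross_entropy(sem_logits, sem_gt)
```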
We propose an uncertainty- and temporal-distance-aware curriculum goal generation method for outcome-directed RL, formulated as a bipartite matching problem. It provides precisely calibrated curriculum guidance toward the desired outcome states.
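The matching step can be illustrated with a small sketch: candidate curriculum goals are assigned to desired outcome examples by solving a linear assignment problem whose costs combine an uncertainty term with an estimated temporal distance. The weighting and function names below (uncertainty, temporal_distance, match_goals, alpha) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_goals(candidates, desired, uncertainty, temporal_distance, alpha=1.0):
    """Assign candidate curriculum goals to desired outcome examples.

    cost[i, j] trades off how uncertain the agent is about candidate i
    (favouring the frontier of what it can currently reach) against how
    temporally far candidate i is from desired outcome j.
    """
    cost = np.zeros((len(candidates), len(desired)))
    for i, g in enumerate(candidates):
        for j, d in enumerate(desired):
            cost[i, j] = -alpha * uncertainty(g) + temporal_distance(g, d)
    rows, _ = linear_sum_assignment(cost)
    # The selected candidates form the next curriculum batch.
    return [candidates[i] for i in rows]

# Toy usage with placeholder scores:
goals = match_goals(
    candidates=[np.array([0.1, 0.2]), np.array([0.5, 0.5]), np.array([0.9, 0.1])],
    desired=[np.array([1.0, 0.0])],
    uncertainty=lambda g: float(np.var(g)),
    temporal_distance=lambda g, d: float(np.linalg.norm(g - d)),
)
```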
We propose an end-to-end missile guidance algorithm that operates on raw infrared image pixels, trained by imitating a conventional guidance law that leverages privileged data.
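Conceptually this is teacher-student imitation: a conventional guidance law (for example, proportional navigation) computes commands from privileged quantities such as the line-of-sight rate, and a CNN student is regressed onto those commands from raw infrared frames alone. The sketch below illustrates one behavior-cloning step under those assumptions; the names (GuidanceCNN, pn_teacher) are hypothetical and this is not the paper's implementation.

```python
import torch
import torch.nn as nn

class GuidanceCNN(nn.Module):
    """Student: maps a raw infrared frame to a guidance (acceleration) command."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1),
        )

    def forward(self, ir_frame):
        return self.net(ir_frame)

def pn_teacher(los_rate, closing_speed, nav_gain=3.0):
    """Teacher: proportional navigation using privileged state (not images)."""
    return nav_gain * closing_speed * los_rate

# One behavior-cloning step: regress the student's command onto the teacher's.
student = GuidanceCNN()
opt = torch.optim.Adam(student.parameters(), lr=1e-4)
frames = torch.randn(8, 1, 64, 64)                       # batch of infrared frames
teacher_cmd = pn_teacher(torch.randn(8, 1), torch.full((8, 1), 300.0))
loss = nn.functional.mse_loss(student(frames), teacher_cmd)
opt.zero_grad(); loss.backward(); opt.step()
```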