Hello! I am a researcher working with Professor Honglak Lee at the LG AI Research Center in Ann Arbor. Before that, I received my PhD in Computer Science and Engineering at Seoul National University, where I was a member of the Vision & Learning Lab.

My research interests mainly lie in LLM- and VLM-based agents for decision-making in real-world tasks, leveraging large-scale demonstration datasets and the prior knowledge embedded in foundation models.


Publications

Small Language Models Need Strong Verifiers to Self-Correct Reasoning
Yunxiang Zhang, Muhammad Khalifa, Lajanugen Logeswaran, Jaekyeom Kim, Moontae Lee, Honglak Lee, Lu Wang
arXiv 2024
AutoGuide: Automated Generation and Selection of State-Aware Guidelines for Large Language Model Agents
Yao Fu, Dong-Ki Kim, Jaekyeom Kim, Sungryull Sohn, Lajanugen Logeswaran, Kyunghoon Bae, Honglak Lee
arXiv 2024
Constrained GPI for Zero-Shot Transfer in Reinforcement Learning
Jaekyeom Kim, Seohong Park, Gunhee Kim
NeurIPS 2022
[paper] [arxiv] [talk] [code]
Lipschitz-constrained Unsupervised Skill Discovery
Seohong Park, Jongwook Choi*, Jaekyeom Kim*, Honglak Lee, Gunhee Kim (*equal contribution)
ICLR 2022
[paper] [arxiv] [project] [code]
Time Discretization-Invariant Safe Action Repetition for Policy Gradient Methods
Seohong Park, Jaekyeom Kim, Gunhee Kim
NeurIPS 2021
[paper] [appx] [arxiv] [talk] [code]
Unsupervised Skill Discovery with Bottleneck Option Learning
Jaekyeom Kim*, Seohong Park*, Gunhee Kim (*equal contribution)
ICML 2021
[paper] [appx] [arxiv] [talk] [code]
Drop-Bottleneck: Learning Discrete Compressed Representation for Noise-Robust Exploration
Jaekyeom Kim, Minjung Kim, Dongyeon Woo, Gunhee Kim
ICLR 2021
[paper] [arxiv] [talk] [code]
Model-Agnostic Boundary-Adversarial Sampling for Test-Time Generalization in Few-Shot Learning
Jaekyeom Kim, Hyoungseok Kim, Gunhee Kim
ECCV 2020 (Oral)
[paper] [appx] [talk] [code]
EMI: Exploration with Mutual Information
Hyoungseok Kim*, Jaekyeom Kim*, Yeonwoo Jeong, Sergey Levine, Hyun Oh Song (*equal contribution)
ICML 2019 (Long talk)
[paper] [supp] [arxiv] [talk] [code]

Honors & Awards