Model-based Reinforcement Learning for Parameterized Action Spaces

Published in Forty-first International Conference on Machine Learning (ICML), 2024

Recommended citation: Zhang, R., Fu, H., Miao, Y., & Konidaris, G. (2024). Model-based Reinforcement Learning for Parameterized Action Spaces. arXiv preprint arXiv:2404.03037. https://arxiv.org/abs/2404.03037


We propose a novel model-based reinforcement learning algorithm, Dynamics Learning and predictive control with Parameterized Actions (DLPA), for Parameterized Action Markov Decision Processes (PAMDPs). The agent learns a parameterized-action-conditioned dynamics model and plans with a modified Model Predictive Path Integral (MPPI) control. Through the lens of Lipschitz continuity, we theoretically quantify the gap between the trajectory generated during planning and the optimal trajectory, in terms of the values they achieve. Our empirical results on several standard benchmarks show that our algorithm achieves superior sample efficiency and asymptotic performance compared to state-of-the-art PAMDP methods.
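To illustrate the planning step, here is a minimal MPPI-style sketch over a parameterized action space. Everything in it is an illustrative stand-in rather than the paper's implementation: the toy 1D dynamics and reward play the role of the learned dynamics model, and the action space is assumed to have one binary discrete action with one continuous parameter.

```python
import numpy as np

rng = np.random.default_rng(0)

def dynamics(x, k, z):
    # Toy known dynamics standing in for DLPA's learned model:
    # discrete action k picks a direction, continuous parameter z in [0, 1]
    # scales the step size.
    return x + (1.0 if k == 1 else -1.0) * z

def reward(x, goal=3.0):
    # Illustrative reward: negative distance to a goal position.
    return -abs(x - goal)

def mppi_parameterized(x0, horizon=5, n_samples=256, temperature=1.0):
    # Sample candidate sequences of (discrete action, continuous parameter).
    ks = rng.integers(0, 2, size=(n_samples, horizon))
    zs = rng.uniform(0.0, 1.0, size=(n_samples, horizon))

    # Roll each candidate sequence through the dynamics model.
    returns = np.zeros(n_samples)
    for i in range(n_samples):
        x = x0
        for t in range(horizon):
            x = dynamics(x, ks[i, t], zs[i, t])
            returns[i] += reward(x)

    # MPPI-style exponential weighting of trajectories by return.
    w = np.exp((returns - returns.max()) / temperature)
    w /= w.sum()

    # One simple way to aggregate a hybrid first action:
    # weighted vote for the discrete part, weighted mean for the continuous part.
    p_k1 = w @ (ks[:, 0] == 1)
    k = int(p_k1 > 0.5)
    z = float(w @ zs[:, 0])
    return k, z

k, z = mppi_parameterized(x0=0.0)
```

Starting at x = 0 with the goal at x = 3, the weighted vote favors the rightward discrete action, and the continuous parameter is a return-weighted average of the sampled step sizes. A key design question in this setting, which the sketch glosses over, is exactly how to aggregate discrete and continuous action components jointly rather than independently.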