Almost Optimal Model-Free Reinforcement Learning via Reference-Advantage Decomposition (Yuan Zhou)

Abstract

We study the reinforcement learning problem in the setting of finite-horizon episodic Markov Decision Processes (MDPs) with S states, A actions, and episode length H. We propose a model-free algorithm, UCB-Advantage, and prove that it achieves Õ(√(H²SAT)) regret, where T = KH and K is the number of episodes played. Our regret bound improves upon the results of [Jin et al., 2018] and matches the best known model-based algorithms, as well as the information-theoretic lower bound, up to logarithmic factors. We also show that UCB-Advantage achieves low local switching cost and applies to concurrent reinforcement learning, improving upon the recent results of [Bai et al., 2019].
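To make the setting concrete, the following is a minimal sketch of optimistic model-free Q-learning in a tabular episodic MDP with S states, A actions, and horizon H, in the spirit of the [Jin et al., 2018] baseline that UCB-Advantage improves upon. It is not the paper's UCB-Advantage algorithm (which adds a reference-advantage decomposition); the toy environment, bonus constant, and initial state are all made up for illustration.

```python
import numpy as np

# Illustrative sketch (NOT the paper's UCB-Advantage): tabular Q-learning
# with an optimism bonus in the episodic-MDP setting from the abstract
# (S states, A actions, horizon H, K episodes). The tiny random MDP and
# the bonus constant c are hypothetical, chosen only for the demo.

rng = np.random.default_rng(0)
S, A, H, K = 3, 2, 4, 200

# Random tabular MDP: P[h, s, a] is a distribution over next states,
# R[h, s, a] is the reward for taking action a in state s at step h.
P = rng.dirichlet(np.ones(S), size=(H, S, A))
R = rng.random((H, S, A))

Q = np.full((H, S, A), float(H))  # optimistic initialization at max return H
N = np.zeros((H, S, A))           # visit counts
c = 1.0                           # bonus scale (hypothetical constant)

for k in range(K):
    s = 0                         # fixed initial state (assumption)
    for h in range(H):
        a = int(np.argmax(Q[h, s]))                 # greedy w.r.t. optimistic Q
        r = R[h, s, a]
        s_next = int(rng.choice(S, p=P[h, s, a]))
        N[h, s, a] += 1
        t = N[h, s, a]
        alpha = (H + 1) / (H + t)                   # learning rate from Jin et al., 2018
        bonus = c * np.sqrt(H**3 * np.log(S * A * H * K) / t)
        v_next = 0.0 if h == H - 1 else min(float(H), float(Q[h + 1, s_next].max()))
        Q[h, s, a] = (1 - alpha) * Q[h, s, a] + alpha * (r + v_next + bonus)
        s = s_next
```

The optimism bonus shrinks as a state-action pair is visited more often, which drives the exploration-exploitation trade-off behind the regret analysis discussed in the talk.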

Before diving into the details of the research problem, the talk will begin with an introductory part covering some elementary results on online active learning and reinforcement learning, so that audience members with less background can quickly get familiar with the topic.

Time

2020-08-03   09:00 ~ 11:00   

Speaker

Yuan Zhou, University of Illinois at Urbana-Champaign

Room

Zoom ID: 61513425198; PW: 123456