Deep Exploration via Randomized Value Functions
Ian Osband, Benjamin Van Roy, Daniel J. Russo, Zheng Wen.
Year: 2019, Volume: 20, Issue: 124, Pages: 1–62
Abstract
We study the use of randomized value functions to guide deep exploration in reinforcement learning. This offers an elegant means for synthesizing statistically and computationally efficient exploration with common practical approaches to value function learning. We present several reinforcement learning algorithms that leverage randomized value functions and demonstrate their efficacy through computational studies. We also prove a regret bound that establishes statistical efficiency with a tabular representation.
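To make the core idea concrete, the sketch below shows a minimal tabular, finite-horizon variant of exploration via randomized value functions: each episode, a random Q-function is sampled by backward induction with Gaussian-perturbed regression estimates, and the agent then acts greedily with respect to that sample. This is an illustrative sketch in the spirit of the paper's randomized least-squares value iteration, not the authors' exact algorithm; the `buffer` layout and the `sigma` and `prior_var` hyperparameters are assumptions made for this example.

```python
import numpy as np

def sample_randomized_q(buffer, n_states, n_actions, horizon,
                        sigma=1.0, prior_var=1.0, rng=np.random):
    """Sample one randomized Q-function by backward induction.

    buffer maps (step, state, action) -> list of observed (reward, next_state)
    transitions. Randomness in the sampled Q-function drives deep exploration:
    acting greedily with respect to a different sample each episode.
    """
    Q = np.zeros((horizon + 1, n_states, n_actions))  # Q at step `horizon` stays 0
    for h in reversed(range(horizon)):
        for s in range(n_states):
            for a in range(n_actions):
                data = buffer.get((h, s, a), [])
                n = len(data)
                if n == 0:
                    # No data yet: draw from a zero-mean Gaussian prior.
                    Q[h, s, a] = rng.randn() * np.sqrt(prior_var)
                    continue
                # Bootstrapped regression targets using the sampled next-step values.
                targets = [r + Q[h + 1, s_next].max() for r, s_next in data]
                # Gaussian posterior with known noise sigma^2 and prior variance prior_var.
                post_var = 1.0 / (n / sigma**2 + 1.0 / prior_var)
                post_mean = post_var * (sum(targets) / sigma**2)
                Q[h, s, a] = post_mean + rng.randn() * np.sqrt(post_var)
    return Q

def greedy_action(Q, h, s):
    """Act greedily with respect to the sampled Q-function at step h, state s."""
    return int(np.argmax(Q[h, s]))
```

In use, the agent would call `sample_randomized_q` once at the start of each episode, follow `greedy_action` throughout the episode, and append the observed transitions to `buffer`; the shrinking posterior variance at well-visited state-action pairs is what concentrates exploration on poorly understood parts of the environment.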