Convex Q-Learning: Theory and Applications

Date:

Description: Finding optimal control policies is one of the most important tasks in reinforcement learning. It is well known that the extension of Watkins’ algorithm to general function approximation settings is challenging: does the projected Bellman equation have a solution? If so, is the solution useful in the sense of generating a good policy? And, if the preceding questions are answered in the affirmative, is the algorithm consistent? These questions are unanswered even in the special case of Q-function approximations that are linear in the parameter. The challenge seems paradoxical, given the long history of convex analytic approaches to dynamic programming.

We propose a new set of reinforcement learning algorithms, called convex Q-learning, based on a convex relaxation of the Bellman equation. Convergence is established under general conditions, including a linear function approximation for the Q-function. A batch implementation appears similar to the famed DQN algorithm. It is shown that in fact the algorithms are very different: while convex Q-learning solves a convex program that approximates the Bellman equation, theory for DQN is no stronger than for Watkins' algorithm with function approximation. The proposed convex Q-learning is successful in solving various examples from OpenAI Gym as well as cases where standard Q-learning diverges, such as the LQR problem.
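To make the relaxation concrete, below is a minimal sketch, assuming a small randomly generated finite MDP with tabular (one-hot) features and a cost-minimization convention, of the convex program obtained by relaxing the Bellman equation to an inequality. It uses cvxpy as an off-the-shelf solver and is only an illustration of the underlying convex program, not the sample-based convex Q-learning algorithms described in the talk. The structural point is that the minimum over actions is concave, so the relaxed Bellman constraint defines a convex (in fact polyhedral) feasible set.

```python
# Minimal sketch (illustrative only): the convex relaxation of the Bellman
# equation for a small finite MDP with tabular Q-values.  Requires numpy and cvxpy.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 5, 2, 0.9

# Illustrative MDP data: transition kernel P[a, x, :] over next states, cost c[x, a].
P = rng.random((n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)
c = rng.random((n_states, n_actions))

# Decision variable: a table of Q-values (equivalently, a linear
# parameterization with one-hot features).
Q = cp.Variable((n_states, n_actions))

# Relaxed Bellman equation as inequality constraints:
#   Q(x, a) <= c(x, a) + gamma * E[ min_a' Q(X', a') | x, a ].
# The right-hand side is concave in Q (an expectation of a minimum of linear
# functions), so each constraint is convex.
qmin = [cp.min(Q[xp, :]) for xp in range(n_states)]   # min_a' Q(x', a')
constraints = []
for x in range(n_states):
    for a in range(n_actions):
        expected_next = sum(P[a, x, xp] * qmin[xp] for xp in range(n_states))
        constraints.append(Q[x, a] <= c[x, a] + gamma * expected_next)

# Any strictly positive weighting of the Q-values recovers the fixed point Q*
# at the maximizer (classic LP approach to dynamic programming).
prob = cp.Problem(cp.Maximize(cp.sum(Q)), constraints)
prob.solve()

print("Q-function:\n", np.round(Q.value, 3))
print("Greedy (cost-minimizing) policy:", Q.value.argmin(axis=1))
```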

Link: AM Seminar at UCSC