
ICML’2017 RL late breaking results event



The workshop will feature presentations of late-breaking reinforcement learning results from all areas of the field, including deep reinforcement learning, exploration, transfer learning, the use of auxiliary tasks, and theoretical results, as well as applications of reinforcement learning to various domains. A panel discussion on the most interesting and challenging current research directions will conclude the workshop.

Draft list of presenters

Balaraman Ravindran (IIT Madras), Ruishan Liu (Stanford), Riashat Islam (McGill), Lucas Lehnert (Brown), Alessandro Lazaric (INRIA), Katja Hofmann (Microsoft), Tudor Berariu and Andrei Nica (Bucharest), Harm van Seijen (Maluuba-Microsoft), Ahmed Touati (Univ. Montreal)

Schedule

8:45am Opening remarks (Doina Precup)

9:00am Balaraman Ravindran (IIT Madras) - Some experiments with learning hyperparameters, transfer, and multi-task learning

9:35am Chelsea Finn (Berkeley) - Deep predictive learning for acquiring vision-based skills

10:00am Coffee Break

10:30am Harm van Seijen (Maluuba-Microsoft) - Achieving Above-Human Performance on Ms. Pac-Man by Reward Decomposition

11:00am Alessandro Lazaric (INRIA) - Exploration methods for options

11:30am Ruishan Liu (Stanford) - The effects of memory replay in reinforcement learning

12:00pm Lunch Break

1:30pm Mat Monfort (MIT & Microsoft) - Asynchronous data aggregation for training end-to-end visual control networks

1:50pm Riashat Islam (McGill) - On the reproducibility of policy gradient experiments

2:10pm Tudor Berariu (Universitatea Politehnica Bucharest and Bitdefender, Romania) - Reinforcement learning for the MALMO Minecraft challenge

2:30pm Ahmed Touati (Univ. Montreal) - On the convergence of tree backup and other reinforcement learning algorithms

2:50pm Lucas Lehnert (Brown Univ.) - Transfer learning using successor state features

3:10pm Coffee Break

3:30pm Panel discussion: B. Ravindran (IIT Madras), Chelsea Finn (Berkeley), Alessandro Lazaric (INRIA), Katja Hofmann (Microsoft Research), Marc Bellemare (Google)

If you have new reinforcement learning results that you would like to share with us, please email dprecup@cs.mcgill.ca.