Torchy Baselines

PyTorch version of Stable Baselines, a set of improved implementations of reinforcement learning algorithms.

Implemented Algorithms

  • A2C
  • CEM-RL (with TD3)
  • PPO
  • SAC
  • TD3
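The project does not yet document a usage API, but algorithms like these are typically benchmarked with a mean-episode-return evaluation loop (the repository ships an evaluate script). The sketch below illustrates that pattern only; the environment and function names are hypothetical and not part of torchy_baselines:

```python
# Hypothetical illustration of an episode-return evaluation loop.
# None of these names come from torchy_baselines itself.

class ConstantEnv:
    """Toy environment: reward 1.0 per step, episode ends after 5 steps."""
    def reset(self):
        self.t = 0
        return 0.0  # dummy observation

    def step(self, action):
        self.t += 1
        done = self.t >= 5
        return 0.0, 1.0, done, {}  # obs, reward, done, info

def evaluate_policy(policy, env, n_episodes=10):
    """Return the mean undiscounted episode return of `policy` on `env`."""
    returns = []
    for _ in range(n_episodes):
        obs, done, total = env.reset(), False, 0.0
        while not done:
            obs, reward, done, _ = env.step(policy(obs))
            total += reward
        returns.append(total)
    return sum(returns) / len(returns)

mean_return = evaluate_policy(lambda obs: 0, ConstantEnv())
```

With this toy environment the mean return is simply the episode length times the per-step reward; a real evaluation would swap in a Gym environment and a trained model's predict function.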

Roadmap

TODO:

  • save/load
  • better predict
  • complete logger
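Since save/load is still on the TODO list, its eventual shape is undecided; a common pattern (used by Stable Baselines) is to serialize a parameter dict and restore it into a fresh policy. The sketch below is a minimal, hypothetical illustration of that pattern, not the library's implementation — real code would serialize torch state dicts:

```python
# Hypothetical sketch of dict-based save/load; not torchy_baselines code.
import io
import pickle

class TinyPolicy:
    """Stand-in for an algorithm's policy; real code would hold torch tensors."""
    def __init__(self, weights=None):
        self.weights = list(weights) if weights is not None else [0.0, 0.0]

    def get_parameters(self):
        return {"weights": list(self.weights)}

    def set_parameters(self, params):
        self.weights = list(params["weights"])

def save(policy, buf):
    # Persist only the parameter dict, not the whole object.
    pickle.dump(policy.get_parameters(), buf)

def load(buf, policy_cls=TinyPolicy):
    # Rebuild a fresh policy and restore its parameters.
    policy = policy_cls()
    policy.set_parameters(pickle.load(buf))
    return policy

buf = io.BytesIO()
save(TinyPolicy([1.5, -2.0]), buf)
buf.seek(0)
restored = load(buf)
```

Saving the parameter dict rather than pickling the policy object keeps saved files loadable across code refactors, which is why get_parameters / set_parameters also appear on the roadmap.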

Later:

  • get_parameters / set_parameters
  • CNN policies + normalization
  • tensorboard support
  • DQN
  • TRPO
  • ACER
  • DDPG
  • HER -> use the stable-baselines implementation instead, since it does not depend on tf?