mirror of
https://github.com/saymrwulf/stable-baselines3.git
synced 2026-05-15 21:00:53 +00:00
Include SuperSuit in projects (#359)
* include supersuit
* longer title underline
* Update changelog.rst
This commit is contained in:
parent
1e2eae6472
commit
e1ee87fef7
2 changed files with 14 additions and 0 deletions
@@ -25,6 +25,7 @@ Others:
Documentation:
^^^^^^^^^^^^^^
- Added gym pybullet drones project (@JacopoPan)
- Added link to SuperSuit in projects (@justinkterry)

Release 1.0 (2021-03-15)
@@ -61,3 +61,16 @@ PyBullet Gym environments for single and multi-agent reinforcement learning of q
| Author: Jacopo Panerati
| Github: https://github.com/utiasDSL/gym-pybullet-drones/
| Paper: https://arxiv.org/abs/2103.02142

SuperSuit
---------

SuperSuit contains easy-to-use wrappers for Gym (and multi-agent PettingZoo) environments that perform all forms of common preprocessing (frame stacking, converting graphical observations to greyscale, max-and-skip for Atari, etc.). It also notably includes:

- Wrappers that apply lambda functions to observations, actions, or rewards with a single line of code.
- All wrappers can be used natively on vector environments; wrappers exist to convert Gym environments to vectorized environments and to concatenate multiple vector environments together.
- A wrapper is included that allows regular single-agent RL libraries (e.g. Stable Baselines) to learn simple multi-agent PettingZoo environments, as explained in this tutorial:

| Author: Justin Terry
| GitHub: https://github.com/PettingZoo-Team/SuperSuit
| Tutorial on multi-agent support in stable baselines: https://towardsdatascience.com/multi-agent-deep-reinforcement-learning-in-15-lines-of-code-using-pettingzoo-e0b963c0820b
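
The "lambda wrapper" idea described above can be illustrated with a minimal, self-contained sketch. This is not SuperSuit's actual API; the names (`LambdaObservationWrapper`, `DummyEnv`) are hypothetical and stand in for a real Gym environment, purely to show how a one-line function can be applied to every observation.

```python
# Sketch of the observation-lambda-wrapper idea. NOT SuperSuit's real API;
# DummyEnv and LambdaObservationWrapper are illustrative names only.

class DummyEnv:
    """Toy environment returning raw pixel-like observations."""

    def reset(self):
        return [255, 128, 0]  # pretend RGB pixel values

    def step(self, action):
        obs, reward, done = [10, 20, 30], 1.0, False
        return obs, reward, done


class LambdaObservationWrapper:
    """Apply a user-supplied function to every observation the env emits."""

    def __init__(self, env, fn):
        self.env = env
        self.fn = fn

    def reset(self):
        return self.fn(self.env.reset())

    def step(self, action):
        obs, reward, done = self.env.step(action)
        return self.fn(obs), reward, done


# "Single line of code" usage: normalize pixel values into [0, 1].
env = LambdaObservationWrapper(DummyEnv(), lambda obs: [x / 255.0 for x in obs])
print(env.reset())  # scaled observation
```

The same pattern generalizes to action and reward lambdas: the wrapper intercepts one channel of the env interface and applies the function, leaving everything else untouched.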