.. _ppo2:

.. automodule:: stable_baselines3.ppo

PPO
===

The `Proximal Policy Optimization <https://arxiv.org/abs/1707.06347>`_ algorithm combines ideas from A2C (having multiple workers)
and TRPO (it uses a trust region to improve the actor).

The main idea is that after an update, the new policy should not be too far from the old policy.
For that, PPO uses clipping to avoid too large of an update.
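
The clipped objective fits in a few lines. The following is a minimal sketch
(illustrative only, not SB3's internal code); ``ratio`` denotes the probability
ratio between the new and the old policy, and ``advantages`` come from an
estimator such as GAE:

.. code-block:: python

  import torch

  def clipped_surrogate_loss(ratio: torch.Tensor, advantages: torch.Tensor, clip_range: float = 0.2) -> torch.Tensor:
      # Unclipped objective: probability ratio times the advantage
      policy_loss_1 = advantages * ratio
      # Clipped objective: the ratio is constrained to [1 - clip_range, 1 + clip_range]
      policy_loss_2 = advantages * torch.clamp(ratio, 1.0 - clip_range, 1.0 + clip_range)
      # Take the pessimistic (minimum) of the two and negate it, since we minimize a loss
      return -torch.min(policy_loss_1, policy_loss_2).mean()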

.. note::

  PPO contains several modifications from the original algorithm not documented
  by OpenAI: advantages are normalized and the value function can also be clipped.
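
Sketches of those two modifications, assuming they are applied per mini-batch
(illustrative only, not SB3's exact code; ``clip_range_vf`` mirrors the name of
the corresponding PPO constructor argument):

.. code-block:: python

  import torch

  def normalize_advantages(advantages: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
      # Standardize the advantages to zero mean and unit variance
      return (advantages - advantages.mean()) / (advantages.std() + eps)

  def clipped_values(values: torch.Tensor, old_values: torch.Tensor, clip_range_vf: float) -> torch.Tensor:
      # Keep the new value prediction within clip_range_vf of the old one,
      # analogous to the policy clipping above
      return old_values + torch.clamp(values - old_values, -clip_range_vf, clip_range_vf)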

Notes
-----

- Original paper: https://arxiv.org/abs/1707.06347
- Clear explanation of PPO on Arxiv Insights channel: https://www.youtube.com/watch?v=5P7I-xPq8u8
- OpenAI blog post: https://blog.openai.com/openai-baselines-ppo/
- Spinning Up guide: https://spinningup.openai.com/en/latest/algorithms/ppo.html

Can I use?
----------

- Recurrent policies: ❌
- Multi processing: ✔️
- Gym spaces:

============= ====== ===========
Space         Action Observation
============= ====== ===========
Discrete      ✔️      ✔️
Box           ✔️      ✔️
MultiDiscrete ✔️      ✔️
MultiBinary   ✔️      ✔️
============= ====== ===========

Example
-------

Train a PPO agent on ``CartPole-v1`` using 4 environments.

.. code-block:: python

  from stable_baselines3 import PPO
  from stable_baselines3.ppo import MlpPolicy
  from stable_baselines3.common.env_util import make_vec_env

  # Parallel environments
  env = make_vec_env('CartPole-v1', n_envs=4)

  model = PPO(MlpPolicy, env, verbose=1)
  model.learn(total_timesteps=25000)
  model.save("ppo_cartpole")

  del model  # remove to demonstrate saving and loading

  model = PPO.load("ppo_cartpole")

  # Enjoy the trained agent
  obs = env.reset()
  while True:
      action, _states = model.predict(obs)
      obs, rewards, dones, info = env.step(action)
      env.render()
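
To get a quantitative measure of performance instead of rendering, SB3 ships an
evaluation helper; a minimal sketch (the episode count of 10 is an arbitrary choice):

.. code-block:: python

  from stable_baselines3.common.evaluation import evaluate_policy

  # Mean and standard deviation of the episodic return over 10 evaluation episodes
  mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
  print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")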

Parameters
----------

.. autoclass:: PPO
  :members:
  :inherited-members:
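
The most commonly tuned arguments, shown here with their default values
(a sketch for orientation, not a tuned configuration):

.. code-block:: python

  from stable_baselines3 import PPO
  from stable_baselines3.common.env_util import make_vec_env

  env = make_vec_env('CartPole-v1', n_envs=4)

  model = PPO(
      'MlpPolicy',
      env,
      learning_rate=3e-4,  # optimizer step size
      n_steps=2048,        # rollout length per environment before each update
      batch_size=64,       # mini-batch size for the surrogate optimization
      n_epochs=10,         # number of gradient passes over each rollout
      gamma=0.99,          # discount factor
      gae_lambda=0.95,     # GAE smoothing parameter
      clip_range=0.2,      # policy clipping parameter
      verbose=1,
  )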

PPO Policies
------------

.. autoclass:: MlpPolicy
  :members:
  :inherited-members:

.. autoclass:: stable_baselines3.common.policies.ActorCriticPolicy
  :members:
  :noindex:

.. autoclass:: CnnPolicy
  :members:

.. autoclass:: stable_baselines3.common.policies.ActorCriticCnnPolicy
  :members:
  :noindex:
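
The network architecture of these policies can be customized without subclassing,
by passing ``policy_kwargs`` to the model constructor; a minimal sketch (the layer
sizes are illustrative):

.. code-block:: python

  from stable_baselines3 import PPO
  from stable_baselines3.common.env_util import make_vec_env

  env = make_vec_env('CartPole-v1', n_envs=4)

  # Two hidden layers of 64 units for the policy and value networks
  model = PPO('MlpPolicy', env, policy_kwargs=dict(net_arch=[64, 64]), verbose=1)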