stable-baselines3/tests/test_her.py
Jaden Travnik 75b6f3b3b0
Dictionary Observations (#243)
2021-05-11 12:29:30 +02:00
import os
import pathlib
import warnings
from copy import deepcopy

import gym
import numpy as np
import pytest
import torch as th

from stable_baselines3 import DDPG, DQN, SAC, TD3, HerReplayBuffer
from stable_baselines3.common.envs import BitFlippingEnv
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.monitor import Monitor
from stable_baselines3.common.noise import NormalActionNoise
from stable_baselines3.common.vec_env import DummyVecEnv
from stable_baselines3.her.goal_selection_strategy import GoalSelectionStrategy
from stable_baselines3.her.her_replay_buffer import get_time_limit

def test_import_error():
    with pytest.raises(ImportError) as excinfo:
        from stable_baselines3 import HER

        HER("MlpPolicy")
    assert "documentation" in str(excinfo.value)

@pytest.mark.parametrize("model_class", [SAC, TD3, DDPG, DQN])
@pytest.mark.parametrize("online_sampling", [True, False])
@pytest.mark.parametrize("image_obs_space", [True, False])
def test_her(model_class, online_sampling, image_obs_space):
    """
    Test Hindsight Experience Replay.
    """
    n_bits = 4
    env = BitFlippingEnv(
        n_bits=n_bits,
        continuous=not (model_class == DQN),
        image_obs_space=image_obs_space,
    )

    model = model_class(
        "MultiInputPolicy",
        env,
        replay_buffer_class=HerReplayBuffer,
        replay_buffer_kwargs=dict(
            n_sampled_goal=2,
            goal_selection_strategy="future",
            online_sampling=online_sampling,
            max_episode_length=n_bits,
        ),
        train_freq=4,
        gradient_steps=1,
        policy_kwargs=dict(net_arch=[64]),
        learning_starts=100,
        buffer_size=int(2e4),
    )

    model.learn(total_timesteps=150)

    evaluate_policy(model, Monitor(env))

@pytest.mark.parametrize(
    "goal_selection_strategy",
    [
        "final",
        "episode",
        "future",
        GoalSelectionStrategy.FINAL,
        GoalSelectionStrategy.EPISODE,
        GoalSelectionStrategy.FUTURE,
    ],
)
@pytest.mark.parametrize("online_sampling", [True, False])
def test_goal_selection_strategy(goal_selection_strategy, online_sampling):
    """
    Test different goal selection strategies.
    """
    env = BitFlippingEnv(continuous=True)

    normal_action_noise = NormalActionNoise(np.zeros(1), 0.1 * np.ones(1))

    model = SAC(
        "MultiInputPolicy",
        env,
        replay_buffer_class=HerReplayBuffer,
        replay_buffer_kwargs=dict(
            goal_selection_strategy=goal_selection_strategy,
            online_sampling=online_sampling,
            max_episode_length=10,
            n_sampled_goal=2,
        ),
        train_freq=4,
        gradient_steps=1,
        policy_kwargs=dict(net_arch=[64]),
        learning_starts=100,
        buffer_size=int(1e5),
        action_noise=normal_action_noise,
    )
    assert model.action_noise is not None
    model.learn(total_timesteps=150)

@pytest.mark.parametrize("model_class", [SAC, TD3, DDPG, DQN])
@pytest.mark.parametrize("use_sde", [False, True])
@pytest.mark.parametrize("online_sampling", [False, True])
def test_save_load(tmp_path, model_class, use_sde, online_sampling):
    """
    Test that 'save' and 'load' save and load the model correctly.
    """
    if use_sde and model_class != SAC:
        pytest.skip("Only SAC has gSDE support")

    n_bits = 4
    env = BitFlippingEnv(n_bits=n_bits, continuous=not (model_class == DQN))

    kwargs = dict(use_sde=True) if use_sde else {}

    # Create the model
    model = model_class(
        "MultiInputPolicy",
        env,
        replay_buffer_class=HerReplayBuffer,
        replay_buffer_kwargs=dict(
            n_sampled_goal=2,
            goal_selection_strategy="future",
            online_sampling=online_sampling,
            max_episode_length=n_bits,
        ),
        verbose=0,
        tau=0.05,
        batch_size=128,
        learning_rate=0.001,
        policy_kwargs=dict(net_arch=[64]),
        buffer_size=int(1e5),
        gamma=0.98,
        gradient_steps=1,
        train_freq=4,
        learning_starts=100,
        **kwargs,
    )

    model.learn(total_timesteps=150)

    obs = env.reset()

    # Collect some observations to compare predictions before/after loading
    observations = {key: [] for key in obs.keys()}
    for _ in range(10):
        obs = env.step(env.action_space.sample())[0]
        for key in obs.keys():
            observations[key].append(obs[key])
    observations = {key: np.array(obs) for key, obs in observations.items()}

    # Get dictionary of current parameters
    params = deepcopy(model.policy.state_dict())

    # Modify all parameters to be random values
    random_params = dict((param_name, th.rand_like(param)) for param_name, param in params.items())

    # Update model parameters with the new random values
    model.policy.load_state_dict(random_params)

    new_params = model.policy.state_dict()
    # Check that all params are different now
    for k in params:
        assert not th.allclose(params[k], new_params[k]), "Parameters did not change as expected."

    params = new_params

    # Get selected actions
    selected_actions, _ = model.predict(observations, deterministic=True)

    model.save(tmp_path / "test_save.zip")
    del model

    # Load with custom objects
    custom_objects = dict(learning_rate=2e-5, dummy=1.0)
    model_ = model_class.load(str(tmp_path / "test_save.zip"), env=env, custom_objects=custom_objects, verbose=2)
    assert model_.verbose == 2
    # Check that the custom object was taken into account
    assert model_.learning_rate == custom_objects["learning_rate"]
    # Check that only parameters that already exist on the model are replaced
    assert not hasattr(model_, "dummy")

    model = model_class.load(str(tmp_path / "test_save.zip"), env=env)

    # Check if params are still the same after loading
    new_params = model.policy.state_dict()

    # Check that all params are the same as before the save/load procedure
    for key in params:
        assert th.allclose(params[key], new_params[key]), "Model parameters not the same after save and load."

    # Check that the model still selects the same actions
    new_selected_actions, _ = model.predict(observations, deterministic=True)
    assert np.allclose(selected_actions, new_selected_actions, 1e-4)

    # Check that learn still works
    model.learn(total_timesteps=150)

    # Test that overriding parameters at load time works
    model = model_class.load(str(tmp_path / "test_save.zip"), env=env, verbose=3, learning_rate=2.0)
    assert model.learning_rate == 2.0
    assert model.verbose == 3

    # Clear file from os
    os.remove(tmp_path / "test_save.zip")

@pytest.mark.parametrize("online_sampling", [False, True])
@pytest.mark.parametrize("truncate_last_trajectory", [False, True])
def test_save_load_replay_buffer(tmp_path, recwarn, online_sampling, truncate_last_trajectory):
    """
    Test that 'save_replay_buffer' and 'load_replay_buffer' work correctly.
    """
    # Remove gym warnings
    warnings.filterwarnings(action="ignore", category=DeprecationWarning)
    warnings.filterwarnings(action="ignore", category=UserWarning, module="gym")

    path = pathlib.Path(tmp_path / "replay_buffer.pkl")
    path.parent.mkdir(exist_ok=True, parents=True)  # to not raise a warning
    env = BitFlippingEnv(n_bits=4, continuous=True)
    model = SAC(
        "MultiInputPolicy",
        env,
        replay_buffer_class=HerReplayBuffer,
        replay_buffer_kwargs=dict(
            n_sampled_goal=2,
            goal_selection_strategy="future",
            online_sampling=online_sampling,
            max_episode_length=4,
        ),
        gradient_steps=1,
        train_freq=4,
        buffer_size=int(2e4),
        policy_kwargs=dict(net_arch=[64]),
        seed=1,
    )
    model.learn(200)
    if online_sampling:
        old_replay_buffer = deepcopy(model.replay_buffer)
    else:
        old_replay_buffer = deepcopy(model.replay_buffer.replay_buffer)
    model.save_replay_buffer(path)
    del model.replay_buffer

    with pytest.raises(AttributeError):
        model.replay_buffer

    # Check that there is no warning
    assert len(recwarn) == 0

    model.load_replay_buffer(path, truncate_last_traj=truncate_last_trajectory)

    if truncate_last_trajectory:
        assert len(recwarn) == 1
        warning = recwarn.pop(UserWarning)
        assert "The last trajectory in the replay buffer will be truncated" in str(warning.message)
    else:
        assert len(recwarn) == 0

    if online_sampling:
        n_episodes_stored = model.replay_buffer.n_episodes_stored
        assert np.allclose(
            old_replay_buffer._buffer["observation"][:n_episodes_stored],
            model.replay_buffer._buffer["observation"][:n_episodes_stored],
        )
        assert np.allclose(
            old_replay_buffer._buffer["next_obs"][:n_episodes_stored],
            model.replay_buffer._buffer["next_obs"][:n_episodes_stored],
        )
        assert np.allclose(
            old_replay_buffer._buffer["action"][:n_episodes_stored],
            model.replay_buffer._buffer["action"][:n_episodes_stored],
        )
        assert np.allclose(
            old_replay_buffer._buffer["reward"][:n_episodes_stored],
            model.replay_buffer._buffer["reward"][:n_episodes_stored],
        )
        # We might change the last done of the last trajectory so we don't compare it
        assert np.allclose(
            old_replay_buffer._buffer["done"][: n_episodes_stored - 1],
            model.replay_buffer._buffer["done"][: n_episodes_stored - 1],
        )
    else:
        replay_buffer = model.replay_buffer.replay_buffer
        assert np.allclose(old_replay_buffer.observations["observation"], replay_buffer.observations["observation"])
        assert np.allclose(old_replay_buffer.observations["desired_goal"], replay_buffer.observations["desired_goal"])
        assert np.allclose(old_replay_buffer.actions, replay_buffer.actions)
        assert np.allclose(old_replay_buffer.rewards, replay_buffer.rewards)
        assert np.allclose(old_replay_buffer.dones, replay_buffer.dones)

    # Test if continuing training works properly
    reset_num_timesteps = truncate_last_trajectory
    model.learn(200, reset_num_timesteps=reset_num_timesteps)

def test_full_replay_buffer():
    """
    Test that HER works correctly with a full replay buffer when using online sampling.
    It should not sample the current episode, which is not finished.
    """
    n_bits = 4
    env = BitFlippingEnv(n_bits=n_bits, continuous=True)

    # Use a small buffer size to fill the buffer
    model = SAC(
        "MultiInputPolicy",
        env,
        replay_buffer_class=HerReplayBuffer,
        replay_buffer_kwargs=dict(
            n_sampled_goal=2,
            goal_selection_strategy="future",
            online_sampling=True,
            max_episode_length=n_bits,
        ),
        gradient_steps=1,
        train_freq=4,
        policy_kwargs=dict(net_arch=[64]),
        learning_starts=1,
        buffer_size=20,
        verbose=1,
        seed=757,
    )

    model.learn(total_timesteps=100)

def test_get_max_episode_length():
    dict_env = DummyVecEnv([lambda: BitFlippingEnv()])

    # Cannot infer max episode length
    with pytest.raises(ValueError):
        get_time_limit(dict_env, current_max_episode_length=None)

    default_length = 10
    assert get_time_limit(dict_env, current_max_episode_length=default_length) == default_length

    env = gym.make("CartPole-v1")
    vec_env = DummyVecEnv([lambda: env])

    assert get_time_limit(vec_env, current_max_episode_length=None) == 500
    # Overwrite max_episode_steps
    assert get_time_limit(vec_env, current_max_episode_length=default_length) == default_length

    # Set max_episode_steps to None
    env.spec.max_episode_steps = None
    vec_env = DummyVecEnv([lambda: env])
    with pytest.raises(ValueError):
        get_time_limit(vec_env, current_max_episode_length=None)

    # Initialize HER and specify max_episode_length, should not raise an issue
    DQN("MultiInputPolicy", dict_env, replay_buffer_class=HerReplayBuffer, replay_buffer_kwargs=dict(max_episode_length=5))

    with pytest.raises(ValueError):
        DQN("MultiInputPolicy", dict_env, replay_buffer_class=HerReplayBuffer)

    # Wrapped in a TimeLimit, should be fine
    # Note: it requires env.spec to be defined
    env = DummyVecEnv([lambda: gym.wrappers.TimeLimit(BitFlippingEnv(), 10)])
    DQN("MultiInputPolicy", env, replay_buffer_class=HerReplayBuffer, replay_buffer_kwargs=dict(max_episode_length=5))

@pytest.mark.parametrize("online_sampling", [False, True])
@pytest.mark.parametrize("n_bits", [10])
def test_performance_her(online_sampling, n_bits):
    """
    Check that DQN + HER can solve BitFlippingEnv.
    It should not work when n_sampled_goal=0 (DQN alone).
    """
    env = BitFlippingEnv(n_bits=n_bits, continuous=False)

    model = DQN(
        "MultiInputPolicy",
        env,
        replay_buffer_class=HerReplayBuffer,
        replay_buffer_kwargs=dict(
            n_sampled_goal=5,
            goal_selection_strategy="future",
            online_sampling=online_sampling,
            max_episode_length=n_bits,
        ),
        verbose=1,
        learning_rate=5e-4,
        train_freq=1,
        learning_starts=100,
        exploration_final_eps=0.02,
        target_update_interval=500,
        seed=0,
        batch_size=32,
        buffer_size=int(1e5),
    )

    model.learn(total_timesteps=5000, log_interval=50)

    # 90% training success
    assert np.mean(model.ep_success_buffer) > 0.90