* Fix VecNormalize type hints
* Fix VecEnv utils type annotations
* Apply suggestions from code review
Co-authored-by: M. Ernestus <maximilian@ernestus.de>
* Remove PyType
---------
Co-authored-by: M. Ernestus <maximilian@ernestus.de>
* fix: Follow PEP8 guidelines and use truth-value testing with `not` rather than comparing with `is False`.
https://docs.python.org/2/library/stdtypes.html#truth-value-testing
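A minimal illustration of the style rule these commits apply (the `result` variable and its values are hypothetical):

```python
# PEP8: rely on truth-value testing instead of identity comparison with False.
result = []  # e.g. a helper that returns an empty list when nothing matched

if not result:  # preferred: True for [], {}, "", 0, None and False alike
    status = "empty"

# comparing with `is False` only matches the singleton False,
# so it silently misses other falsy values such as the empty list above
is_false = result is False
```

This is why `not x` is the safe spelling: `x is False` only catches the boolean singleton.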
* chore: Update changelog inline with intent of changes in PR #1707
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
* fix: Change `is False` to `not` as per PEP8
* chore: Remove superfluous comment about `is False`
* test: One On- and one Off-Policy algorithm (A2C and SAC respectively), with settings to speed up testing
* Update changelog
* chore: Remove EvalCallback as it's not actually required
* Update changelog.rst
* Rm duplicated "others" section in changelog.rst
---------
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
Co-authored-by: Antonin Raffin <antonin.raffin@ensta.org>
* Fix failing set_env test
* Fix test failing due to deprecation of env.seed
* Adjust mean reward threshold in failing test
* Fix her test failing due to rng
* Change seed and revert reward threshold to 90
* Pin gym version
* Make VecEnv compatible with gym seeding change
* Revert change to VecEnv reset signature
* Change SubprocVecEnv seed cmd to call reset instead
* Fix type check
* Add backward compat
* Add `compat_gym_seed` helper
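A sketch of what a `compat_gym_seed` helper might look like (the logic and the dummy env classes are hypothetical, shown only to illustrate the old/new seeding APIs):

```python
import inspect


def compat_gym_seed(env, seed: int):
    # new-style gym envs take reset(seed=...), old-style envs expose env.seed()
    if "seed" in inspect.signature(env.reset).parameters:
        return env.reset(seed=seed)
    return env.seed(seed)


class NewStyleEnv:
    """Dummy env following the gym >= 0.26 seeding API."""

    def reset(self, seed=None):
        self.last_seed = seed
        return 0, {}


class OldStyleEnv:
    """Dummy env following the legacy gym seeding API."""

    def reset(self):
        return 0

    def seed(self, seed=None):
        self.last_seed = seed
        return [seed]
```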
* Add goal env checks in env_checker
* Add docs on HER requirements for envs
* Capture user warning in test with inverted box space
* Update ale-py version
* Fix randint
* Allow noop_max to be zero
* Update changelog
* Update docker image
* Update doc conda env and dockerfile
* Custom envs should not have any warnings
* Fix test for numpy >= 1.21
* Add check for vectorized compute reward
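The vectorized `compute_reward` requirement can be illustrated with a sketch like the following (the 0.05 distance threshold is an arbitrary example value):

```python
import numpy as np


def compute_reward(achieved_goal, desired_goal, info):
    # axis=-1 makes the same code work for a single goal and for a batch
    # of goals, which is what the env checker verifies
    distance = np.linalg.norm(
        np.asarray(achieved_goal) - np.asarray(desired_goal), axis=-1
    )
    return -(distance > 0.05).astype(np.float32)
```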
* Bump to gym 0.24
* Fix gym default step docstring
* Test downgrading gym
* Revert "Test downgrading gym"
This reverts commit 0072b77156c006ada8a1d6e26ce347ed85a83eeb.
* Fix protobuf error
* Fix in dependencies
* Fix protobuf dep
* Use newest version of cartpole
* Update gym
* Fix warning
* Loosen required scipy version
* Scipy no longer needed
* Try gym 0.25
* Silence warnings from gym
* Filter warnings during tests
* Update doc
* Update requirements
* Add gym 26 compat in vec env
* Fixes in envs and tests for gym 0.26+
* Enforce gym 0.26 api
* format
* Fix formatting
* Fix dependencies
* Fix syntax
* Cleanup doc and warnings
* Faster tests
* Higher budget for HER perf test (revert prev change)
* Fixes and update doc
* Fix doc build
* Fix breaking change
* Fixes for rendering
* Rename variables in monitor
* update render method for gym 0.26 API
backwards compatible (mode argument is allowed) while using the gym 0.26 API (render mode is determined at environment creation)
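The backward-compatible behavior described above can be sketched with a minimal hypothetical env (`CompatRenderEnv` is illustrative, not the actual SB3 wrapper):

```python
import warnings


class CompatRenderEnv:
    """Sketch: under the gym 0.26 API the render mode is fixed at creation,
    not passed to render()."""

    def __init__(self, render_mode: str = "rgb_array"):
        self.render_mode = render_mode

    def render(self, mode=None):
        # backward compatibility: still accept the legacy `mode` argument,
        # but warn when it disagrees with the mode chosen at creation
        if mode is not None and mode != self.render_mode:
            warnings.warn(
                f"render(mode={mode!r}) is ignored; mode was fixed to "
                f"{self.render_mode!r} at creation"
            )
        if self.render_mode == "rgb_array":
            return [[[0, 0, 0]]]  # placeholder 1x1 RGB frame
        return None
```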
* update tests and docs to new gym render API
* undo removal of render modes metadata check
* set rgb_array as default render mode for gym.make
* undo changes & raise warning if not 'rgb_array'
* Fix type check
* Remove recursion and fix type checking
* Remove hacks for protobuf and gym 0.24
* Fix type annotations
* reuse existing render_mode attribute
* return tiled images for 'human' render mode
* Allow to use opencv for human render, fix typos
* Add warning when using non-zero start with Discrete (fixes #1197)
* Fix type checking
* Bug fixes and handle more cases
* Throw proper warnings
* Update test
* Fix new metadata name
* Ignore numpy warnings
* Fixes in vec recorder
* Global ignore
* Filter local warning too
* Monkey patch not needed for gym 26
* Add doc of VecEnv vs Gym API
* Add render test
* Fix return type
* Update VecEnv vs Gym API doc
* Fix for custom render mode
* Fix return type
* Fix type checking
* check env in test_buffer
* skip render check
* check env in test_dict_env
* check env in test_gae
* check envs in remaining tests
* Update tests
* Add warning for Discrete action space with non-zero start (#1295)
* Fix atari annotation
* ignore get_action_meanings [attr-defined]
* Fix mypy issues
* Add patch for gym/gymnasium transition
* Switch to gymnasium
* Rely on signature instead of version
* More patches
* Type ignore because of https://github.com/Farama-Foundation/Gymnasium/pull/39
* Fix doc build
* Fix pytype errors
* Fix atari requirement
* Update env checker due to change in dtype for Discrete
* Fix type hint
* Convert spaces for saved models
* Ignore pytype
* Remove gitlab CI
* Disable pytype for convert space
* Fix undefined info
* Fix undefined info
* Upgrade shimmy
* Fix wrappers type annotation (need PR from Gymnasium)
* Fix gymnasium dependency
* Fix dependency declaration
* Cap pygame version for python 3.7
* Point to master branch (v0.28.0)
* Fix: use main not master branch
* Rename done to terminated
* Fix pygame dependency for python 3.7
* Rename gym to gymnasium
* Update Gymnasium
* Fix test
* Fix tests
* Forks don't have access to private variables
* Fix linter warnings
* Update read the doc env
* Fix env checker for GoalEnv
* Fix import
* Update env checker (more info) and fix dtype
* Use micromamba for Docker
* Update dependencies
* Clarify VecEnv doc
* Fix Gymnasium version
* Copy file only after mamba install
* [ci skip] Update docker doc
* Polish code
* Reformat
* Remove deprecated features
* Ignore warning
* Update doc
* Update examples and changelog
* Fix type annotation bundle (SAC, TD3, A2C, PPO, base class) (#1436)
* Fix SAC type hints, improve DQN ones
* Fix A2C and TD3 type hints
* Fix PPO type hints
* Fix on-policy type hints
* Fix base class type annotation, do not use defaults
* Update version
* Disable mypy for python 3.7
* Rename Gym26StepReturn
* Update continuous critic type annotation
* Fix pytype complain
---------
Co-authored-by: Carlos Luis <carlos.luisgonc@gmail.com>
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
Co-authored-by: Thomas Lips <37955681+tlpss@users.noreply.github.com>
Co-authored-by: tlips <thomas.lips@ugent.be>
Co-authored-by: tlpss <thomas17.lips@gmail.com>
Co-authored-by: Quentin GALLOUÉDEC <gallouedec.quentin@gmail.com>
* Fix type hints for DQN
* [ci skip] Remove commented line
* Refine types
* Fix vectorized obs detection
* Fix for pytype
* Fix check at load time to create replay buffer
* One config file to rule them all
* Delete unused config
---------
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
* generalize the use of `from gym import spaces`
* command line get system info
* Documentation line length for doc
* update changelog
* add space before OS platform to avoid ref to another issue
* format
* get_system_info update in changelog
* fix type check error
* fix get system info
* add comment about regex
* update version
* Adds deprecation warning if `eval_env` or `eval_freq` parameters are used. See #925
* added changelog entry
* added missing backtick
* deprecating `create_eval_env` parameter as well and adding comments to explain the `stacklevel` parameter used
* Updated tests to ignore DeprecationWarnings
* Updated changelog entry
* - Removed the `create_eval_env` parameter from the examples in the docs
- Removed information about the `create_eval_env` parameter from the migration docs
- Added information about deprecation of the `create_eval_env` parameter in the docs
* Add alternative in docstring
* Update docstrings
* `eval_freq` warning in docstring
* Add deprecation comments in tests
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
Co-authored-by: Antonin RAFFIN <antonin.raffin@ensta.org>
Co-authored-by: Quentin GALLOUÉDEC <gallouedec.quentin@gmail.com>
* Add progress bar callback and argument
* Update doc
* Update changelog
* Upgrade pytype in docker image
* Use tqdm.write in the logger to have cleaner output
* Fix logger test
* Fix when doing multiple calls to learn()
* Address comments from code-review
* Fix return type for load, learn in BaseAlgorithm
* Update changelog
* Add typing extensions to dependencies
* Import directly from typing for python >3.11
* Reorder changelog to reflect merge order
* Roll back to typevar solution
* Updated changelog
* Remove typing extensions requirement
* Update base_class.py
* Remove final point in changelog
* Additional type fixes across project
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
Co-authored-by: Antonin RAFFIN <antonin.raffin@ensta.org>
* Fix replay_buffer_class type annotation
* Update changelog
* Further replacement of same type annotation issue
* Formatting
* Rolled back formatting changes for consistency
* Use higher resolution time and round up to eps
* Update changelog
* Add test case
* Fix formatting, time()->time_ns
* Bugfix: ns is integer not float
* Move test to better place
* Divide by 1e9 earlier
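The FPS fix described in this block can be sketched as follows (variable names are illustrative):

```python
import sys
import time

start_time = time.time_ns()  # integer nanoseconds; time.time() floats lose precision
num_timesteps = 1_000
# ... training loop would run here ...
# divide by 1e9 early to work in seconds, and clamp to epsilon
# so the FPS computation can never divide by zero
elapsed_seconds = max((time.time_ns() - start_time) / 1e9, sys.float_info.epsilon)
fps = int(num_timesteps / elapsed_seconds)
```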
* Replacing the policy registry with policy "aliases"
* Fixing import order and SAC
* Changing arg. order to be sure policy_aliases is a kwarg
* Import orders
* Removing pytype error check
* Reformat
* Fix alias import
* Not using mutable {} as default for policy_aliases
* Empty aliases initialization
* Using static attributes for policy_aliases
* Fixing isort
* Fixing back bad merge
* Running isort
* Fixing aliases for A2C and PPO
* Using f-string
* Moving policy_aliases definition position
* Adding change in the changelog
* Update version
Co-authored-by: Antonin Raffin <antonin.raffin@ensta.org>
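The alias mechanism these commits introduce can be sketched as follows (the class names are hypothetical stand-ins for the SB3 classes):

```python
from typing import ClassVar, Dict, Type


class BasePolicySketch:
    """Stand-in for the SB3 base policy class."""


class MlpPolicySketch(BasePolicySketch):
    """Stand-in for an MLP policy."""


class AlgorithmSketch:
    # static class attribute replaces the old global policy registry and
    # avoids a mutable {} default argument in __init__
    policy_aliases: ClassVar[Dict[str, Type[BasePolicySketch]]] = {
        "MlpPolicy": MlpPolicySketch,
    }

    def __init__(self, policy):
        if isinstance(policy, str):
            policy = self.policy_aliases[policy]
        self.policy_class = policy
```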
* Removing dead code for handling time limits (see #829)
* Mention `remove_time_limit_termination` in the changelog
* Update changelog.rst
Co-authored-by: Antonin RAFFIN <antonin.raffin@ensta.org>
* Add multi-env training support for SAC
* Fix for dict obs
* Pytype fixes
* Fix assert on number of envs
* Remove for loop
* Add support for Dict obs
* Start cleanup
* Update doc and bug fix
* Add support for vectorized action noise
and add multi env example for off-policy
* Update version
* Bug fix with VecNormalize
* Update README table
* Update variable names
* Update changelog and version
* Update doc and fix for `gradient_steps=-1`
* Add test for `gradient_steps=-1`
* Disable pytype pyi errors
* Fix for DQN
* Update comment on deepcopy
* Remove episode_reward field
* Fix RolloutReturn
* Avoid modification by reference
* Fix error message
Co-authored-by: Anssi <kaneran21@hotmail.com>
* Store number of timesteps at the beginning of each learn cycle
* Update changelog
* Set default _num_timesteps_at_start in the constructor
* Test case for FPS logger
* Adjust test to cover both on-policy and off-policy algorithms
* Fix formatting
* Update test and add comment
* Fix test
Co-authored-by: Oleksii Kachaiev <okachaiev@riotgames.com>
Co-authored-by: Antonin RAFFIN <antonin.raffin@ensta.org>
* Add `system_env_info`
* Add `print_system_info` to load
and store system info at save time
* Remove TODO
* Rename to `get_system_info`
* Import as sb3 for consistency
* Update changelog
* Add warning for old SB3 versions
* Use underscore literal for more clarity
* Use a consistent key to log the total timesteps
This changes the timestep logging key of on-policy algorithms from
`time/total_timesteps` to `time/total timesteps` (note the
underscore/space). The off-policy algorithms and the eval callback
already use the latter, so this behavior is more consistent.
* Use underscores instead of spaces in logging keys
Most keys already followed this policy and consistent behavior is
friendlier to new users.
* Minor edit and bump version
Co-authored-by: Antonin Raffin <antonin.raffin@ensta.org>
* make sure DQN policy is always in correct mode - train or eval
* make set_training_mode an abstract method of the base policy - safer
* update docstring of _build method to note that the target network is put into eval mode
* use set_training_mode to put the dqn target network into eval mode
* use set_training_mode to set the training model of the q-network
* move set_training_mode abstract method from BasePolicy to BaseModel
* set train and eval mode for TD3
* make sure critic is always in correct mode during train
* set train and eval mode for SAC
* add comment re batch norm and dropout
* set train and eval mode for A2C and PPO
* add tests for collect rollouts with batch norm
* fix formatting
* update change log
* update version
* remove Optional typing for batch size - causing type check to fail
* Fix scipy dependency for toy text envs
* implement set_training_mode method in BaseModel
* move all tests of train/eval mode to test_train_eval_mode
* call learn with learning_starts = total_timesteps to test that collect_rollouts does not update batch norm
* remove extra calls to set_training_mode in train method of TD3 and SAC
* Allow gradient_steps=0
* Refactor tests
* Add comment + use aliases
* Typos
Co-authored-by: Antonin Raffin <antonin.raffin@ensta.org>
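The `set_training_mode` pattern these commits add can be sketched as follows (assuming PyTorch; `QNetworkSketch` and its layer sizes are illustrative):

```python
import torch as th


class QNetworkSketch(th.nn.Module):
    """Sketch of a Q-network whose dropout layer reacts to train/eval mode."""

    def __init__(self):
        super().__init__()
        self.q_net = th.nn.Sequential(
            th.nn.Linear(4, 8),
            th.nn.Dropout(p=0.5),  # behaves differently in train vs eval mode
            th.nn.Linear(8, 2),
        )

    def set_training_mode(self, mode: bool) -> None:
        # one explicit switch: train mode during gradient updates,
        # eval mode during rollout collection and prediction
        self.train(mode)
```

Keeping the switch explicit is what makes collect_rollouts safe in the presence of batch norm and dropout.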
* training and evaluation: call model.train() and model.eval() to enable and disable dropout and batchnorm
* Add comment documentation
* Fix train and eval for the Actor class
* Run black
* Add github handle to changelog
* Add unit tests for PPO and DQN
* Refactor unit test
* Run black
* unit test: add a dropout layer and check that calling predict with deterministic=True is deterministic
* documentation: add bugfix description to changelog
* unit test: use learning_starts=0, decrease the size of the network and use more training steps
* on-policy algorithms: call policy.train() and policy.eval() instead of disable_training and enable_training, as the policy is a th.nn.Module
* Rename unit test
* unit test: use drop out probability of 0.5
* Call policy.train and policy.eval
* Fixes + update tests
* Remove unneeded eval
Co-authored-by: David Blom <davidsblom@gmail.com>
Co-authored-by: Antonin Raffin <antonin.raffin@ensta.org>
* Add support for custom objects
* Add python 3.8 to the CI
* Bump version
* PyType fixes
* [ci skip] Fix typo
* Add note about slow-down + fix typos
* Minor edits to the doc
* Bug fix for DQN
* Update test
* Add test for custom objects
* Removed unneeded overrides of feature_extractor and normalize_images in the TD3 Actor.
* Add learning rate schedule example (#248)
* Add learning rate schedule example
* Update docs/guide/examples.rst
Co-authored-by: Adam Gleave <adam@gleave.me>
* Address comments
Co-authored-by: Adam Gleave <adam@gleave.me>
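A linear schedule of the kind this example adds might look like the following sketch (function name and decay shape are illustrative):

```python
from typing import Callable


def linear_schedule(initial_value: float) -> Callable[[float], float]:
    """Return a schedule that decays linearly from initial_value to 0."""

    def func(progress_remaining: float) -> float:
        # progress_remaining goes from 1.0 (start of training) to 0.0 (end)
        return progress_remaining * initial_value

    return func
```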
* Add supported action spaces checks (#254)
* Add supported action spaces checks
* Address comment
* Use `pass` in an abstractmethod instead of deleting the arguments.
* Remove the "deterministic" keyword from the forward method of the TD3 Actor since it always is deterministic anyways.
* Rename _get_data to _get_data_to_reconstruct_model.
_get_data was too generic and could have meant anything.
* Remove the n_episodes_rollout parameter and allow passing tuples as train_freq instead.
* Fix docstring of `train_freq` parameter.
* Black fixes.
* Fix TD3 delayed update + rename `_get_data()`
* Fix TD3 test
* Normalize `train_freq` to a tuple in the constructor and turn the warning into an assert.
* Make one step the default train frequency.
* Black fixes.
* Change np.bool to bool.
* Use the tuple format to specify the train frequency in steps or episodes in `collect_rollouts` of the off-policy algorithm.
* Use the tuple format to specify the train frequency in steps or episodes in `collect_rollouts` of HER.
* Use named tuple for train freq
* Rename train_freq to train_every and TrainFreq to ExperienceDuration. Also add some type annotations and documentation.
* Black fixes.
* Revert to train_freq
* Fix terminal observation issues
* Typo
* Fix action noise bug in HER
* Add assert when loading HER models
* Update version
Co-authored-by: Antonin RAFFIN <antonin.raffin@ensta.org>
Co-authored-by: Adam Gleave <adam@gleave.me>
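The tuple-based `train_freq` normalization described above can be sketched as follows (`normalize_train_freq` is a hypothetical helper; `TrainFreq`/`TrainFrequencyUnit` mirror the named tuple mentioned in these commits):

```python
from enum import Enum
from typing import NamedTuple, Tuple, Union


class TrainFrequencyUnit(Enum):
    STEP = "step"
    EPISODE = "episode"


class TrainFreq(NamedTuple):
    frequency: int
    unit: TrainFrequencyUnit


def normalize_train_freq(train_freq: Union[int, Tuple[int, str]]) -> TrainFreq:
    # constructor-time normalization: a bare int means "train every N steps",
    # a (frequency, "step"/"episode") tuple picks the unit explicitly
    if isinstance(train_freq, int):
        return TrainFreq(train_freq, TrainFrequencyUnit.STEP)
    frequency, unit = train_freq
    return TrainFreq(frequency, TrainFrequencyUnit(unit))
```

Normalizing once in the constructor lets the rest of the code assume a single `TrainFreq` shape instead of branching on the user-supplied format.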
* Add callback signature to the learning rate type annotations.
* Add callback signature to the learning rate schedule type annotations.
* Add missing type annotations for learning rate callbacks.
* Add signature to old-style learning and evaluation callbacks.
* Add signature to env wrapper callback.
* Add type annotation to closure function.
* Use MaybeCallback more consistently.
* Update changelog.
* Remove now unused List import.
* Fix import order.
* Add type alias for learning rate schedules.
* Optimize imports.
* Fix messed up import.
* Remove resolved TODO.
Co-authored-by: Antonin RAFFIN <antonin.raffin@ensta.org>
* Small docstring improvements related to the notion of Rollout
* documented changes in changelog.rst, added myself to contributors
* Minor edits
Co-authored-by: Stefan Heid <stefan.heid@upb.de>
Co-authored-by: Antonin RAFFIN <antonin.raffin@ensta.org>
* Added working HER version; online sampling is missing.
* Updated test_her.
* Added first version of online her sampling. Still problems with tensor dimensions.
* Reformat
* Fixed tests
* Added some comments.
* Updated changelog.
* Add missing init file
* Fixed some small bugs.
* Reduced arguments for HER, small changes.
* Added getattr. Fixed bug for online sampling.
* Updated save/load functions. Small changes.
* Added her to init.
* Updated save method.
* Updated her ratio.
* Move obs_wrapper
* Added DQN test.
* Fix potential bug
* Offline and online her share same sample_goal function.
* Changed lists into arrays.
* Updated her test.
* Fix online sampling
* Fixed action bug. Updated time limit for episodes.
* Updated convert_dict method to take keys as arguments.
* Renamed obs dict wrapper.
* Seed bit flipping env
* Remove get_episode_dict
* Add fast online sampling version
* Added documentation.
* Vectorized reward computation
* Vectorized goal sampling
* Update time limit for episodes in online her sampling.
* Fix max episode length inference
* Bug fix for Fetch envs
* Fix for HER + gSDE
* Reformat (new black version)
* Added info dict to compute new reward. Check her_replay_buffer again.
* Fix info buffer
* Updated done flag.
* Fixes for gSDE
* Offline her version uses now HerReplayBuffer as episode storage.
* Fix num_timesteps computation
* Fix get torch params
* Vectorized version for offline sampling.
* Modified offline her sampling to use sample method of her_replay_buffer
* Updated HER tests.
* Updated documentation
* Cleanup docstrings
* Updated to review comments
* Fix pytype
* Update according to review comments.
* Removed random goal strategy. Updated sample transitions.
* Updated migration. Removed time signal removal.
* Update doc
* Fix potential load issue
* Add VecNormalize support for dict obs
* Updated saving/loading replay buffer for HER.
* Fix test memory usage
* Fixed save/load replay buffer.
* Fixed save/load replay buffer
* Fixed transition index after loading replay buffer in online sampling
* Better error handling
* Add tests for get_time_limit
* More tests for VecNormalize with dict obs
* Update doc
* Improve HER description
* Add test for sde support
* Add comments
* Add comments
* Remove check that was always valid
* Fix for terminal observation
* Updated buffer size in offline version and reset of HER buffer
* Reformat
* Update doc
* Remove np.empty + add doc
* Fix loading
* Updated loading replay buffer
* Separate online and offline sampling + bug fixes
* Update tensorboard log name
* Version bump
* Bug fix for special case
Co-authored-by: Antonin Raffin <antonin.raffin@dlr.de>
Co-authored-by: Antonin RAFFIN <antonin.raffin@ensta.org>
* Add auto formatting with black and isort
* Reformat code
* Ignore typing errors
* Add note about line length
* Add minimum version for isort
* Add commit-checks
* Update docker image
* Fixed lost import (during last merge)
* Fix opencv dependency