* Fix bug when saving/loading q-net alone
* Rename variables to match SB3-contrib
* Update docker image
* Set min version for tensorboard
* Add SB3-Contrib to doc
* Update DQN
* Apply suggestions from code review
Co-authored-by: Adam Gleave <adam@gleave.me>
* Update wording
Co-authored-by: Adam Gleave <adam@gleave.me>
* Update evaluate_policy to use monitor data if available
* Update documentation
* Cleaning up
* Remove unnecessary typing trickery
* Update doc
* Rename is_wrapped to clarify it is for vecenvs
* Add is_wrapped for regular envs
* Add is_wrapped call for subprocvecenv and update code for circular imports
* Move new functions back to env_util and fix imports
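A minimal sketch of what these helpers in env_util can look like for regular (non-vectorized) envs, assuming SB3's conventions; the check simply walks the gym.Wrapper chain:

```python
import gym

def unwrap_wrapper(env: gym.Env, wrapper_class) -> gym.Wrapper:
    """Return the first wrapper of type wrapper_class, or None if absent."""
    env_tmp = env
    while isinstance(env_tmp, gym.Wrapper):
        if isinstance(env_tmp, wrapper_class):
            return env_tmp
        env_tmp = env_tmp.env
    return None

def is_wrapped(env: gym.Env, wrapper_class) -> bool:
    """Check whether env is wrapped (at any level) by wrapper_class."""
    return unwrap_wrapper(env, wrapper_class) is not None
```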
* Update changelog
* Clarify evaluate_policy docs
* Add tests for wrappers modifying episode lengths
* Fix tests
* Update changelog
* Minor edits
* Add warn switch to evaluate_policy and update tests
Co-authored-by: Antonin RAFFIN <antonin.raffin@ensta.org>
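A sketch of the resulting evaluate_policy usage (env and model choices are illustrative): with a Monitor-wrapped env, episode statistics come from the monitor data even when other wrappers modify rewards or episode lengths, and warn=True surfaces a warning when no Monitor wrapper is found.

```python
import gym
from stable_baselines3 import PPO
from stable_baselines3.common.monitor import Monitor
from stable_baselines3.common.evaluation import evaluate_policy

# Wrap with Monitor so evaluate_policy can read true episode rewards/lengths.
env = Monitor(gym.make("CartPole-v1"))
model = PPO("MlpPolicy", env, verbose=0)
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, warn=True)
```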
* Update doc and add new example
* Add save/load replay buffer example
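A sketch of the documented pattern, with illustrative paths: the replay buffer is saved and restored separately, since it is excluded from model.save().

```python
from stable_baselines3 import SAC

model = SAC("MlpPolicy", "Pendulum-v0", verbose=0)
model.learn(total_timesteps=1000)
model.save("sac_pendulum")
model.save_replay_buffer("sac_replay_buffer")  # stored separately from the model

loaded = SAC.load("sac_pendulum")
loaded.load_replay_buffer("sac_replay_buffer")  # restore before continuing training
```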
* Add save format + export doc
* Add example for get/set parameters
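And a sketch of the get/set parameters round trip (any modification step is illustrative):

```python
from stable_baselines3 import PPO

model = PPO("MlpPolicy", "CartPole-v1", verbose=0)
params = model.get_parameters()  # dict of state dicts, keyed e.g. by "policy"
# ... inspect or modify params here ...
model.set_parameters(params, exact_match=True)
```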
* Typos and minor edits
* Add results sections
* Add note about performance
* Add DDPG results
* Address comments
* Fix grammar/wording
Co-authored-by: Anssi "Miffyli" Kanervisto <kaneran21@hotmail.com>
* Added working HER version; online sampling is still missing.
* Updated test_her.
* Added first version of online HER sampling. There are still problems with tensor dimensions.
* Reformat
* Fixed tests
* Added some comments.
* Updated changelog.
* Add missing init file
* Fixed some small bugs.
* Reduced arguments for HER, small changes.
* Added getattr. Fixed bug for online sampling.
* Updated save/load functions. Small changes.
* Added HER to init.
* Updated save method.
* Updated HER ratio.
* Move obs_wrapper
* Added DQN test.
* Fix potential bug
* Offline and online HER now share the same sample_goal function.
* Changed lists into arrays.
* Updated HER test.
* Fix online sampling
* Fixed action bug. Updated time limit for episodes.
* Updated convert_dict method to take keys as arguments.
* Renamed obs dict wrapper.
* Seed bit flipping env
* Remove get_episode_dict
* Add fast online sampling version
* Added documentation.
* Vectorized reward computation
* Vectorized goal sampling
* Update time limit for episodes in online HER sampling.
* Fix max episode length inference
* Bug fix for Fetch envs
* Fix for HER + gSDE
* Reformat (new black version)
* Added info dict to compute new reward. Check her_replay_buffer again.
* Fix info buffer
* Updated done flag.
* Fixes for gSDE
* Offline HER version now uses HerReplayBuffer as episode storage.
* Fix num_timesteps computation
* Fix get torch params
* Vectorized version for offline sampling.
* Modified offline HER sampling to use the sample method of her_replay_buffer
* Updated HER tests.
* Updated documentation
* Cleanup docstrings
* Updated according to review comments
* Fix pytype
* Update according to review comments.
* Removed random goal strategy. Updated sample transitions.
* Updated migration. Removed time signal removal.
* Update doc
* Fix potential load issue
* Add VecNormalize support for dict obs
* Updated saving/loading replay buffer for HER.
* Fix test memory usage
* Fixed save/load replay buffer.
* Fixed save/load replay buffer
* Fixed transition index after loading replay buffer in online sampling
* Better error handling
* Add tests for get_time_limit
* More tests for VecNormalize with dict obs
* Update doc
* Improve HER description
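A sketch of the HER usage these commits build up to; the signature follows the SB3 docs of this era, and the BitFlippingEnv import path is an assumption that may vary between versions:

```python
from stable_baselines3 import HER, SAC
from stable_baselines3.common.envs import BitFlippingEnv

env = BitFlippingEnv(n_bits=15, continuous=True, max_steps=15)
model = HER(
    "MlpPolicy",
    env,                    # a GoalEnv with dict observations
    SAC,                    # off-policy algorithm used under the hood
    n_sampled_goal=4,
    goal_selection_strategy="future",
    online_sampling=True,   # sample virtual transitions on the fly
    max_episode_length=15,
)
model.learn(total_timesteps=10000)
```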
* Add test for sde support
* Add comments
* Add comments
* Remove check that was always valid
* Fix for terminal observation
* Updated buffer size in offline version and reset of HER buffer
* Reformat
* Update doc
* Remove np.empty + add doc
* Fix loading
* Updated loading replay buffer
* Separate online and offline sampling + bug fixes
* Update tensorboard log name
* Version bump
* Bug fix for special case
Co-authored-by: Antonin Raffin <antonin.raffin@dlr.de>
Co-authored-by: Antonin RAFFIN <antonin.raffin@ensta.org>
* Add support to log videos via tensorboard
The ability to look at renderings of an agent's trajectories during
training helps evaluate the performance of that agent: one can see what
the agent actually does at various stages of training. For now only
tensorboard is supported, as it is straightforward to implement.
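A minimal sketch of how such video logging can be done from a callback, assuming frames shaped (N, T, C, H, W) for tensorboard, a single-env VecEnv with rgb_array rendering, and moviepy installed; names, frequency, and fps are illustrative.

```python
import numpy as np
import torch as th
from stable_baselines3.common.callbacks import BaseCallback
from stable_baselines3.common.logger import Video

class VideoRecorderCallback(BaseCallback):
    """Collect rendered frames and log them to tensorboard as a video."""

    def __init__(self, log_freq: int = 1000):
        super().__init__()
        self.log_freq = log_freq
        self.screens = []

    def _on_step(self) -> bool:
        # env.render gives HxWxC uint8; transpose to CxHxW for tensorboard.
        screen = self.training_env.render(mode="rgb_array")
        self.screens.append(screen.transpose(2, 0, 1))
        if self.n_calls % self.log_freq == 0:
            frames = th.from_numpy(np.asarray([self.screens]))  # (1, T, C, H, W)
            self.logger.record(
                "trajectory/video",
                Video(frames, fps=40),
                exclude=("stdout", "log", "json", "csv"),
            )
            self.screens = []
        return True
```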
* Remove moviepy dependency from extra & doc update
* Removed the moviepy dependency from the `extra` dependencies so the
user can decide whether to install it or not
* Updated the video logging docs with proper naming and comments
* Added a warning to the video logging docs explaining the moviepy
dependency
* Updated the video test to check for a warning when moviepy is missing
* Update doc
* Update FormatUnsupportedError message
* Also log the offending value, making the error message more expressive
* Fix reporting the correct format and update regression test
* Use string description in FormatUnsupportedError
* Instead of converting the value to a string without the user's control,
the constructor now takes a string representation of the value
* Use string description in FormatUnsupportedError
* Use a shorter string description for the error to reduce verbosity
Co-authored-by: Bernhard Raml <raml.bernhard@gmail.com>
Co-authored-by: Antonin RAFFIN <antonin.raffin@ensta.org>
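A sketch of where those FormatUnsupportedError changes land (exact wording is illustrative): the error takes the unsupported formats plus a short string description of the offending value supplied by the caller.

```python
class FormatUnsupportedError(NotImplementedError):
    def __init__(self, unsupported_formats, value_description: str):
        if len(unsupported_formats) > 1:
            format_str = f"formats {', '.join(unsupported_formats)} are"
        else:
            format_str = f"format {unsupported_formats[0]} is"
        super().__init__(
            f"The {format_str} not supported for the {value_description} value logged.\n"
            f"You can exclude formats via the `exclude` parameter of the logger's `record` function."
        )
```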
* Add custom arch for off-policy actor/critic networks
* Fix type hints
* Address comments
* Make sure number of updated parameters match in polyak
* Add zip_strict for strict-length zipping
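A minimal sketch of zip_strict: unlike plain zip, which silently truncates to the shortest iterable, it raises when lengths differ, which is what turns a parameter-count mismatch in the polyak update into a loud error.

```python
from itertools import zip_longest

def zip_strict(*iterables):
    """Like zip, but raise ValueError if the iterables differ in length."""
    sentinel = object()
    for combo in zip_longest(*iterables, fillvalue=sentinel):
        if sentinel in combo:
            raise ValueError("Iterables have different lengths")
        yield combo

# e.g. pairing online and target network parameters:
# for param, target_param in zip_strict(params, target_params): ...
```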
* Fix building docs
* Add test for zip strict
* Faster tests
Co-authored-by: Anssi "Miffyli" Kanervisto <kaneran21@hotmail.com>
* Add DDPG + TD3 with any number of critics
* Allow any number of critics for SAC
* Update doc
* [ci skip] Update DDPG example
* Remove unused parameter
* Add DDPG to identity test
* Fix computation with n_critics=1,3
* Update doc
* Apply suggestions from code review
Co-authored-by: Adam Gleave <adam@gleave.me>
* Update docstrings for off-policy algos
* Add check for sde
Co-authored-by: Adam Gleave <adam@gleave.me>
* Created DQN template according to the paper.
Next steps:
- Create Policy
- Complete Training
- Debug
* Changed Base Class
* refactor save to be consistent with overriding the excluded_save_params function; do not try to exclude the parameters twice.
* Added simple DQN policy
* Finished learn and train functions
- missing correct loss computation
* changed collect_rollouts to work with discrete space
* moved discrete space collect_rollouts to dqn
* basic dqn working
* deleted SDE related code
* added gradient clipping and moved greedy policy to policy
* changed policy to implement target network
and added soft update (in fact the standard tau is 1, so it is a hard update)
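A sketch of the polyak (soft) target update described here; with tau=1 it reduces to the standard hard update:

```python
import torch as th

def polyak_update(params, target_params, tau: float) -> None:
    """target <- (1 - tau) * target + tau * online, in-place."""
    with th.no_grad():
        for param, target_param in zip(params, target_params):
            target_param.data.mul_(1.0 - tau)
            target_param.data.add_(tau * param.data)
```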
* fixed policy setup
* rebase target_update_interval on _n_updates
* adapted all tests
all tests passing
* Move to stable-baselines3
* Fixes for DQN
* Fix tests + add CNNPolicy
* Allow any optimizer for DQN
* added some util functions to create an arbitrary linear schedule; fixed a pickle problem with the old exploration schedule
* more documentation
* changed buffer dtype
* refactor and document
* Added Sphinx Documentation
Updated changelog.rst
* removed custom collect_rollouts as it is no longer necessary
* Implemented suggestions to clean code and documentation.
* extracted some functions in tests to reduce duplicated code
* added support for exploration_fraction
* Fixed exploration_fraction
* Added documentation
* Fixed get_linear_fn -> proper progress scaling
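A sketch of get_linear_fn with the corrected progress scaling, following SB3's convention that progress_remaining decreases from 1 to 0 over training:

```python
def get_linear_fn(start: float, end: float, end_fraction: float):
    """Linear interpolation from start to end over the first end_fraction of training."""
    def func(progress_remaining: float) -> float:
        if (1 - progress_remaining) > end_fraction:
            return end
        return start + (1 - progress_remaining) * (end - start) / end_fraction
    return func

# e.g. epsilon decays from 1.0 to 0.05 over the first 10% of training:
exploration_schedule = get_linear_fn(1.0, 0.05, 0.1)
```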
* Merged master
* Added nature reference
* Changed default parameters to match https://www.nature.com/articles/nature14236/tables/1
* Fixed n_updates to be incremented correctly
* Correct train_freq
* Doc update
* added special parameter for DQN in tests
* different fix for test_discrete
* Update docs/modules/dqn.rst
Co-authored-by: Antonin RAFFIN <antonin.raffin@ensta.org>
* Update docs/modules/dqn.rst
Co-authored-by: Antonin RAFFIN <antonin.raffin@ensta.org>
* Update docs/modules/dqn.rst
Co-authored-by: Antonin RAFFIN <antonin.raffin@ensta.org>
* Added RMSProp in optimizer_kwargs, as described in the Nature paper
* Exploration fraction is the inverse of 50,000,000 (total frames) / 1,000,000 (frames with linear schedule), i.e. 1,000,000 / 50,000,000 = 0.02, according to the Nature paper
* Changelog update for buffer dtype
* standard exclude parameters are now always excluded to ensure proper saving, unless intentionally included via the ``include`` parameter
* slightly more iterations on test_discrete to pass the test
* added param use_rms_prop instead of mutable default argument
* forgot alpha
* using Huber loss, Adam, and learning rate 1e-4
* account for train_freq in update_target_network
* Added memory check for both buffers
* Doc updated for buffer allocation
* Added psutil Requirement
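A sketch of the buffer-allocation memory check idea, assuming psutil is available (shapes and sizes are illustrative):

```python
import warnings
import numpy as np
import psutil

buffer_size, obs_shape = 1_000_000, (84, 84, 4)
total_bytes = buffer_size * int(np.prod(obs_shape))  # uint8 observations: 1 byte each
available = psutil.virtual_memory().available
if total_bytes > available:
    warnings.warn(
        f"Replay buffer would need {total_bytes / 1e9:.2f}GB "
        f"but only {available / 1e9:.2f}GB of RAM is available"
    )
```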
* Adapted test_identity.py
* Fixes with new SB3 version
* Fix for tensorboard name
* Convert assert to warning and fix tests
* Refactor off-policy algorithms
* Fixes
* test: remove next_obs in replay buffer
* Update changelog
* Fix tests and use tmp_path where possible
* Fix sampling bug in buffer
* Do not store next obs on episode termination
* Fix replay buffer sampling
* Update comment
* moved epsilon from policy to model
* Update predict method
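A sketch of the resulting epsilon-greedy prediction, with exploration handled at the model level rather than inside the policy (names are illustrative and vectorized-env handling is omitted):

```python
import numpy as np

def predict(model, observation, deterministic: bool = False):
    """Epsilon-greedy action selection around the greedy policy."""
    if not deterministic and np.random.rand() < model.exploration_rate:
        action = np.array(model.action_space.sample())
    else:
        action, _ = model.policy.predict(observation, deterministic=True)
    return action
```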
* Update atari wrappers to match SB2
* Minor edit in the buffers
* Update changelog
* Merge branch 'master' into dqn
* Update DQN to new structure
* Fix tests and remove hardcoded path
* Fix for DQN
* Disable memory efficient replay buffer by default
* Fix docstring
* Add tests for memory efficient buffer
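A sketch of the memory-efficient buffer idea being tested here: next observations are not stored separately but read from the following slot, roughly halving observation memory. The indexing below is simplified and ignores the episode-termination caveat the commits above fix.

```python
import numpy as np

buffer_size, obs_dim = 1000, 4
observations = np.zeros((buffer_size, obs_dim), dtype=np.float32)

def get_next_obs(idx: int) -> np.ndarray:
    # next_obs of transition idx lives in the slot of transition idx + 1
    return observations[(idx + 1) % buffer_size]
```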
* Update changelog
* Split collect rollout
* Move target update outside `train()` for DQN
* Update changelog
* Update linear schedule doc
* Cleanup DQN code
* Minor edit
* Update version and docker images
Co-authored-by: Antonin RAFFIN <antonin.raffin@ensta.org>
* Split torch module code into torch_layers file
* Updated reference to CNN
* Change 'CxWxH' to 'CxHxW', as per common convention
* Fix missing import in policies.py
* Move PPOPolicy to OnlineActorCriticPolicy
* Create OnPolicyRLModel from PPO, and make A2C and PPO inherit
* Update A2C optimizer comment
* Clean weight init scales for clarity
* Fix A2C log_interval default parameter
* Rename 'progress' to 'progress_remaining'
* Rename 'Models' to 'Algorithms'
* Rename 'OnlineActorCriticPolicy' to 'ActorCriticPolicy'
* Move static functions out from BaseAlgorithm
* Move on/off_policy base algorithms to their own files
* Add files for A2C/PPO
* Fix docs
* Fix pytype
* Update documentation on OnPolicyAlgorithm
* Add proper docstring for on_policy rollout gathering
* Add a bit of clarification on the MlpPolicy/CnnPolicy naming
* Move static function is_vectorized_policies to utils.py
* Checking docstrings, pep8 fixes
* Update changelog
* Clean changelog
* Remove policy warnings for sac/td3
* Add monitor_wrapper for OnPolicyAlgorithm. Clean tb logging variables. Add parameter keywords to OffPolicyAlgorithm super init
Co-authored-by: Antonin RAFFIN <antonin.raffin@ensta.org>
* initial commit for tensorboard integration
* Added tb logger to ppo (with output exclusions)
* fixed truncated stdout
* categorize stdout outputs by tag
* separated exclusions from values, added missing logs
* saving exclusions as dict instead of list
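A sketch of recording values with per-key output exclusions using the module-level logger of this era (keys and values are illustrative):

```python
from stable_baselines3.common import logger

logger.record("train/entropy_loss", 0.01)
# Keep a verbose key out of the stdout table but still send it to tensorboard:
logger.record("train/std", 0.5, exclude="stdout")
logger.dump(step=1000)
```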
* reformatting, auto run indexing
* included renaming suggestions, fixed tests
* tb support for sac
* linting
* moved logging to base class
* tb support for td3
* removed histograms, non-verbose output working
* modified changelog
* linting
* fixed type error
* moved logger config to utils
* removed episode_rewards log from ppo
* Enable tensorboard in tests
* Remove unused import
* Update logger sub titles
* Minor edit for PPO
* Update logger and tb log folder
* Pass correct logger to Callbacks
* updated docs
* added tb example image to docs
* add support for continuing training in tensorboard
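A sketch of the tensorboard workflow this enables, with illustrative paths and run names: passing tensorboard_log activates logging, and reset_num_timesteps=False keeps the step counter (the tensorboard x-axis) continuous across learn() calls.

```python
from stable_baselines3 import PPO

model = PPO("MlpPolicy", "CartPole-v1", tensorboard_log="./ppo_tb/")
model.learn(total_timesteps=10_000, tb_log_name="first_run")
# Continue training and keep num_timesteps monotonic in tensorboard:
model.learn(total_timesteps=10_000, tb_log_name="first_run", reset_num_timesteps=False)
```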
* added tensorboard to docs index
* added tb test
* moved logger config to _setup_learn, updated tests
* accessing verbose from base class
* Update doc and tests
* Rename session -> time
* Update version
* Update logger truncate
* Update types
* Remove duplicated code
Co-authored-by: Antonin RAFFIN <antonin.raffin@ensta.org>