Dealing with NaNs and infs
==========================

During the training of a model on a given environment, it is possible that the model becomes
completely corrupted when a NaN or an inf is fed to it or returned by it.

How and why?
------------

The issue arises when NaNs or infs do not crash the program, but simply get propagated through
the training, until all the floating point numbers converge to NaN or inf. This is in line with the
`IEEE Standard for Floating-Point Arithmetic (IEEE 754) <https://ieeexplore.ieee.org/document/4610935>`_, which says:

.. note::
    Five possible exceptions can occur:

    - Invalid operation (:math:`\sqrt{-1}`, :math:`\inf \times 1`, :math:`\text{NaN}\ \mathrm{mod}\ 1`, ...) returns NaN
    - Division by zero:

        - if the operand is not zero (:math:`1/0`, :math:`-2/0`, ...) returns :math:`\pm\inf`
        - if the operand is zero (:math:`0/0`) returns signaling NaN

    - Overflow (exponent too high to represent) returns :math:`\pm\inf`
    - Underflow (exponent too low to represent) returns :math:`0`
    - Inexact (not representable exactly in base 2, e.g. :math:`1/5`) returns the rounded value (e.g. :code:`assert (1/5) * 3 == 0.6000000000000001`)

Of these, only ``Division by zero`` will signal an exception; the rest will propagate invalid values quietly.

In Python, dividing by zero will indeed raise the exception ``ZeroDivisionError: float division by zero``,
but the other cases are ignored.

NumPy, by default, will warn (``RuntimeWarning: invalid value encountered``)
but will not halt the code.

Anomaly detection with PyTorch
------------------------------

To enable NaN detection in PyTorch you can do

.. code-block:: python

    import torch as th

    th.autograd.set_detect_anomaly(True)

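As a quick sanity check, here is a minimal sketch (not part of Stable-Baselines3) that produces a NaN during the backward pass; with anomaly detection enabled, PyTorch raises a ``RuntimeError`` whose traceback points at the forward operation that created the NaN:

.. code-block:: python

    import torch as th

    th.autograd.set_detect_anomaly(True)

    x = th.tensor([0.0], requires_grad=True)
    y = x * th.log(x)  # 0 * log(0) = 0 * (-inf) = NaN in the forward pass

    try:
        y.sum().backward()
        caught = False
    except RuntimeError:
        # the error message names the forward operation that produced the NaN
        caught = True

    print("NaN gradient caught:", caught)

Without ``set_detect_anomaly(True)``, the same backward pass would silently fill the gradients with NaNs.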
Numpy parameters
----------------

NumPy has a convenient way of dealing with invalid values: `numpy.seterr <https://docs.scipy.org/doc/numpy/reference/generated/numpy.seterr.html>`_,
which defines, for the Python process, how it should handle floating point errors.

.. code-block:: python

    import numpy as np

    np.seterr(all="raise")  # define before your code.

    print("numpy test:")

    a = np.float64(1.0)
    b = np.float64(0.0)
    val = a / b  # this will now raise an exception instead of a warning.
    print(val)

This will also catch floating point overflow issues:

.. code-block:: python

    import numpy as np

    np.seterr(all="raise")  # define before your code.

    print("numpy overflow test:")

    a = np.float64(10)
    b = np.float64(1000)
    val = a ** b  # this will now raise an exception
    print(val)

but it will not catch the propagation of already-invalid values:

.. code-block:: python

    import numpy as np

    np.seterr(all="raise")  # define before your code.

    print("numpy propagation test:")

    a = np.float64("NaN")
    b = np.float64(1.0)
    val = a + b  # this will neither warn nor raise anything
    print(val)

VecCheckNan Wrapper
-------------------

In order to find when and from where an invalid value originated, Stable-Baselines3 comes with a ``VecCheckNan`` wrapper.

It monitors the actions, observations, and rewards, and reports which action or observation caused the invalid value and where it came from.

.. code-block:: python

    import gymnasium as gym
    import numpy as np
    from gymnasium import spaces

    from stable_baselines3 import PPO
    from stable_baselines3.common.vec_env import DummyVecEnv, VecCheckNan


    class NanAndInfEnv(gym.Env):
        """Custom Environment that raises NaNs and Infs"""

        metadata = {"render_modes": ["human"]}

        def __init__(self):
            super().__init__()
            self.action_space = spaces.Box(low=-np.inf, high=np.inf, shape=(1,), dtype=np.float64)
            self.observation_space = spaces.Box(low=-np.inf, high=np.inf, shape=(1,), dtype=np.float64)

        def step(self, _action):
            randf = np.random.rand()
            if randf > 0.99:
                obs = float("NaN")
            elif randf > 0.98:
                obs = float("inf")
            else:
                obs = randf
            # gymnasium API: obs, reward, terminated, truncated, info
            return np.array([obs]), 0.0, False, False, {}

        def reset(self, seed=None, options=None):
            super().reset(seed=seed)
            return np.array([0.0]), {}

        def render(self):
            pass


    # Create environment
    env = DummyVecEnv([lambda: NanAndInfEnv()])
    env = VecCheckNan(env, raise_exception=True)

    # Instantiate the agent
    model = PPO("MlpPolicy", env)

    # Train the agent
    model.learn(total_timesteps=int(2e5))  # this will crash explaining that the invalid value originated from the environment.

RL Model hyperparameters
------------------------

Depending on your hyperparameters, NaNs can occur much more often.
A great example of this: https://github.com/hill-a/stable-baselines/issues/340

Be aware that the default hyperparameters seem to work in most cases;
however, your environment might not play nicely with them.
If this is the case, try to read up on the effect each hyperparameter has on the model,
so that you can tune them to get a stable model. Alternatively, you can try automatic hyperparameter tuning (included in the RL Zoo).

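For instance, lowering the learning rate is a common first step towards numerical stability; a sketch (the values and the ``Pendulum-v1`` environment are illustrative, not recommendations):

.. code-block:: python

    from stable_baselines3 import PPO

    # Illustrative values: a learning rate below the 3e-4 default,
    # and explicit gradient norm clipping (0.5 is already the default).
    model = PPO(
        "MlpPolicy",
        "Pendulum-v1",
        learning_rate=1e-4,
        max_grad_norm=0.5,
    )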
Missing values from datasets
----------------------------

If your environment is generated from an external dataset, do not forget to make sure the dataset does not contain NaNs,
as some datasets use NaN as a surrogate value for missing data.

Here is some reading material about finding NaNs: https://pandas.pydata.org/pandas-docs/stable/user_guide/missing_data.html

And about filling the missing values with something else (imputation): https://towardsdatascience.com/how-to-handle-missing-data-8646b18db0d4
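A minimal NumPy sketch of both steps (the small array here stands in for a hypothetical dataset), detecting NaNs and imputing them with the column mean:

.. code-block:: python

    import numpy as np

    # hypothetical dataset with one missing value
    data = np.array([[1.0, 2.0], [np.nan, 4.0]])

    # detect missing values before building the environment from the data
    print("dataset contains NaNs:", np.isnan(data).any())

    # one simple imputation strategy: replace each NaN with its column mean
    col_mean = np.nanmean(data, axis=0)
    clean = np.where(np.isnan(data), col_mean, data)
    print("after imputation:", clean)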