pytorch/torch
Richard Zou c9a0204ef4 Disable functorch modes in testing's freeze_rng_state(), part 2 (#81109)
I forgot to update one line in
https://github.com/pytorch/pytorch/pull/81006. torch.get_rng_state()
returns a Tensor that can also be affected by modes, so it also needs a
no_functorch() context manager.

Test Plan:
- tested with functorch tests on CUDA (that's how I discovered this problem)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81109
Approved by: https://github.com/samdow
2022-07-08 20:18:56 +00:00
_C [cuDNN V8 API] (reopen 2) Allow the number of kernels profiled under torch.backends.cudnn.benchmark = True to be limited (#78299) 2022-07-07 23:25:23 +00:00
_C_flatbuffer
_decomp Revert "Make kl_div a composite function. (#80334)" 2022-07-06 17:51:06 +00:00
_lazy python bindings for create_metric_report (#79679) 2022-07-06 20:06:17 +00:00
_masked Use segment/scatter_reduce to support masked reductions on sparse CSR tensors (mean, amax, amin) (fp only) (#78918) 2022-06-30 14:11:53 +00:00
_prims [primTorch] Elementwise unary ops vi (#79526) 2022-07-08 15:17:45 +00:00
_refs Register unregistered refs and add a test to check registration (#80497) 2022-07-08 16:29:52 +00:00
_subclasses fix overload ambiguity with functional ops; fix _foreach op grouping (#80556) 2022-07-06 12:45:11 +00:00
amp Add __all__ to torch.nn.quantized, fx.passes, ao.nn and amp submodules (#80376) 2022-06-27 21:36:27 +00:00
ao [Quant][fx][bc-breaking] Do not move models to CPU in convert (#80555) 2022-07-08 19:23:57 +00:00
autograd Revert "Enable Intel® VTune™ Profiler's Instrumentation and Tracing Technology APIs (ITT) to PyTorch (#63289)" 2022-06-30 12:49:41 +00:00
backends [cuDNN V8 API] (reopen 2) Allow the number of kernels profiled under torch.backends.cudnn.benchmark = True to be limited (#78299) 2022-07-07 23:25:23 +00:00
contrib
cpu
csrc Fix retains grad behavior after in-place (#79996) 2022-07-08 19:13:28 +00:00
cuda
distributed Add __all__ to torch.distributed, futures, fx, nn, package, benchmark submodules (#80520) 2022-07-08 14:31:24 +00:00
distributions More stable computation of KL between two Bernoulli distributions (#79944) 2022-06-27 21:31:45 +00:00
fft
futures Add __all__ to torch.distributed, futures, fx, nn, package, benchmark submodules (#80520) 2022-07-08 14:31:24 +00:00
fx Prims+NvFuser Backend Prototype (#80591) 2022-07-08 19:53:03 +00:00
jit Fixing the torch.jit.freeze docs (#81020) 2022-07-07 20:33:26 +00:00
legacy
lib
linalg [Array API] Add linalg.vecdot (#70542) 2022-07-08 15:37:58 +00:00
monitor
multiprocessing Weak-ref-ify MetaConverter and FakeTensorConverter (#80544) 2022-06-29 23:36:35 +00:00
nested
nn Add __all__ to torch.distributed, futures, fx, nn, package, benchmark submodules (#80520) 2022-07-08 14:31:24 +00:00
onnx [onnx] Add argsort support (#80234) 2022-07-07 22:06:29 +00:00
optim Revert "Adding maximize to ASGD (#80323)" 2022-07-08 13:35:31 +00:00
package Add __all__ to torch.distributed, futures, fx, nn, package, benchmark submodules (#80520) 2022-07-08 14:31:24 +00:00
profiler [Profiler] Add Pattern that detects extra cuda copy (#80572) 2022-07-07 20:22:42 +00:00
quantization
sparse Add spdiags sparse matrix initialization (#78439) 2022-07-01 01:11:54 +00:00
special torch.special.scaled_modified_bessel_k0 (#78900) 2022-06-29 14:53:37 +00:00
testing Disable functorch modes in testing's freeze_rng_state(), part 2 (#81109) 2022-07-08 20:18:56 +00:00
utils [DataLoader] Locking lower ranks seed recipients (#81071) 2022-07-08 18:53:45 +00:00
__config__.py
__future__.py
__init__.py
_appdirs.py
_classes.py
_deploy.py
_jit_internal.py
_linalg_utils.py
_lobpcg.py
_lowrank.py
_meta_registrations.py Add support for multiple inputs to out_wrapper and strict dtype checking (#80601) 2022-07-05 12:31:21 +00:00
_namedtensor_internals.py
_ops.py
_python_dispatcher.py
_six.py
_sources.py
_storage_docs.py
_tensor.py Remove split functional wrapper (#74727) 2022-07-08 19:21:22 +00:00
_tensor_docs.py Remove split functional wrapper (#74727) 2022-07-08 19:21:22 +00:00
_tensor_str.py
_torch_docs.py Remove split functional wrapper (#74727) 2022-07-08 19:21:22 +00:00
_utils.py
_utils_internal.py
_VF.py
_vmap_internals.py
abi-check.cpp
CMakeLists.txt Revert "[Profiler] Include ActivityType from Kineto (#80750)" 2022-07-08 05:16:56 +00:00
custom_class.h
custom_class_detail.h
deploy.h
extension.h
functional.py Remove split functional wrapper (#74727) 2022-07-08 19:21:22 +00:00
hub.py
library.h
library.py Add doc string for Library.impl (#81047) 2022-07-08 18:18:14 +00:00
overrides.py [Array API] Add linalg.vecdot (#70542) 2022-07-08 15:37:58 +00:00
py.typed
quasirandom.py
random.py
README.txt
return_types.py
script.h
serialization.py Avoid temporary buffers for tensors with torch.save. (#80404) 2022-06-30 00:19:42 +00:00
storage.py Fix Module.share_memory error (#80843) 2022-07-05 15:17:36 +00:00
torch_version.py Add __all__ to torch.distributed, futures, fx, nn, package, benchmark submodules (#80520) 2022-07-08 14:31:24 +00:00
types.py

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some .hpp headers, which are proper C++ headers rather
than C headers.  These headers serve double duty as *internal
implementation detail* headers, whose contents should largely not be
used by external clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.