pytorch/torch
mattip 672ed3c06b replace onnx producer_version when updating results (#41910)
Summary:
xref gh-39002, which handled the reading but not the writing of the onnx expect files; the last comment in that PR points out that `XXX` was a suboptimal placeholder.
xref [this comment](https://github.com/pytorch/pytorch/pull/37091#discussion_r456460168), which pointed out the problem.

This PR:
- replaces `XXX` with `CURRENT_VERSION` in the stored files
- ensures that updating the results with the `--accept` flag will maintain the change
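The two directions of the replacement can be sketched as a pair of helpers (hypothetical names; the actual logic lives in the expect-file handling under `torch/testing`): writing with `--accept` masks the concrete producer_version with the placeholder, and reading substitutes the current version back in before comparison.

```python
PLACEHOLDER = 'producer_version: "CURRENT_VERSION"'

def mask_producer_version(expect_text: str, version: str) -> str:
    """When updating results with --accept, store the placeholder
    instead of the concrete onnx producer_version."""
    return expect_text.replace(f'producer_version: "{version}"', PLACEHOLDER)

def unmask_producer_version(expect_text: str, version: str) -> str:
    """When reading a stored expect file for comparison, substitute
    the current producer_version back in."""
    return expect_text.replace(PLACEHOLDER, f'producer_version: "{version}"')
```

Masking then unmasking with the same version round-trips, which is what keeps the stored files stable across onnx version bumps.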

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41910

Reviewed By: pbelevich

Differential Revision: D22758671

Pulled By: ezyang

fbshipit-source-id: 47c345c66740edfc8f0fb9ff358047a41e19b554
2020-07-28 08:15:01 -07:00
_C Add done() API to Future (#42013) 2020-07-24 14:13:41 -07:00
autograd typo fixes (#41632) 2020-07-20 07:23:00 -07:00
backends
contrib remediation of S205607 2020-07-17 17:19:47 -07:00
csrc Add suggestion to enumerate ModuleDict in error message (#41946) 2020-07-27 16:24:00 -07:00
cuda typo fixes (#41632) 2020-07-20 07:23:00 -07:00
distributed typo fixes (#41632) 2020-07-20 07:23:00 -07:00
distributions
for_onnx
futures Add done() API to Future (#42013) 2020-07-24 14:13:41 -07:00
jit DOC: split quantization.rst into smaller pieces (#41321) 2020-07-25 23:59:40 -07:00
legacy
lib Enable ProcessGroupGlooTest in CI (take 2) (#42086) 2020-07-27 10:21:59 -07:00
multiprocessing
nn Let DDP.train() return self to stay consistent with nn.Module (#42131) 2020-07-27 18:22:13 -07:00
onnx [ONNX] Add pass that fuses Conv and BatchNormalization (#40547) 2020-07-22 14:59:27 -07:00
optim Raise error for duplicate params in param group #40967 (#41597) 2020-07-27 12:25:52 -07:00
quantization Updates to Scale and Zero Point Gradient Calculation (#42034) 2020-07-27 11:18:49 -07:00
sparse typo fixes (#41632) 2020-07-20 07:23:00 -07:00
testing replace onnx producer_version when updating results (#41910) 2020-07-28 08:15:01 -07:00
utils [ModelLints] Refine dropout lint message. (#42046) 2020-07-27 18:15:30 -07:00
__config__.py
__future__.py
__init__.py Grammar Changes (#42076) 2020-07-26 13:53:41 -07:00
_appdirs.py
_classes.py
_jit_internal.py [1/N] Implement Enum JIT support (#41390) 2020-07-18 22:15:06 -07:00
_linalg_utils.py
_lobpcg.py typo fixes (#41632) 2020-07-20 07:23:00 -07:00
_lowrank.py
_namedtensor_internals.py
_ops.py
_overrides.py Add torch.movedim (#41480) 2020-07-23 09:41:01 -07:00
_six.py
_storage_docs.py
_tensor_docs.py Reland split (#41567) 2020-07-21 08:06:27 -07:00
_tensor_str.py
_torch_docs.py Add torch.movedim (#41480) 2020-07-23 09:41:01 -07:00
_utils.py
_utils_internal.py
_VF.py
_vmap_internals.py
abi-check.cpp
CMakeLists.txt Replace if(NOT ${var}) by if(NOT var) (#41924) 2020-07-23 15:49:20 -07:00
custom_class.h
custom_class_detail.h
extension.h
functional.py
hub.py typo fixes (#41632) 2020-07-20 07:23:00 -07:00
library.h
py.typed remediation of S205607 2020-07-17 17:19:47 -07:00
quasirandom.py
random.py
README.txt
script.h
serialization.py
storage.py
tensor.py
types.py

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some hpp headers, which are proper C++ headers rather than
C headers.  These headers serve double duty: they are installed alongside
the public C headers, but they are *internal implementation detail*
headers whose contents should largely not be used by external clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.