pytorch/caffe2/python
Edward Yang 54d9823d00 Make caffe2::Tensor::dims() return an IntList instead of a const vector& (#12180)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12180

I had to fix a lot of call sites, because many places assume that
you can actually get a const vector&, and if the internal representation
of sizes in a tensor is NOT a vector, it's not possible to fulfill
this API contract.
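
To make that concrete, here's a rough sketch of why a view type is the
only workable return type (illustrative names, not the actual TensorImpl;
the exact ArrayRef header path may vary by version):

  #include <cstddef>
  #include <cstdint>
  #include <vector>
  #include <ATen/core/ArrayRef.h>  // at::IntList is an alias for at::ArrayRef<int64_t>

  // Illustrative only: a hypothetical TensorImpl that stores its sizes
  // inline rather than in a std::vector.
  struct TensorImplSketch {
    int64_t inline_sizes_[5];
    std::size_t ndim_;

    // The old signature cannot be fulfilled here: there is no
    // std::vector anywhere to return a reference to.
    //   const std::vector<int64_t>& dims() const;

    // A non-owning view works over any contiguous storage.
    at::IntList sizes() const {
      return at::IntList(inline_sizes_, ndim_);
    }
  };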

Framework changes:
- I deleted TensorImpl::dims(); caffe2::Tensor::dims() just forwards to
  sizes() now.
- De-templatized SetDims; now it is an explicit list of ArrayRef and
  variadic overloads (see the sketch after this list).  This makes
  implicit conversions work again, so I don't need to explicitly list
  the std::vector cases too.
  - As a knock-on effect, this causes Reset() to accept at::IntList as well as
    const std::vector<int64_t>&
- Edited variadic overloads of SetDims to all forward to the underlying
  arbitrary-dim implementation, reducing code duplication. (It's probably
  marginally less efficient in the new world.)
- Replaced the Tensor constructor accepting const std::vector<int64_t>&
  with one accepting at::IntList
- Made MKLTensor accept ArrayRef along with vector in its constructor and
  Reset() (unfortunately, no implicit conversions here, since it's
  templated on the index type)
- There are a few other places, like cudnn, where I changed functions
  that previously took const std::vector<int64_t>& to take at::IntList
  instead.
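
Roughly, the new shape of SetDims (a simplified sketch, not the real
code; IntListSketch is a stand-in mimicking at::ArrayRef<int64_t>):

  #include <cstddef>
  #include <cstdint>
  #include <initializer_list>
  #include <vector>

  // Minimal stand-in for at::IntList (at::ArrayRef<int64_t>): a
  // non-owning view over contiguous int64_t storage.
  struct IntListSketch {
    const int64_t* data;
    std::size_t len;
    IntListSketch(const int64_t* d, std::size_t n) : data(d), len(n) {}
    /* implicit */ IntListSketch(const std::vector<int64_t>& v)
        : data(v.data()), len(v.size()) {}
    /* implicit */ IntListSketch(std::initializer_list<int64_t> il)
        : data(il.begin()), len(il.size()) {}
  };

  struct TensorSketch {
    std::vector<int64_t> sizes_;

    // One arbitrary-dim implementation taking the ArrayRef.  Because
    // the view converts implicitly from std::vector, no separate
    // std::vector overload is needed anymore.
    void SetDims(IntListSketch src) {
      sizes_.assign(src.data, src.data + src.len);
    }

    // The fixed-arity overloads all forward to the ArrayRef version:
    // marginally less efficient, much less code duplication.
    void SetDims(int64_t d0) { SetDims(IntListSketch({d0})); }
    void SetDims(int64_t d0, int64_t d1) {
      SetDims(IntListSketch({d0, d1}));
    }
  };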

Classification of call site changes:
- 'const std::vector<int64_t>& x_dims = x.dims()' ==>
  'at::IntList x_dims = x.dims()'
- 'std::vector<int64_t> x_dims = x.dims()' ==>
  'std::vector<int64_t> x_dims = x.dims().vec()' (we need a copy!)
  Usually this is because we're about to modify the vector in place
  to compute some new dimension.  However, it also very commonly occurs
  in the form 'x_dims_ = x.dims()', because we frequently cache sizes
  in operators.
- Instead of constructing std::vector<int64_t>{blah, blah}, construct an
  at::IntList directly (all three patterns are sketched below)
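
Concretely (FakeTensor and Resize here are hypothetical stand-ins, not
call sites from this diff; the exact ArrayRef header path may vary):

  #include <cstdint>
  #include <vector>
  #include <ATen/core/ArrayRef.h>  // at::IntList is an alias for at::ArrayRef<int64_t>

  // Hypothetical stub standing in for an operator's input tensor.
  struct FakeTensor {
    std::vector<int64_t> sizes_;
    at::IntList dims() const { return sizes_; }  // a view, not a copy
  };

  void Resize(at::IntList /*new_dims*/) { /* stand-in for Tensor::Resize */ }

  int main() {
    FakeTensor X{{2, 3, 4}};
    std::vector<int64_t> x_dims_;  // stands in for an operator's cached-sizes member

    // Pattern 1: read-only use binds the view directly; no copy is made.
    at::IntList x_dims = X.dims();

    // Pattern 2: mutating or caching requires an explicit copy via .vec().
    std::vector<int64_t> new_dims = X.dims().vec();
    new_dims.push_back(1);     // safe: we own this copy
    x_dims_ = X.dims().vec();  // caching sizes also takes the copy

    // Pattern 3: a literal dimension list constructs the IntList
    // directly, with no intermediate std::vector<int64_t>.
    Resize({x_dims[0], 4, 4});
  }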

ArrayRef changes:
- Added cbegin()/cend() iterators; they operate the same as begin()/end()
  because everything on ArrayRef is const.
- Moved operator<< into ArrayRef.h, so that it's always available when
  working with ArrayRef.  I also templated it, so it now works on an
  ArrayRef of any type.
- Add operator== overload for ArrayRef, and also add variants to permit
  comparison of ArrayRef with std::vector, a very common operation.
  (The non-templated version of operator== can get these automatically
  via implicit conversion, but with templates C++ refuses to do
  any implicit conversions.)  All three additions are exercised in the
  sketch below.
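
A quick usage sketch of these additions (assuming the ATen ArrayRef
header; path may vary by version):

  #include <cstdint>
  #include <iostream>
  #include <vector>
  #include <ATen/core/ArrayRef.h>

  int main() {
    std::vector<int64_t> v = {2, 3, 4};
    at::ArrayRef<int64_t> ref(v);

    // The templated operator<< now ships with ArrayRef.h, so any
    // element type streams directly.
    std::cout << ref << "\n";

    // The operator== variants permit the common ArrayRef-vs-vector
    // comparison (shown here in both argument orders).
    bool same = (ref == v) && (v == ref);

    // cbegin()/cend() behave exactly like begin()/end(), since
    // everything on ArrayRef is const anyway.
    int64_t sum = 0;
    for (auto it = ref.cbegin(); it != ref.cend(); ++it) {
      sum += *it;
    }
    std::cout << same << " " << sum << "\n";
  }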

I'm planning to audit all dims() call sites to make sure they don't
expect 'auto x = t.dims()' to give you an x whose lifetime can validly
outlive the tensor.
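
The kind of bug that audit is looking for, illustratively:

  #include <cstdint>
  #include <vector>
  #include <ATen/core/ArrayRef.h>

  // BUG (illustrative): the returned view points into storage that is
  // destroyed when this function returns.  'auto x = t.dims()' has the
  // same problem whenever x outlives t.
  at::IntList dangling_dims() {
    std::vector<int64_t> local_sizes = {2, 3};  // stands in for a local tensor
    return at::IntList(local_sizes);
  }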

I opted not to do a dims() to sizes() rename, because dims() also matches
the protobuf accessor. Bad news!

Reviewed By: jerryzh168

Differential Revision: D10111759

fbshipit-source-id: a2a81dc4b92c22ad4b3b8ef4077a7e97b6479452
2018-10-05 15:57:41 -07:00
docs
examples Add resnext model to OSS (#11468) 2018-09-12 15:59:20 -07:00
helpers
ideep Implementation MomentumSGD/MomentumSGDUpdate operators for mkl-dnn (#11686) 2018-09-27 13:39:59 -07:00
layers Support FP16 sparse lookup (#11674) 2018-09-14 02:40:08 -07:00
mint move flags to c10 (#12144) 2018-10-04 02:09:56 -07:00
mkl enable_mkl support for resnet18+lstm model 2018-09-12 18:56:46 -07:00
modeling diagnose option: get_entry to print a whole row (#11308) 2018-09-06 21:26:30 -07:00
models Add resnext model to OSS (#11468) 2018-09-12 15:59:20 -07:00
onnx Make if block also take control_inputs, preserve SSA (#12224) 2018-10-03 14:29:01 -07:00
operator_test Back out "Revert D10123245: Back out "codemod cuda_gpu_id to device_id"" (#12232) 2018-10-01 21:54:52 -07:00
predictor Back out "Revert D10123245: Back out "codemod cuda_gpu_id to device_id"" (#12232) 2018-10-01 21:54:52 -07:00
rnn
serialized_test Use tempfile during serialized test comparison (#12021) 2018-09-25 20:55:45 -07:00
test Update from Facebook (#8887) 2018-06-26 14:55:48 -07:00
trt ONNXIFI transform (#9569) 2018-07-20 15:09:59 -07:00
__init__.py caffe2::DeviceType -> at::DeviceType (#11254) 2018-09-05 16:28:09 -07:00
_import_c_extension.py Completely remove build_aten and use_aten (#10469) 2018-08-20 20:26:42 -07:00
allcompare_test.py
attention.py
benchmark_generator.py
binarysize.py
brew.py
brew_test.py Move tanh function to math (#9328) 2018-07-11 13:59:50 -07:00
build.py
cached_reader.py Update from Facebook (#8887) 2018-06-26 14:55:48 -07:00
caffe_translator.py
caffe_translator_test.py Remove caffe1 specific proto (#10380) 2018-08-10 11:10:26 -07:00
checkpoint.py Create class constant for string literal 'blob_names' 2018-08-24 22:11:43 -07:00
checkpoint_test.py Revert D9566744: [New Checkpoint] Kill the dummy TaskOutput when task.get_step() (#11164) 2018-08-31 22:25:57 -07:00
CMakeLists.txt Add nomnigraph bindings 2018-08-28 12:40:16 -07:00
cnn.py Back out "Revert D10123245: Back out "codemod cuda_gpu_id to device_id"" (#12232) 2018-10-01 21:54:52 -07:00
compatibility.py migrating deprecated calls without abc module for containers (#11515) 2018-09-13 15:09:22 -07:00
context.py Resolve name conflict of ContextManager (#7244) 2018-06-22 00:41:51 -04:00
context_test.py
control.py
control_ops_grad.py
control_ops_util.py
control_test.py
convert.py Revert D10098106: [pytorch][PR] [WIP] New version of PT1 model format 2018-10-02 00:43:40 -07:00
convert_test.py Revert D10098106: [pytorch][PR] [WIP] New version of PT1 model format 2018-10-02 00:43:40 -07:00
convnet_benchmarks.py
convnet_benchmarks_test.py
core.py Back out "Revert D10123245: Back out "codemod cuda_gpu_id to device_id"" (#12232) 2018-10-01 21:54:52 -07:00
core_gradients_test.py
core_test.py Back out "Revert D10123245: Back out "codemod cuda_gpu_id to device_id"" (#12232) 2018-10-01 21:54:52 -07:00
crf.py Productionize CRF layer in PyText (#10362) 2018-08-22 00:25:26 -07:00
crf_predict.py Move crf in caffe2 from fb to oss (#12200) 2018-10-01 18:31:41 -07:00
crf_viterbi_test.py Move crf in caffe2 from fb to oss (#12200) 2018-10-01 18:31:41 -07:00
data_parallel_model.py Back out "Revert D10123245: Back out "codemod cuda_gpu_id to device_id"" (#12232) 2018-10-01 21:54:52 -07:00
data_parallel_model_test.py Disable more flaky tests on CircleCI (#11399) 2018-09-25 10:25:30 -07:00
data_workers.py Fixed log message (#10874) 2018-09-05 09:55:52 -07:00
data_workers_test.py
dataio.py Fixing stop condition on composite reader (#9888) 2018-08-20 03:02:20 -07:00
dataio_test.py Update from Facebook (#8887) 2018-06-26 14:55:48 -07:00
dataset.py
db_file_reader.py Update from Facebook (#8887) 2018-06-26 14:55:48 -07:00
db_test.py
device_checker.py
dlpack.h
dyndep.py
embedding_generation_benchmark.py
experiment_util.py
extension_loader.py Completely remove build_aten and use_aten (#10469) 2018-08-20 20:26:42 -07:00
functional.py Caffe2 Functional enforcing inplace output (#10797) 2018-08-23 22:42:47 -07:00
functional_test.py Add support for specifying device_option in Functional (#9619) 2018-07-24 14:41:59 -07:00
fused_8bit_rowwise_conversion_ops_test.py
gradient_check_test.py [Caffe2] Fix gradient_check on in-place ops (#8828) 2018-06-25 15:25:56 -07:00
gradient_checker.py framework for committed serialized tests (#10594) 2018-08-30 22:41:46 -07:00
gru_cell.py
hsm_util.py
hypothesis_test.py 64B align for avx512 (#11748) 2018-09-17 14:08:31 -07:00
hypothesis_test_util.py Back out "Revert D10123245: Back out "codemod cuda_gpu_id to device_id"" (#12232) 2018-10-01 21:54:52 -07:00
ideep_test_util.py
layer_model_helper.py parallize the dense part in event models 2018-08-22 22:40:07 -07:00
layer_model_instantiator.py
layer_parameter_sharing_test.py
layer_test_util.py
layers_test.py fbshipit-source-id: ba600fcd2b5cefc7621357bdeb05e24cea02e5af 2018-06-27 04:50:56 -07:00
lengths_reducer_fused_8bit_rowwise_ops_test.py
lengths_reducer_rowwise_8bit_ops_test.py
lstm_benchmark.py
memonger.py
memonger_test.py
mkl_test_util.py
model_device_test.py
model_helper.py Back out "Revert D10123245: Back out "codemod cuda_gpu_id to device_id"" (#12232) 2018-10-01 21:54:52 -07:00
model_helper_test.py keep net type info when generating model complete net (#11032) 2018-09-04 21:10:06 -07:00
modifier_context.py
mpi_python.cc
muji.py Back out "Revert D10123245: Back out "codemod cuda_gpu_id to device_id"" (#12232) 2018-10-01 21:54:52 -07:00
muji_test.py
net_builder.py
net_builder_test.py
net_drawer.py
net_printer.py Back out "Revert D10123245: Back out "codemod cuda_gpu_id to device_id"" (#12232) 2018-10-01 21:54:52 -07:00
net_printer_test.py
nomnigraph.py Add successor/predecessor functions 2018-09-18 12:27:06 -07:00
nomnigraph_test.py Add distributed annotations 2018-09-21 19:09:59 -07:00
normalizer.py Update from Facebook (#8887) 2018-06-26 14:55:48 -07:00
normalizer_context.py Update from Facebook (#8887) 2018-06-26 14:55:48 -07:00
normalizer_test.py Update from Facebook (#8887) 2018-06-26 14:55:48 -07:00
numa_benchmark.py
numa_test.py Back out "Revert D10123245: Back out "codemod cuda_gpu_id to device_id"" (#12232) 2018-10-01 21:54:52 -07:00
observer_test.py
optimizer.py Back out "Revert D10123245: Back out "codemod cuda_gpu_id to device_id"" (#12232) 2018-10-01 21:54:52 -07:00
optimizer_context.py
optimizer_test.py Add Adadelta optimizer to caffe2 (#9088) 2018-07-24 20:09:21 -07:00
optimizer_test_util.py Implementation of Wngrad optimizer caffe2 python wrapper and unit test on least square regression (#9001) 2018-07-13 18:54:52 -07:00
parallel_workers.py
parallel_workers_test.py
parallelize_bmuf_distributed_test.py
pipeline.py SNNTest with Data Preproc Service (#11707) 2018-09-17 21:25:49 -07:00
pipeline_test.py
predictor_constants.py
pybind_state.cc Make caffe2::Tensor::dims() return an IntList instead of a const vector& (#12180) 2018-10-05 15:57:41 -07:00
pybind_state.h Guard NumPy usage using USE_NUMPY (#11798) 2018-10-04 12:11:02 -07:00
pybind_state_dlpack.cc codemod: caffe::float16 -> at::Half (#11785) 2018-09-20 18:55:19 -07:00
pybind_state_dlpack.h Back out "Revert D10123245: Back out "codemod cuda_gpu_id to device_id"" (#12232) 2018-10-01 21:54:52 -07:00
pybind_state_gpu.cc Remove many caffe2::TIndex and replace them with int64_t (#11943) 2018-09-22 18:11:04 -07:00
pybind_state_hip.cc Remove many caffe2::TIndex and replace them with int64_t (#11943) 2018-09-22 18:11:04 -07:00
pybind_state_ideep.cc Guard NumPy usage using USE_NUMPY (#11798) 2018-10-04 12:11:02 -07:00
pybind_state_int8.cc Guard NumPy usage using USE_NUMPY (#11798) 2018-10-04 12:11:02 -07:00
pybind_state_mkl.cc Guard NumPy usage using USE_NUMPY (#11798) 2018-10-04 12:11:02 -07:00
pybind_state_nomni.cc Add distributed annotations 2018-09-21 19:09:59 -07:00
pybind_state_registry.cc Move registry fully to c10 (#12077) 2018-09-27 03:09:54 -07:00
pybind_state_registry.h Move registry fully to c10 (#12077) 2018-09-27 03:09:54 -07:00
python_op_test.py
queue_util.py
record_queue.py
recurrent.py
regularizer.py Add GroupL1Norm regularizer (#9115) 2018-07-06 13:26:09 -07:00
regularizer_context.py Update from Facebook (#8887) 2018-06-26 14:55:48 -07:00
regularizer_test.py Add GroupL1Norm regularizer (#9115) 2018-07-06 13:26:09 -07:00
rnn_cell.py
schema.py Add util function from core type to dtype (#10716) 2018-08-21 10:55:19 -07:00
schema_test.py Add util function from core type to dtype (#10716) 2018-08-21 10:55:19 -07:00
scope.py Update from Facebook (#8887) 2018-06-26 14:55:48 -07:00
scope_test.py Update from Facebook (#8887) 2018-06-26 14:55:48 -07:00
session.py
session_test.py
sparse_to_dense_mask_test.py
sparse_to_dense_test.py
task.py Revert D9566744: [New Checkpoint] Kill the dummy TaskOutput when task.get_step() (#11164) 2018-08-31 22:25:57 -07:00
test_util.py nomnigraph - easy - some code cleanup for transformations_test (#12101) 2018-10-01 11:31:08 -07:00
text_file_reader.py
timeout_guard.py
toy_regression_test.py
transformations.py Enable Conv fusion optimizations in optimizeForIdeep (#9255) 2018-07-16 21:28:50 -07:00
transformations_test.py nomnigraph - easy - some code cleanup for transformations_test (#12101) 2018-10-01 11:31:08 -07:00
tt_core.py
tt_core_test.py
utils.py migrating deprecated calls without abc module for containers (#11515) 2018-09-13 15:09:22 -07:00
visualize.py
workspace.py Revert D10098106: [pytorch][PR] [WIP] New version of PT1 model format 2018-10-02 00:43:40 -07:00
workspace_test.py Add workspace.RunPlanInBackground (#9637) 2018-07-20 14:56:12 -07:00