pytorch/caffe2
Jiakai Liu 72b0447f8d [pytorch] move tracing logic to a separate dispatch backend (#38467)
Summary:
This PR moves the tracing logic out of the generated VariableType kernels and associates it with a new dedicated dispatch key, Tracer.
It also toggles the dispatch key set at various places to keep the semantics unchanged; see the inline [Tracing Mode Switches] note.

Sample generated code:
```cpp
Tensor & __ilshift___Tensor(Tensor & self, const Tensor & other) {
  #if !defined(PYTORCH_DISABLE_TRACING)
  torch::jit::Node* node = nullptr;
  std::shared_ptr<jit::tracer::TracingState> tracer_state;
  if (jit::tracer::isTracing()) {
    tracer_state = jit::tracer::getTracingState();
    at::Symbol op_name;
    op_name = jit::Symbol::fromQualString("aten::__ilshift__");
    node = tracer_state->graph->create(op_name, /*num_outputs=*/0);
    jit::tracer::recordSourceLocation(node);
    jit::tracer::addInputs(node, "self", self);
    jit::tracer::addInputs(node, "other", other);
    tracer_state->graph->insertNode(node);

    jit::tracer::setTracingState(nullptr);
  }
  #endif
  static auto op = c10::Dispatcher::singleton().findSchemaOrThrow("aten::__ilshift__", "Tensor");
  c10::Dispatcher::singleton().redispatch<Tensor &, Tensor &, const Tensor &>(op, c10::DispatchKey::Tracer, self, other);
  #if !defined(PYTORCH_DISABLE_TRACING)
  if (tracer_state) {
    jit::tracer::setTracingState(std::move(tracer_state));
    jit::tracer::addOutput(node, self);
  }
  #endif
  return self;
}
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/38467

ghstack-source-id: 105215150

Test Plan: CI

Differential Revision: D21570684

fbshipit-source-id: 1a96761830307f9a934f38bfb9fe8b5b1763e0e0
2020-06-04 01:51:30 -07:00

Caffe2


Caffe2 is a lightweight, modular, and scalable deep learning framework. Building on the original Caffe, Caffe2 is designed with expression, speed, and modularity in mind.

Questions and Feedback

Please use GitHub issues (https://github.com/pytorch/pytorch/issues) to ask questions, report bugs, and request new features.

Further Resources on Caffe2.ai