pytorch/caffe2
BowenBao 8726f08e15 [ONNX] Update documentation (#58712) (#60249)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60249

* Add introductory paragraph explaining what ONNX is and what the
  torch.onnx module does.
* In "Tracing vs Scripting" and doc-string for torch.onnx.export(),
  clarify that exporting always happens on ScriptModules and that
  tracing and scripting are the two ways to produce a ScriptModule.
* Remove examples of using Caffe2 to run exported models.
  Caffe2's website says it's deprecated, so it's probably best not to
  encourage people to use it by including it in examples.
* Remove a lot of content that's redundant:
  * The example of how to mix tracing and scripting, and instead
    link to Introduction to TorchScript, which includes very similar
    content.
  * "Type annotations" section. Link to TorchScript docs which explain
    that in more detail.
  * "Using dictionaries to handle Named Arguments as model inputs"
    section. It's redundant with the description of the `args` argument
    to `export()`, which appears on the same page once the HTML
    is generated.
  * Remove the list of supported Tensor indexing patterns. If it's not
    in the list of unsupported patterns, users can assume it's
    supported, so having both is redundant.
  * Remove the list of supported operators and models.
    I think the list of supported operators is not very useful.
    A list of supported model architectures may be useful, but in
    reality it's already very out of date. We should add it back if
    / when we have a system for keeping it up to date.
  * "Operator Export Type" section. It's redundant with the description
    of the `operator_export_type` arg to `export()`, which appears on
    the same page once the HTML is generated.
  * "Use external data format" section. It's redundant with the
    description of the `use_external_data_format` arg to `export()`.
  * "Training" section. It's redundant with the
    description of the `training` arg to `export()`.
* Move the content about different operator implementations producing
  different results from the "Limitations" section into the doc for the
  `operator_export_type` arg.
* Document "quantized" -> "caffe2" behavior of
  OperatorExportTypes.ONNX_ATEN_FALLBACK.
* Combine the text about using torch.Tensor.item() and the text about
  using NumPy types into a section titled
  "Avoid NumPy and built-in Python types", since they're both
  fundamentally about the same issue.
* Rename "Write PyTorch model in Torch way" to "Avoiding Pitfalls".
* Lots of minor fixes: spelling, grammar, brevity, fixing links, adding
  links.
* Clarify limitation on input and output types. Phrasing it in terms of
  PyTorch types is much more accessible than in terms of TorchScript
  types. Also clarify what actually happens when dict and str are used
  as inputs and outputs.
* In Supported operators, use torch function and class names and link
  to them. This is more user-friendly than using the internal ATen
  op names.
* Remove references to VariableType.h, which doesn't appear to contain
  the information that it once did. Instead refer to the generated
  .pyi files.
* Remove the text in the FAQ about appending to lists within loops.
  I think this limitation is no longer present
  (perhaps since https://github.com/pytorch/pytorch/pull/51577).
* Minor fixes to some code I read along the way.
* Explain the current rationale for the weird ::prim_PythonOp op name.
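The pitfall behind the new "Avoid NumPy and built-in Python types" section can be sketched as follows. This is a minimal illustration rather than code from the docs themselves; the module names are made up. Calling `.item()` produces a Python number, which the tracer records as a fixed constant, while staying in Tensor operations keeps the value dynamic:

```python
import torch

class BakesConstant(torch.nn.Module):
    # Hypothetical module: .item() yields a Python float, which tracing
    # freezes into the graph as a constant instead of a computed value.
    def forward(self, x):
        return torch.full((1,), x.mean().item())

class StaysDynamic(torch.nn.Module):
    # Keeping the value as a Tensor keeps it dynamic in the traced graph.
    def forward(self, x):
        return x.mean().reshape(1)

example = torch.tensor([1.0, 2.0])
traced_bad = torch.jit.trace(BakesConstant(), example)
traced_good = torch.jit.trace(StaysDynamic(), example)

fresh = torch.tensor([10.0, 20.0])
print(traced_bad(fresh))   # still the mean baked in at trace time
print(traced_good(fresh))  # reflects the new input
```

The same constant-folding would carry through `torch.onnx.export()`, which is why the docs steer users toward Tensor operations (or scripting) for values that must stay data-dependent.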

Test Plan: Imported from OSS

Reviewed By: zou3519, ZolotukhinM

Differential Revision: D29494912

Pulled By: SplitInfinity

fbshipit-source-id: 7756c010b2320de0692369289604403d28877719

Co-authored-by: Gary Miguel <garymiguel@microsoft.com>
2021-07-08 16:29:32 -07:00
contrib [ONNX] Update documentation (#58712) (#60249) 2021-07-08 16:29:32 -07:00
core [Caffe2][Testing] Check for equality first in assertTensorEqualsWithType<float> (#61006) 2021-06-29 23:31:37 -07:00
cuda_rtc
db [caffe2] update db::Transaction::Put() to accept the value by rvalue reference (#60208) 2021-06-23 22:12:53 -07:00
distributed remove unused type: ignore directives (#60006) 2021-06-18 07:23:31 -07:00
experiments mkl_scsrmm needs to be disabled when MKL is not used (#60051) 2021-06-30 10:40:18 -07:00
ideep External stream (#59527) 2021-06-14 13:46:11 -07:00
image
mobile
mpi
observers
onnx [ONNX] Update ONNX to rel-1.9 (#55889) (#57080) 2021-06-02 08:27:17 -07:00
operators [1/n]support double for Caffe2 ScatterWeightedSum (#60402) 2021-06-29 14:17:04 -07:00
opt [caffe2] Check for number of created subnets and optionally throw an error (#57366) 2021-07-08 14:29:03 -07:00
perfkernels [caffe2/utils] Add explicit rule to avoid package boundary violation (#60677) 2021-06-28 14:43:30 -07:00
predictor [caffe2] Fix include of corresponding header 2021-06-28 14:45:32 -07:00
proto Fix some typing issues (#59952) 2021-06-15 14:11:06 -07:00
python [1/n]support double for Caffe2 ScatterWeightedSum (#60402) 2021-06-29 14:17:04 -07:00
quantization Delete empty caffe2/quantization/CMakeLists.txt (#59717) 2021-06-09 14:20:33 -07:00
queue
serialize Enable implicit operator versioning via number of arguments (#58852) 2021-06-15 02:07:40 -07:00
sgd use explicitly non-returning GPU atomics (#60607) 2021-06-28 18:17:29 -07:00
share
test
transforms
utils add thrust/host_vector.h header for cuda 11.4 build (#61004) 2021-07-06 12:44:56 -07:00
video
.clang-format
__init__.py
c2_aten_srcs.bzl
CMakeLists.txt Fix breakpad build + add test canary (#60990) 2021-07-06 14:15:07 -07:00
README.md
release-notes.md
requirements.txt
VERSION_NUMBER

Caffe2

Caffe2 is a lightweight, modular, and scalable deep learning framework. Building on the original Caffe, Caffe2 is designed with expression, speed, and modularity in mind.

Questions and Feedback

Please use GitHub issues (https://github.com/pytorch/pytorch/issues) to ask questions, report bugs, and request new features.

Further Resources on Caffe2.ai