Mirror of https://github.com/saymrwulf/pytorch.git, synced 2026-05-15 21:00:47 +00:00
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35941

The key step of mobile custom build is finding the ops used by a specific model, so that a tailored build of optimal size can be produced. However, ops are not only called from a TorchScript model; they can also be called from C++ code directly, e.g. via torch::jit:: APIs. With static dispatch, ops called this way are statically linked into the client code. With dynamic dispatch, we need to obtain and keep these ops explicitly.

This PR improves the static code analyzer to dump ops that are called from visible C++ symbols matching a specific regex. This provides a mechanism to solve the custom build problem with dynamic dispatch. It starts by dumping ops that are callable from functions in the torch::jit namespace and including them in a custom build with dynamic dispatch. We can extend it to analyze custom code, to refine the set of relevant JIT APIs, etc. This is a preliminary version; its usability still needs to be improved for more general use.

Test Plan: Imported from OSS

Differential Revision: D20835166

Pulled By: ljk53

fbshipit-source-id: a87cfb22b34f89545edd0674a5dfca6b7cff2b0c
| Name |
|---|
| analyzer.cpp |
| build.sh |
| CMakeLists.txt |
| gen_op_registration_whitelist.py |
| op_dependency.cpp |
| run_analyzer.sh |
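The core idea described in the summary — starting from visible C++ symbols whose names match a regex (e.g. everything in the torch::jit namespace) and transitively collecting the ops they can reach, to produce an op registration whitelist — can be sketched in Python. This is a minimal illustration only, not the actual logic of `gen_op_registration_whitelist.py`: the dependency graph `DEPS`, the helper names `is_op` and `collect_ops`, and the namespace heuristic are all hypothetical.

```python
import re

# Hypothetical dependency graph produced by the static analyzer:
# each symbol maps to the symbols and ops it calls (illustrative data only).
DEPS = {
    "torch::jit::load": ["torch::jit::import_ir", "aten::empty"],
    "torch::jit::import_ir": ["aten::to"],
    "some::other::fn": ["aten::mul"],
}

def is_op(name):
    # Assumption for this sketch: names in these namespaces are operators.
    return name.split("::")[0] in ("aten", "quantized", "prim")

def collect_ops(deps, root_regex):
    """Transitively collect ops reachable from symbols matching root_regex."""
    pattern = re.compile(root_regex)
    roots = [sym for sym in deps if pattern.match(sym)]
    seen, ops = set(), set()
    stack = list(roots)
    while stack:
        sym = stack.pop()
        if sym in seen:
            continue
        seen.add(sym)
        for callee in deps.get(sym, []):
            if is_op(callee):
                ops.add(callee)
            else:
                stack.append(callee)
    return sorted(ops)

# Dump ops callable from the torch::jit namespace, as the PR describes.
print(collect_ops(DEPS, r"^torch::jit::"))  # → ['aten::empty', 'aten::to']
```

Note that `aten::mul` is excluded: its only caller does not match the root regex, which is exactly how a tailored build keeps unused ops out of the whitelist.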