#include <test/cpp/jit/test_utils.h>

#include <gtest/gtest.h>

#include <c10/core/TensorOptions.h>
#include <torch/csrc/autograd/generated/variable_factories.h>
#include <torch/csrc/jit/api/module.h>
#include <torch/csrc/jit/frontend/resolver.h>
#include <torch/csrc/jit/mobile/backport.h>
#include <torch/csrc/jit/mobile/backport_manager.h>
#include <torch/csrc/jit/mobile/import.h>
#include <torch/csrc/jit/mobile/model_compatibility.h>
#include <torch/csrc/jit/mobile/module.h>
#include <torch/csrc/jit/mobile/runtime_compatibility.h>
#include <torch/csrc/jit/serialization/export.h>
#include <torch/csrc/jit/serialization/import.h>
#include <torch/custom_class.h>
#include <torch/torch.h>

#include <unordered_set>
// Tests go in torch::jit
namespace torch {
namespace jit {
// NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables)
TEST(LiteInterpreterTest, UpsampleNearest2d) {
  Module m("m");
  m.define(R"(
    def forward(self, input: Tensor, scale:float):
      return torch.upsample_nearest2d(input, [1, 1], float(scale), float(scale))
  )");

  std::vector<IValue> inputs;
  inputs.emplace_back(torch::rand({1, 3, 128, 128}));
  inputs.emplace_back(at::Scalar(2.0));
  auto ref = m.forward(inputs);
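  // Round-trip through the mobile (bytecode) format and check that the lite
  // interpreter reproduces the full JIT result.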
  std::stringstream ss;
  m._save_for_mobile(ss);
  mobile::Module bc = _load_for_mobile(ss);
  IValue res;
  res = bc.forward(inputs);

  auto resd = res.toTensor();
  auto refd = ref.toTensor();
  ASSERT_TRUE(resd.equal(refd));
}
// NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables)
TEST(LiteInterpreterTest, CheckAttrAccess) {
  Module m("m");
  m.register_attribute("mobile_optimized", BoolType::get(), true);
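  // The attribute should survive a save/load round trip through the mobile
  // format.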
  std::stringstream ss;
  m._save_for_mobile(ss);
  mobile::Module bc = _load_for_mobile(ss);
  bool mobile_optimized = bc.attr("mobile_optimized", false).toBool();

  AT_ASSERT(mobile_optimized);
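  // Flip the attribute, re-export, and confirm the reloaded module observes
  // the new value.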
  m.setattr("mobile_optimized", false);
  ss = std::stringstream();
  m._save_for_mobile(ss);
  bc = _load_for_mobile(ss);
  mobile_optimized = bc.attr("mobile_optimized", false).toBool();

  AT_ASSERT(!mobile_optimized);
}
TEST(LiteInterpreterTest, MethodInvocation) { // NOLINT (use =delete in gtest)
  const std::vector<std::string> test_programs{
      // test invoking a method with default parameter
      R"(
          def test_func(self, x, b : int = 4):
            return self.foo + x + b
          )",
      // inner method call with default parameter (gets inlined)
      R"(
          def add_with_default_arg(self, x, b : int = 4):
            return self.foo + x + b
          def test_func(self, x):
            return self.add_with_default_arg(x) # invoke method w/ default arg
          )",
      // simple method call
      R"(
          def test_func(self, x):
            b = 4
            return self.foo + x + b
          )",
  };
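  // Each program is compiled into a fresh module, exported in the mobile
  // format, and the lite interpreter result is compared against the full JIT
  // result.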
  for (const auto& test_program : test_programs) {
    Module m("m");
    m.register_parameter("foo", torch::ones({}), false);
    m.define(test_program);

    const int fortyTwo = 42; // (keep linter happy)
    auto minput = fortyTwo * torch::ones({});
    auto ref = m.run_method("test_func", minput);

    std::stringstream ss;
    m._save_for_mobile(ss);
    mobile::Module bc = _load_for_mobile(ss);
    const auto& test_func = bc.get_method("test_func");
    IValue res;
    for (int i = 0; i < 3; ++i) {
      res = test_func({minput});
    }

    auto resd = res.toTensor().item<float>();
    auto refd = ref.toTensor().item<float>();
    AT_ASSERT(resd == refd);
  }
}
// NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables)
TEST(LiteInterpreterTest, Conv) {
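  // Skip when running under TSAN (PYTORCH_TEST_WITH_TSAN=1).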
  auto s = std::getenv("PYTORCH_TEST_WITH_TSAN");
  if (s && strcmp(s, "1") == 0)
    return;

  std::vector<torch::jit::IValue> inputs;

  Module m("m");
  m.register_parameter("weight", torch::ones({20, 1, 5, 5}), false);
  m.register_parameter("bias", torch::ones({20}), false);
  m.define(R"(
    def forward(self, input):
      return torch._convolution(input, self.weight, self.bias, [1, 1], [0, 0], [1, 1], False, [0, 0], 1, False, False, True, True)
  )");

  // NOLINTNEXTLINE(cppcoreguidelines-avoid-magic-numbers,modernize-use-emplace)
  inputs.push_back(torch::ones({1, 1, 28, 28}));

  auto outputref = m.forward(inputs).toTensor();
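  // Export to the mobile format, reload, and run forward a few times; the
  // last result is compared element-wise against the reference below.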
  std::stringstream ss;
  m._save_for_mobile(ss);
  mobile::Module bc = _load_for_mobile(ss);
  IValue res;
  for (int i = 0; i < 3; ++i) {
    res = bc.get_method("forward")(inputs);
  }
  auto output = res.toTensor();
  AT_ASSERT(outputref.dim() == output.dim());
  AT_ASSERT(
      outputref[0][0][0][0].item<int>() == output[0][0][0][0].item<int>());
}
// NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables)
TEST(LiteInterpreterTest, Inline) {
  Module m("m");
  m.define(R"JIT(
  def foo1(self, x):
    return x + 1

  def foo2(self, x):
    return self.foo1(x) + 2

  def foo3(self, x):
    return self.foo2(x) + 3
  )JIT");
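  // foo3 calls foo2, which calls foo1; after the mobile round trip the nested
  // calls should still evaluate: foo3(1) == ((1 + 1) + 2) + 3 == 7.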
  std::stringstream ss;
  m._save_for_mobile(ss);
  mobile::Module bc = _load_for_mobile(ss);
  std::vector<torch::jit::IValue> inputs({torch::ones({})});
  auto output = bc.get_method("foo3")(inputs);
  AT_ASSERT(output.toTensor().item<float>() == 7.0);
}
// NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables)
TEST(LiteInterpreterTest, Tuple) {
  Module m("m");
  m.define(R"JIT(
  def foo(self, x):
    return (1, 2, x + 3)

  def forward(self, x):
    tuple = self.foo(x)
    return tuple
  )JIT");

  std::stringstream ss;
  m._save_for_mobile(ss);
  mobile::Module bc = _load_for_mobile(ss);
  std::vector<torch::jit::IValue> inputs({torch::ones({})});
  auto output = bc.get_method("forward")(inputs);
  AT_ASSERT(output.toTuple()->elements()[1].toInt() == 2);
}
// NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables)
TEST(LiteInterpreterTest, Dict) {
  Module m("m");
  m.define(R"JIT(
  def foo(self, x):
    return {"result": x + 1}

  def forward(self, x):
    d = self.foo(x)
    return d
  )JIT");
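  // forward returns a dict; the "result" entry should survive the mobile
  // round trip (ones + 1 == 2).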
  std::stringstream ss;
  m._save_for_mobile(ss);
  mobile::Module bc = _load_for_mobile(ss);
  std::vector<torch::jit::IValue> inputs({torch::ones({})});
  auto output = bc.get_method("forward")(inputs);
  AT_ASSERT(output.toGenericDict().at("result").toTensor().item().toInt() == 2);
}
// NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables)
TEST(LiteInterpreterTest, PrimOverload) {
  /*
  // temporarily disabled
  script::Module m("m");
  m.define(R"JIT(
  def forward(self, x):
    result = [1, 2]
    result.append(3)
    return result
  )JIT");

  std::stringstream ss;
  m._save_for_mobile(ss);
  mobile::Module bc = _load_for_mobile(ss);
  std::vector<torch::jit::IValue> inputs({torch::ones({})});
  auto output = bc.get_method("forward")(inputs);
  AT_ASSERT(output.toIntList()[2] == 3);
  */
}
// NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables)
TEST(LiteInterpreterTest, Prim) {
  Module m("m");
  m.define(R"JIT(
  def forward(self, x):
    return int(x)
  )JIT");
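  // Cast the scalar tensor to int; run the exported module through the lite
  // interpreter and compare against the full JIT result.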
  std::vector<IValue> inputs;
  auto minput = 3.5 * torch::ones({});
  inputs.emplace_back(minput);
  auto ref = m.run_method("forward", minput);

  std::stringstream ss;
  m._save_for_mobile(ss);
  mobile::Module bc = _load_for_mobile(ss);
  IValue res;
  for (int i = 0; i < 3; ++i) {
    // NOLINTNEXTLINE(performance-unnecessary-copy-initialization)
    auto bcinputs = inputs;
    res = bc.get_method("forward")(bcinputs);
  }

  auto resi = res.toInt();
  auto refi = ref.toInt();
  AT_ASSERT(resi == refi);
}
// NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables)
TEST(LiteInterpreterTest, PrimScalar) {
  Module m("m");
  m.define(R"JIT(
  def forward(self, x):
    return int(x.item())
  )JIT");
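  // Same flow as LiteInterpreterTest.Prim above, but the value goes through
  // x.item() (a Scalar) before the int() cast.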
  std::vector<IValue> inputs;
  auto minput = 3.5 * torch::ones({});
  inputs.emplace_back(minput);
  auto ref = m.run_method("forward", minput);

  std::stringstream ss;
  m._save_for_mobile(ss);
  mobile::Module bc = _load_for_mobile(ss);
  IValue res;
  for (int i = 0; i < 3; ++i) {
    // NOLINTNEXTLINE(performance-unnecessary-copy-initialization)
    auto bcinputs = inputs;
    res = bc.get_method("forward")(bcinputs);
  }

  auto resi = res.toInt();
  auto refi = ref.toInt();
  AT_ASSERT(resi == refi);
}
// NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables)
TEST(LiteInterpreterTest, LoadOrigJit) {
  Module m("m");
  m.register_parameter("foo", torch::ones({}), false);
  m.define(R"(
    def forward(self, x):
      b = 4
      return self.foo + x + b
  )");
  std::stringstream ss;
  m.save(ss);
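  // m.save() writes the regular TorchScript archive with no bytecode, so the
  // mobile loader should fail to find what it needs.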
  ASSERT_THROWS_WITH_MESSAGE(_load_for_mobile(ss), "file not found");
}
// NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables)
TEST(LiteInterpreterTest, WrongMethodName) {
  Module m("m");
  m.register_parameter("foo", torch::ones({}), false);
  m.define(R"(
    def add(self, x):
      b = 4
      return self.foo + x + b
  )");
  std::stringstream ss;
  m._save_for_mobile(ss);
  mobile::Module bc = _load_for_mobile(ss);
  std::vector<IValue> inputs;
  auto minput = 5 * torch::ones({});
  inputs.emplace_back(minput);
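  // The module only defines "add", so looking up "forward" should throw.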
  ASSERT_THROWS_WITH_MESSAGE(
      bc.get_method("forward")(inputs), "is not defined");
}
// NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables)
TEST(LiteInterpreterTest, SetState) {
  Module m("m");
  m.register_parameter("foo", torch::ones({}), false);
  m.define(R"(
    def __getstate__(self):
      return self.foo + self.foo
    def __setstate__(self, a):
      self.foo = a
    def forward(self, x):
      b = 4
      return self.foo + x + b
  )");
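  // __getstate__ doubles foo on export and __setstate__ stores it back, so
  // both the JIT reload and the mobile reload should observe the same value.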
  std::vector<IValue> inputs;
  auto minput = 5 * torch::ones({});
  inputs.emplace_back(minput);

  std::stringstream ms;
  m.save(ms);
  auto loaded_m = load(ms);
  auto ref = loaded_m.run_method("forward", minput);

  std::stringstream ss;
  m._save_for_mobile(ss);
  mobile::Module bc = _load_for_mobile(ss);
  IValue res;
  for (int i = 0; i < 3; ++i) {
    // NOLINTNEXTLINE(performance-unnecessary-copy-initialization)
    auto bcinputs = inputs;
    res = bc.get_method("forward")(bcinputs);
  }

  auto resd = res.toTensor().item<float>();
  auto refd = ref.toTensor().item<float>();
  AT_ASSERT(resd == refd);
}
class TorchBindLiteInterpreterTestStruct
    : public torch::jit::CustomClassHolder {
 public:
  std::string get(at::Tensor t) {
    std::stringstream ss;
    ss << "Hello! Your tensor has ";
    ss << t.numel();
    ss << " elements!";
    return ss.str();
  }
};
namespace {
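// Helpers for the BuiltinClass test below so that m.define() can resolve
// "torch" to the aten builtins and "__torch__...." names to registered
// custom classes.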
struct ClassNamespaceValue : public SugaredValue {
  explicit ClassNamespaceValue(c10::QualifiedName name)
      : basename_(std::move(name)) {}

  std::shared_ptr<SugaredValue> attr(
      const SourceRange& loc,
      Function& m,
      const std::string& name) override {
    const auto fullName = c10::QualifiedName(basename_, name);

    // Check to see if it is a custom class.
    if (auto custom_class = getCustomClass(fullName.qualifiedName())) {
      return std::make_shared<ClassValue>(custom_class);
    }

    // If it's not a custom class, assume it's another namespace
    // NOLINTNEXTLINE(performance-move-const-arg)
    return std::make_shared<ClassNamespaceValue>(std::move(fullName));
  }

  std::string kind() const override {
    return "Class Namespace";
  }

 private:
  c10::QualifiedName basename_;
};

struct TestModuleResolver : public Resolver {
  std::shared_ptr<SugaredValue> resolveValue(
      const std::string& name,
      Function& m,
      const SourceRange& loc) override {
    if (name == "torch") {
      return std::make_shared<BuiltinModule>("aten");
    } else if (name == "__torch__") {
      return std::make_shared<ClassNamespaceValue>(c10::QualifiedName(name));
    }

    return nullptr;
  }

  TypePtr resolveType(const std::string& name, const SourceRange& loc)
      override {
    return nullptr;
  }
};
} // namespace
// NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables)
TEST(LiteInterpreterTest, BuiltinClass) {
  script::Module m("m");

  auto cls = getCustomClass(
      "__torch__.torch.classes._TorchScriptTesting._LiteInterpreterTest");
  TORCH_INTERNAL_ASSERT(cls);
  c10::intrusive_ptr<torch::CustomClassHolder> obj_holder;
  m.register_attribute("my_obj", cls, IValue::make_capsule(obj_holder));

  m.register_parameter("foo", torch::ones({}), false);
  m.define(
      R"(
    def __getstate__(self):
      return 1
    def __setstate__(self, a):
      self.my_obj = __torch__.torch.classes._TorchScriptTesting._LiteInterpreterTest()

    def forward(self, x) -> str:
      return self.my_obj.get(x)
  )",
      std::make_shared<TestModuleResolver>());

  std::stringstream ss;
  m._save_for_mobile(ss);
  mobile::Module bc = _load_for_mobile(ss);
  auto res =
      bc.get_method("forward")(std::vector<IValue>{torch::zeros({3, 4})});
  const auto& str = res.toStringRef();
  std::string expected = "Hello! Your tensor has 12 elements!";
  AT_ASSERT(str == expected);
}
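
// Same round trip as BuiltinClass, but the custom class instance is created
// on the C++ side with make_custom_class and registered directly as an
// attribute, so no custom resolver is needed.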
// NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables)
TEST(LiteInterpreterTest, BuiltinFunction) {
  script::Module m("m");
  auto custom_class_obj =
      make_custom_class<TorchBindLiteInterpreterTestStruct>();
  m.register_attribute("my_obj", custom_class_obj.type(), custom_class_obj);
  m.define(R"(
    def forward(self, x) -> str:
      return self.my_obj.get(x)
  )");

  std::stringstream ss;
  m._save_for_mobile(ss);
  mobile::Module bc = _load_for_mobile(ss);
  auto res =
      bc.get_method("forward")(std::vector<IValue>{torch::zeros({3, 4})});
  // NOLINTNEXTLINE(performance-unnecessary-copy-initialization)
  auto str = res.toStringRef();
  std::string expected = "Hello! Your tensor has 12 elements!";
  AT_ASSERT(str == expected);
}
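
// The ModuleInfo tests below walk the bytecode of forward() through
// get_forward_method_debug_info and collect the module-hierarchy strings it
// reports. They can be run with, e.g.,
// ./build/bin/test_jit --gtest_filter=*ModuleInfo.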
// NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables)
TEST(LiteInterpreterTest, ModuleInfoBasic) {
  Module m("M");
  m.define(R"JIT(
    def forward(self, x):
      return 2 * x
  )JIT");

  std::stringstream ss;
  m._save_for_mobile(ss, {}, true);
  mobile::Module bc = _load_for_mobile(ss);

  std::unordered_set<std::string> module_debug_info_set;
  size_t pc = 0;
  while (true) {
    try {
      std::string module_info = bc.get_forward_method_debug_info(pc);
      if (!module_info.empty() &&
          (module_info.find("debug_handle") == std::string::npos)) {
        module_debug_info_set.insert(module_info);
      }
      ++pc;
    } catch (const std::exception& e) {
      break;
    }
  }

  std::unordered_set<std::string> expected_result({"top(M)"});
  AT_ASSERT(module_debug_info_set == expected_result);
}
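
// Saving without requesting mobile debug info (the extra flag passed in
// ModuleInfoBasic is omitted here) should yield only empty module info or
// debug-handle placeholders for every bytecode instruction.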
// NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables)
TEST(LiteInterpreterTest, NotSaveModuleInfo) {
  Module m("M");
  m.define(R"JIT(
    def forward(self, x):
      return x + 5
  )JIT");

  std::stringstream ss;
  m._save_for_mobile(ss);
  mobile::Module bc = _load_for_mobile(ss);

  size_t pc = 0;
  while (true) {
    try {
      std::string module_info = bc.get_forward_method_debug_info(pc);
      AT_ASSERT(
          module_info.empty() ||
          (module_info.find("debug_handle") != std::string::npos));
      ++pc;
    } catch (const std::exception& e) {
      break;
    }
  }
}
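
// One level of nesting: B owns A as submodule "A0", so the recorded hierarchy
// should contain both top(B) and top(B).A0(A).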
// NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables)
TEST(LiteInterpreterTest, OneSubmoduleModuleInfo) {
  Module a("A");
  a.define(R"JIT(
    def forward(self, x):
      return 2 * x + 5
  )JIT");
  Module b("B");
  b.register_module("A0", a);
  b.define(R"JIT(
    def forward(self, x):
      return self.A0.forward(x) + 1
  )JIT");

  std::stringstream ss;
  b._save_for_mobile(ss, {}, true);
  mobile::Module bc = _load_for_mobile(ss);

  std::set<std::string> module_debug_info_set;
  size_t pc = 0;
  while (true) {
    try {
      std::string module_info = bc.get_forward_method_debug_info(pc);
      if (!module_info.empty() &&
          (module_info.find("debug_handle") == std::string::npos)) {
        module_debug_info_set.insert(module_info);
      }
      ++pc;
    } catch (const std::exception& e) {
      break;
    }
  }

  std::set<std::string> expected_result({"top(B)", "top(B).A0(A)"});
  AT_ASSERT(module_debug_info_set == expected_result);
}
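
// Two sibling submodules, both called from C's forward; expects top(C),
// top(C).A0(A), and top(C).B0(B) in the recorded hierarchy.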
// NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables)
TEST(LiteInterpreterTest, TwoSubmodulesModuleInfo) {
  Module a("A");
  a.define(R"JIT(
    def forward(self, x):
      return x + 1
  )JIT");
  Module b("B");
  b.define(R"JIT(
    def forward(self, x):
      return x + 2
  )JIT");
  Module c("C");
  c.register_module("A0", a);
  c.register_module("B0", b);
  c.define(R"JIT(
    def forward(self, x):
      return self.A0.forward(x) + self.B0.forward(x)
  )JIT");

  std::stringstream ss;
  c._save_for_mobile(ss, {}, true);
  mobile::Module bc = _load_for_mobile(ss);

  std::set<std::string> module_debug_info_set;
  size_t pc = 0;
  while (true) {
    try {
      std::string module_info = bc.get_forward_method_debug_info(pc);
      if (!module_info.empty() &&
          (module_info.find("debug_handle") == std::string::npos)) {
        std::cout << "Module info:" << module_info << std::endl;
        module_debug_info_set.insert(module_info);
      }
      ++pc;
    } catch (const std::exception& e) {
      break;
    }
  }

  std::set<std::string> expected_result(
      {"top(C)", "top(C).A0(A)", "top(C).B0(B)"});
  AT_ASSERT(module_debug_info_set == expected_result);
}
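
// The runtime's bytecode version should match the version the serializer
// currently produces.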
TEST(LiteInterpreterTest, GetRuntimeByteCodeVersion) {
  auto runtime_bytecode_version = _get_runtime_bytecode_version();
  AT_ASSERT(
      runtime_bytecode_version == caffe2::serialize::kProducedBytecodeVersion);
}
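
// Reads the bytecode version out of a checked-in v4 model file that lives
// next to this source file.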
TEST(LiteInterpreterTest, GetByteCodeVersion) {
  std::string filePath(__FILE__);
  auto test_model_file_v4 =
      filePath.substr(0, filePath.find_last_of("/\\") + 1);
  test_model_file_v4.append("script_module_v4.ptl");

  auto version_v4 = _get_model_bytecode_version(test_model_file_v4);
  AT_ASSERT(version_v4 == 4);
}
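
// Helpers for the backport test: runAndCheckBytecodeModel loads a (possibly
// backported) model and compares forward's outputs against the expected
// tensors; backportAllVersionCheck backports a model to every supported older
// version and verifies each result.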
namespace {
void runAndCheckBytecodeModel(
    std::stringstream& input_model_stream,
    const std::vector<IValue>& input_data,
    const std::vector<Tensor>& expect_result_list,
    const int64_t expect_version) {
  auto actual_version = _get_model_bytecode_version(input_model_stream);
  AT_ASSERT(actual_version == expect_version);

  // Load and run the backported model, then compare the result with the
  // expected result.
  mobile::Module m_mobile = _load_for_mobile(input_model_stream);

  auto actual_result = m_mobile.forward(input_data);
  std::vector<IValue> actual_result_list = actual_result.toTuple()->elements();

  AT_ASSERT(actual_result_list.size() == expect_result_list.size());
  AT_ASSERT(actual_result_list[0].toTensor().equal(expect_result_list[0]));
  AT_ASSERT(
      actual_result_list[1].toTensor().dim() == expect_result_list[1].dim());
  AT_ASSERT(actual_result_list[2].toTensor().equal(expect_result_list[2]));
}

void backportAllVersionCheck(
    std::stringstream& test_model_file_stream,
    std::vector<IValue>& input_data,
    std::vector<Tensor>& expect_result_list,
    const int64_t expect_from_version) {
  auto from_version = _get_model_bytecode_version(test_model_file_stream);
  AT_ASSERT(from_version == expect_from_version);

  // Backport script_module_v5.ptl to an older version.
  constexpr int64_t minimum_to_version = 4;
  int64_t current_to_version = from_version - 1;

  std::ostringstream oss;
  // Verify that every candidate to_version works as expected: backporting to
  // any version no smaller than minimum_to_version should succeed.
  while (current_to_version >= minimum_to_version) {
    oss.clear();
    bool backPortSuccess =
        _backport_for_mobile(test_model_file_stream, oss, current_to_version);
    AT_ASSERT(backPortSuccess);

    // Check the backported model version.
    std::stringstream iss(oss.str());
    auto backport_version = _get_model_bytecode_version(iss);
    AT_ASSERT(backport_version == current_to_version);

    // Load and run the backported model, then compare the result with the
    // expected result.
    runAndCheckBytecodeModel(
        iss, input_data, expect_result_list, current_to_version);

    current_to_version--;
  }
  // Backporting to minimum_to_version - 1 should fail.
  oss.clear();
  bool backPortSuccess =
      _backport_for_mobile(test_model_file_stream, oss, minimum_to_version - 1);
  AT_ASSERT(!backPortSuccess);
}

} // namespace
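
// Builds and freezes a small convolution module, saves it at the current
// bytecode version, and exercises the backport path for all supported target
// versions via the helpers above.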
TEST(LiteInterpreterTest, BackPortByteCodeModelAllVersions) {
  torch::jit::Module module("m");
  // NOLINTNEXTLINE(cppcoreguidelines-avoid-magic-numbers)
  module.register_parameter("weight", torch::ones({20, 1, 5, 5}), false);
  // NOLINTNEXTLINE(cppcoreguidelines-avoid-magic-numbers)
  module.register_parameter("bias", torch::ones({20}), false);
  module.define(R"(
    def forward(self, input):
      x1 = torch.zeros(2, 2)
      x2 = torch.empty_like(torch.empty(2, 2))
      x3 = torch._convolution(input, self.weight, self.bias, [1, 1], [0, 0], [1, 1], False, [0, 0], 1, False, False, True, True)
      return (x1, x2, x3)
  )");

  torch::jit::Module module_freeze = freeze(module);

  std::stringstream input_model_stream;
  module_freeze._save_for_mobile(input_model_stream);
  std::vector<IValue> input_data =
      std::vector<IValue>({torch::ones({1, 1, 28, 28})});
  std::vector<Tensor> expect_result_list;
  expect_result_list.emplace_back(at::ones({2, 2}, ScalarType::Float) * 0);
  expect_result_list.emplace_back(at::ones({2, 2}, ScalarType::Float));
  expect_result_list.emplace_back(
      at::ones({1, 20, 24, 24}, ScalarType::Float) * 26);
  backportAllVersionCheck(
      input_model_stream,
      input_data,
      expect_result_list,
      caffe2::serialize::kProducedBytecodeVersion);
}
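
// Sanity check that the runtime exposes its operator table and that it is
// non-trivially populated.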
TEST(LiteInterpreterTest, GetRuntimeOpsAndInfo) {
  auto runtime_ops = _get_runtime_ops_and_info();
  // Ballpark estimate of the minimal number of ops; just used to
  // verify API returns a reasonably large number.
  AT_ASSERT(runtime_ops.size() > 2900);
}
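
// Like TwoSubmodulesModuleInfo, but C calls its submodules sequentially
// (A0 applied to the output of B0); the expected hierarchy is the same.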
// NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables)
|
2020-09-25 18:35:39 +00:00
|
|
|
TEST(LiteInterpreterTest, SequentialModuleInfo) {
|
2020-08-14 08:23:53 +00:00
|
|
|
Module a("A");
|
|
|
|
|
a.define(R"JIT(
|
|
|
|
|
def forward(self, x):
|
|
|
|
|
return x + 1
|
|
|
|
|
)JIT");
|
|
|
|
|
Module b("B");
|
|
|
|
|
b.define(R"JIT(
|
|
|
|
|
def forward(self, x):
|
|
|
|
|
return x + 2
|
|
|
|
|
)JIT");
|
|
|
|
|
Module c("C");
|
|
|
|
|
c.register_module("A0", a);
|
|
|
|
|
c.register_module("B0", b);
|
|
|
|
|
c.define(R"JIT(
|
|
|
|
|
def forward(self, x):
|
|
|
|
|
return self.A0.forward(self.B0.forward(x))
|
|
|
|
|
)JIT");
|
|
|
|
|
|
|
|
|
|
std::stringstream ss;
|
|
|
|
|
c._save_for_mobile(ss, {}, true);
|
|
|
|
|
mobile::Module bc = _load_for_mobile(ss);
|
|
|
|
|
|
[Pytorch, Mobile] Serialize inlined callstack pointer with debug handle. (#55062)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55062
This diff introduces the following changes:
1. InlinedCallStack pickler/serializer is introduced. It is serialized
as a tuple of {module_instance_info, source range tag, callee:InlinedCallStack}
Module instance info is serialized as tuple of {class_type_name,
instance_name}.
Note that callee of the serialized inlined callstack points to the tuple
of already serialized callstack. This means the first callstack ptr to
serialize, will serialize entire path of the tree, where some callee
nodes might be shared with callstack pointers that will be serialized
subsequently. Pickler supports memoization of pickled objects, where if
a tuple has been serialized then object id is obtained instead of
serialized object again. Thus we stll serialize the tree and not every
path from the root separately. Furthermore, InlinedCallStackSerializer
also uses cache to lookup the pointer and return the serialized IValue.
Furthermore, note that we must also serialize the source range of
InlinedCallStack. In order to this serializer requires map of
source-range-tags-to-source-range map. This was done in the previous
diff, where as part of source range serialization we also generate
unique tags. These are the tags that are serialized in InlinedCallStack.
Thus during deserialization we would have to deserialize source range
before deserializing InlinedCallStacks.
2. Furthermore, each serialized InlinedCallStack is serialized with a
unique debug_handle and source range tag.
BackendDebugHandleManager manages generation of
unique debug handles and saves the map of
debug-handles-to-{source_range_tag, inlined-callstack-ptr}.
This map is then serialized as callstack_debug_map.pkl. Note that
inlined callstack is not sufficient to get all the source information
since it contains source information about the nodes which are inlined.
The top-of-the-stack (or bottom) node, which is the actual op node, is
not part of the inlined callstack pointer and thus the source range of
this node is serialized separately using source_range_tag. This is
similar to how JIT creates callstack in
torch/csrc/jit/runtime/interpreter.cpp
Unique debug handles facilitates exception throwing or profiling using
just the debug handle without any further qualifications, such as which
function or module the inlined-callstack belongs to.
Furthermore, this diff refactors the old mobile code for tracking
module hierarchy information per op. Mainly now bytecode serialization
will serialize debug handles corresponding to ops/nodes in graph and
have callstack_debug_map.pkl help generate:
1. Entire callstack and
2. Module hierarchy information.
Test Plan:
python test/mobile/test_lite_script_module.py TestLiteScriptModule
./build/bin/test_jit --gtest_filter=*ModuleInfo
Imported from OSS
Reviewed By: raziel
Differential Revision: D27468709
fbshipit-source-id: 53e2413e7703ead01c77718b7c333c7c6ff50a23
2021-05-04 16:17:43 +00:00
|
|
|
std::set<std::string> module_debug_info_set;
|
2020-08-14 08:23:53 +00:00
|
|
|
size_t pc = 0;
|
|
|
|
|
while (true) {
|
|
|
|
|
try {
|
|
|
|
|
std::string module_info = bc.get_forward_method_debug_info(pc);
|
      if (!module_info.empty() &&
          (module_info.find("debug_handle") == std::string::npos)) {
        module_debug_info_set.insert(module_info);
      }
      ++pc;
    } catch (const std::exception& e) {
      break;
    }
  }
  // class A(nn.Module):
  //   def __init__(self):
  //     super(A, self).__init__()

  //   def forward(self, x):
  //     return x + 1

  // class B(nn.Module):
  //   def __init__(self):
  //     super(B, self).__init__()

  //   def forward(self, x):
  //     return x + 2

  // class C(nn.Module):
  //   def __init__(self):
  //     super(C, self).__init__()
  //     self.A0 = A()
  //     self.B0 = B()

  //   def forward(self, x):
  //     return self.A0.forward(self.B0.forward(x))

  std::set<std::string> expected_result(
      {"top(C)", "top(C).A0(A)", "top(C).B0(B)"});
  AT_ASSERT(module_debug_info_set == expected_result);
}

Make PyTorch code-base clang-tidy compliant (#56892)
Summary:
This is an automatic change generated by the following script:
```
#!/usr/bin/env python3
from subprocess import check_output, check_call
import os
def get_compiled_files_list():
import json
with open("build/compile_commands.json") as f:
data = json.load(f)
files = [os.path.relpath(node['file']) for node in data]
for idx, fname in enumerate(files):
if fname.startswith('build/') and fname.endswith('.DEFAULT.cpp'):
files[idx] = fname[len('build/'):-len('.DEFAULT.cpp')]
return files
def run_clang_tidy(fname):
check_call(["python3", "tools/clang_tidy.py", "-c", "build", "-x", fname,"-s"])
changes = check_output(["git", "ls-files", "-m"])
if len(changes) == 0:
return
check_call(["git", "commit","--all", "-m", f"NOLINT stubs for {fname}"])
def main():
git_files = check_output(["git", "ls-files"]).decode("ascii").split("\n")
compiled_files = get_compiled_files_list()
for idx, fname in enumerate(git_files):
if fname not in compiled_files:
continue
if fname.startswith("caffe2/contrib/aten/"):
continue
print(f"[{idx}/{len(git_files)}] Processing {fname}")
run_clang_tidy(fname)
if __name__ == "__main__":
main()
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56892
Reviewed By: H-Huang
Differential Revision: D27991944
Pulled By: malfet
fbshipit-source-id: 5415e1eb2c1b34319a4f03024bfaa087007d7179
2021-04-28 21:09:06 +00:00

// NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables)
TEST(LiteInterpreterTest, HierarchyModuleInfo) {
  Module a("A");
  a.define(R"JIT(
    def forward(self, x):
      return x + 1
  )JIT");
  Module b("B");
  b.register_module("A0", a);
  b.define(R"JIT(
    def forward(self, x):
      return self.A0.forward(x) + 1
  )JIT");
  Module c("C");
  c.register_module("B0", b);
  c.define(R"JIT(
    def forward(self, x):
      return self.B0.forward(x) + 1
  )JIT");

  std::stringstream ss;
  c._save_for_mobile(ss, {}, true);
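  // Passing true as the third argument asks _save_for_mobile to also emit
  // mobile debug info (the callstack_debug_map.pkl described in the commit
  // message above).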
  mobile::Module bc = _load_for_mobile(ss);
  std::set<std::string> module_debug_info_set;
  size_t pc = 0;
  while (true) {
    try {
      std::string module_info = bc.get_forward_method_debug_info(pc);
      if (!module_info.empty() &&
          (module_info.find("debug_handle") == std::string::npos)) {
        module_debug_info_set.insert(module_info);
      }
      ++pc;
    } catch (const std::exception& e) {
      break;
    }
  }

  // There are 3 module information strings here.
  // "top(C).forward": for the add operator in top.
  // "top(C).B0(B).forward": for the add operator in B0.
  // "top(C).B0(B).forward.A0(A).forward": for the add operator in A0.
  std::set<std::string> expected_result(
      {"top(C)", "top(C).B0(B)", "top(C).B0(B).A0(A)"});
  AT_ASSERT(module_debug_info_set == expected_result);
}

// NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables)
TEST(LiteInterpreterTest, DuplicatedClassTypeModuleInfo) {
  Module a("A");
  a.define(R"JIT(
    def forward(self, x):
      return x + 5
  )JIT");
  Module b("B");
  b.register_module("A0", a);
  b.register_module("A1", a);
  b.define(R"JIT(
    def forward(self, x):
      return self.A0.forward(x) + self.A1.forward(x)
  )JIT");

  std::stringstream ss;
  b._save_for_mobile(ss, {}, true);
  mobile::Module bc = _load_for_mobile(ss);
  std::set<std::string> module_debug_info_set;
  size_t pc = 0;
  while (true) {
    try {
      std::string module_info = bc.get_forward_method_debug_info(pc);
      if (!module_info.empty() &&
          (module_info.find("debug_handle") == std::string::npos)) {
        module_debug_info_set.insert(module_info);
      }
      ++pc;
    } catch (const std::exception& e) {
      break;
    }
  }

  // class A(nn.Module):
  //   def __init__(self):
  //     super(A, self).__init__()

  //   def forward(self, x):
  //     return x + 5

  // class B(nn.Module):
  //   def __init__(self):
  //     super(B, self).__init__()
  //     self.A0 = A()
  //     self.A1 = A()

  //   def forward(self, x):
  //     return self.A0.forward(x) + self.A1.forward(x)

  // There are 3 module information strings here.
  // "top(B).forward": for the add operator in top.
  // "top(B).A0(A).forward": for the add operator in A0.
  // "top(B).A1(A).forward": for the add operator in A1.
  std::set<std::string> expected_result(
      {"top(B)", "top(B).A0(A)", "top(B).A1(A)"});
  AT_ASSERT(module_debug_info_set == expected_result);
}

// NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables)
TEST(LiteInterpreterTest, Eval) {
  std::vector<torch::jit::IValue> inputs;

  Module m("m");
  m.define(R"(
    def __init__(self, x):
      self.training = True

    def forward(self, input):
      return torch.dropout(input, 1.0, self.training)
  )");
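  // With p = 1.0, dropout zeroes its input while self.training is true and is
  // the identity in eval mode, so the loaded module only reproduces outputref
  // after bc.eval() switches training back off.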
  // NOLINTNEXTLINE(cppcoreguidelines-avoid-magic-numbers,modernize-use-emplace)
  inputs.push_back(torch::ones({1, 1, 28, 28}));
  m.eval();
  auto outputref = m.forward(inputs).toTensor();

  // save m in training mode to make sure that mobile eval() will correctly
  // change back to eval mode
  m.train();
  std::stringstream ss;
  m._save_for_mobile(ss);
  mobile::Module bc = _load_for_mobile(ss);
  bc.eval();
  IValue res;
  for (int i = 0; i < 3; ++i) {
    res = bc.get_method("forward")(inputs);
  }
  auto output = res.toTensor();
  AT_ASSERT(outputref.dim() == output.dim());
  AT_ASSERT(
      outputref[0][0][0][0].item<int>() == output[0][0][0][0].item<int>());
}

// NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables)
TEST(LiteInterpreterTest, FindWrongMethodName) {
  Module m("m");
  m.register_parameter("foo", torch::ones({}), false);
  m.define(R"(
    def add(self, x):
      b = 4
      return self.foo + x + b
  )");
  std::stringstream ss;
  m._save_for_mobile(ss);
  mobile::Module bc = _load_for_mobile(ss);
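  // find_method reports a missing method by returning c10::nullopt rather
  // than throwing.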
  ASSERT_TRUE(bc.find_method("forward") == c10::nullopt);
}

// NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables)
TEST(LiteInterpreterTest, FindAndRunMethod) {
  Module m("m");
  m.register_parameter("foo", torch::ones({}), false);
  m.define(R"(
    def add_it(self, x):
      b = 4
      return self.foo + x + b
  )");

  std::vector<IValue> inputs;
  auto minput = 5 * torch::ones({});
  inputs.emplace_back(minput);
  auto ref = m.get_method("add_it")(inputs);

  std::stringstream ss;
  m._save_for_mobile(ss);
  mobile::Module bc = _load_for_mobile(ss);
  IValue res;
  for (int i = 0; i < 3; ++i) {
    auto bcinputs = inputs;
    auto method = bc.find_method("add_it");
    AT_ASSERT(method != c10::nullopt);
    res = (*method)(std::move(bcinputs));
  }

  auto resd = res.toTensor().item<float>();
  auto refd = ref.toTensor().item<float>();
  AT_ASSERT(resd == refd);
}

// NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables)
TEST(LiteInterpreterTest, RunMethodVariadic) {
  Module m("m");
  m.register_parameter("foo", torch::ones({}), false);
  m.define(R"(
    def add_three(self, x, y):
      return self.foo + x + y
  )");

  std::vector<IValue> inputs;
  auto inputx = 5 * torch::ones({});
  auto inputy = 4 * torch::ones({});
  auto ref = m.run_method("add_three", inputx, inputy);

  std::stringstream ss;
  m._save_for_mobile(ss);
  mobile::Module bc = _load_for_mobile(ss);
  IValue res = bc.run_method("add_three", inputx, inputy);

  auto resd = res.toTensor().item<float>();
  auto refd = ref.toTensor().item<float>();
  AT_ASSERT(resd == refd);
}

// NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables)
TEST(LiteInterpreterTest, DuplicateSetState) {
  Module m("M");
  m.register_parameter("foo", torch::ones({}), false);
  m.define(R"(
    def __getstate__(self):
      return self.foo + self.foo
    def __setstate__(self, a):
      self.foo = a
    def forward(self, x):
      b = 4
      return self.foo + x + b
  )");

  Module b("B");
  b.register_module("M0", m);
  b.register_module("M1", m);
  b.define(R"(
    def forward(self, x):
      return self.M0.forward(x) + self.M1.forward(x)
  )");

  std::stringstream ss;
  m._save_for_mobile(ss);
  mobile::Module bc = _load_for_mobile(ss);
  const auto methods = bc.get_methods();
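  // m defines exactly __getstate__, __setstate__ and forward, so the
  // round-tripped module is expected to expose three methods.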
  const size_t expected_n = 3;
  ASSERT_EQ(methods.size(), expected_n);
}

// NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables)
TEST(LiteInterpreterTest, ExtraFiles) {
  const auto script = R"JIT(
    def forward(self):
      x = torch.rand(5, 5)
      x = x.mm(x)
      return x
  )JIT";

  auto module =
      std::make_shared<Module>("Module", std::make_shared<CompilationUnit>());
  module->define(script);
  std::ostringstream oss;
  std::unordered_map<std::string, std::string> extra_files;
  extra_files["metadata.json"] = "abc";
  extra_files["mobile_info.json"] = "{\"key\": 23}";
  module->_save_for_mobile(oss, extra_files);

  std::istringstream iss(oss.str());
  caffe2::serialize::IStreamAdapter adapter{&iss};
  std::unordered_map<std::string, std::string> loaded_extra_files;
  loaded_extra_files["metadata.json"] = "";
  torch::jit::_load_for_mobile(iss, torch::kCPU, loaded_extra_files);
  ASSERT_EQ(loaded_extra_files["metadata.json"], "abc");

  loaded_extra_files.clear();
  std::vector<std::string> all_files =
      caffe2::serialize::PyTorchStreamReader(&iss).getAllRecords();
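  // Extra files live under the "extra/" prefix in the archive; enumerating
  // all records and stripping that prefix discovers files (like
  // mobile_info.json) that were not pre-registered in loaded_extra_files.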
  for (auto& file_name : all_files) {
    if (file_name.find("extra/") == 0) {
      loaded_extra_files[file_name.substr(6)] = "";
    }
  }

  torch::jit::_load_for_mobile(iss, torch::kCPU, loaded_extra_files);
  ASSERT_EQ(loaded_extra_files["metadata.json"], "abc");
  ASSERT_EQ(loaded_extra_files["mobile_info.json"], "{\"key\": 23}");
}

// NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables)
TEST(LiteInterpreterTest, OpNameExportFetchRootOperators) {
  torch::jit::Module m("m");
  m.register_parameter("weight", torch::ones({20, 1, 5, 5}), false);
  m.register_parameter("bias", torch::ones({20}), false);
  m.define(R"(
    def forward(self, input):
      x1 = torch.zeros(2, 2)
      x2 = torch.empty_like(torch.empty(2, 2))
      x3 = torch._convolution(input, self.weight, self.bias, [1, 1], [0, 0], [1, 1], False, [0, 0], 1, False, False, True, True)
      return (x1, x2, x3)
  )");
  m.eval();

  std::stringstream ss;
  m._save_for_mobile(ss);

  torch::jit::mobile::Module ptl_model = torch::jit::_load_for_mobile(ss);
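  // _export_operator_list inspects the loaded bytecode and returns the names
  // of the root operators the model calls directly.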
  std::set<std::string> operator_names =
      torch::jit::mobile::_export_operator_list(ptl_model);
  std::set<std::string> expected_operator_names = {
      "aten::_convolution",
      "aten::empty.memory_format",
      "aten::empty_like",
      "aten::zeros",
  };
  EXPECT_EQ(operator_names, expected_operator_names)
      << "Expected the root operator lists to be the same";
}

namespace {
// NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables)
static auto reg =
    torch::class_<TorchBindLiteInterpreterTestStruct>(
        "_TorchScriptTesting",
        "_LiteInterpreterTest")
        .def(torch::init<>())
        .def("get", &TorchBindLiteInterpreterTestStruct::get)
        .def_pickle(
            // __getstate__
            [](const c10::intrusive_ptr<TorchBindLiteInterpreterTestStruct>&
                   self) -> int64_t { return 0; },
            // __setstate__
            [](int64_t state) {
              return c10::make_intrusive<TorchBindLiteInterpreterTestStruct>();
            });

} // namespace
} // namespace jit
} // namespace torch