#include <test/cpp/jit/test_utils.h>

#include <c10/core/TensorOptions.h>
#include <gtest/gtest.h>
#include <torch/csrc/autograd/generated/variable_factories.h>
#include <torch/csrc/jit/api/module.h>
#include <torch/csrc/jit/frontend/resolver.h>
#include <torch/csrc/jit/mobile/compatibility/backport.h>
#include <torch/csrc/jit/mobile/compatibility/backport_manager.h>
#include <torch/csrc/jit/mobile/compatibility/model_compatibility.h>
#include <torch/csrc/jit/mobile/compatibility/runtime_compatibility.h>
#include <torch/csrc/jit/mobile/import.h>
#include <torch/csrc/jit/mobile/interpreter.h>
#include <torch/csrc/jit/mobile/module.h>
#include <torch/csrc/jit/mobile/parse_bytecode.h>
#include <torch/csrc/jit/mobile/parse_operators.h>
#include <torch/csrc/jit/mobile/upgrader_mobile.h>
#include <torch/csrc/jit/serialization/export.h>
#include <torch/csrc/jit/serialization/import.h>
#include <torch/custom_class.h>
#include <torch/torch.h>

#include <torch/csrc/jit/serialization/import_export_functions.h>

#include <unordered_set>

// Tests go in torch::jit
namespace torch {
namespace jit {
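
// Most of the tests below follow the same round-trip pattern; a minimal
// sketch (assuming a Module `m` whose methods are already defined):
//
//   std::stringstream ss;
//   m._save_for_mobile(ss);                    // serialize in bytecode format
//   mobile::Module bc = _load_for_mobile(ss);  // load into the lite interpreter
//   auto result = bc.get_method("forward")(inputs);
//
// The mobile result is then compared against the full-JIT reference obtained
// from m.forward() or m.run_method().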

TEST(LiteInterpreterTest, UpsampleNearest2d) {
  Module m("m");
  m.define(R"(
    def forward(self, input: Tensor, scale:float):
      return torch.upsample_nearest2d(input, [1, 1], float(scale), float(scale))
  )");

  std::vector<IValue> inputs;
  inputs.emplace_back(torch::rand({1, 3, 128, 128}));
  inputs.emplace_back(at::Scalar(2.0));
  auto ref = m.forward(inputs);

  std::stringstream ss;
  m._save_for_mobile(ss);
  mobile::Module bc = _load_for_mobile(ss);
  IValue res;
  res = bc.forward(inputs);

  auto resd = res.toTensor();
  auto refd = ref.toTensor();
  ASSERT_TRUE(resd.equal(refd));
}

TEST(LiteInterpreterTest, CheckAttrAccess) {
  Module m("m");
  m.register_attribute("mobile_optimized", BoolType::get(), true);

  std::stringstream ss;
  m._save_for_mobile(ss);
  mobile::Module bc = _load_for_mobile(ss);
  bool mobile_optimized = bc.attr("mobile_optimized", false).toBool();

  AT_ASSERT(mobile_optimized);
  m.setattr("mobile_optimized", false);
  ss = std::stringstream();
  m._save_for_mobile(ss);
  bc = _load_for_mobile(ss);
  mobile_optimized = bc.attr("mobile_optimized", false).toBool();

  AT_ASSERT(!mobile_optimized);
}
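// Note: the second argument to mobile::Module::attr() is presumably a
// fallback returned only when the attribute is missing, so the `false`
// passed above should not mask the serialized value that the assertions
// actually exercise.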

TEST(LiteInterpreterTest, MethodInvocation) { // NOLINT (use =delete in gtest)
  const std::vector<std::string> test_programs{
      // test invoking a method with default parameter
      R"(
      def test_func(self, x, b : int = 4):
        return self.foo + x + b
      )",
      // inner method call with default parameter (gets inlined)
      R"(
      def add_with_default_arg(self, x, b : int = 4):
        return self.foo + x + b
      def test_func(self, x):
        return self.add_with_default_arg(x)  # invoke method w/ default arg
      )",
      // simple method call
      R"(
      def test_func(self, x):
        b = 4
        return self.foo + x + b
      )",
  };
  for (const auto& test_program : test_programs) {
    Module m("m");
    m.register_parameter("foo", torch::ones({}), false);
    m.define(test_program);

    const int fortyTwo = 42; // (keep linter happy)
    auto minput = fortyTwo * torch::ones({});
    auto ref = m.run_method("test_func", minput);

    std::stringstream ss;
    m._save_for_mobile(ss);
    mobile::Module bc = _load_for_mobile(ss);
    const auto& test_func = bc.get_method("test_func");
    IValue res;
    for (int i = 0; i < 3; ++i) {
      res = test_func({minput});
    }

    auto resd = res.toTensor().item<float>();
    auto refd = ref.toTensor().item<float>();
    AT_ASSERT(resd == refd);
  }
}
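// Each program above should evaluate to foo + 42 + 4 == 47. The mobile
// method is invoked three times on purpose, presumably to check that
// repeated calls on the same mobile::Method do not carry state between
// invocations.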

TEST(LiteInterpreterTest, Conv) {
  auto s = std::getenv("PYTORCH_TEST_WITH_TSAN");
  if (s && strcmp(s, "1") == 0)
    return;

  std::vector<torch::jit::IValue> inputs;

  Module m("m");
  m.register_parameter("weight", torch::ones({20, 1, 5, 5}), false);
  m.register_parameter("bias", torch::ones({20}), false);
  m.define(R"(
    def forward(self, input):
      return torch._convolution(input, self.weight, self.bias, [1, 1], [0, 0], [1, 1], False, [0, 0], 1, False, False, True, True)
  )");

  // NOLINTNEXTLINE(cppcoreguidelines-avoid-magic-numbers,modernize-use-emplace)
  inputs.push_back(torch::ones({1, 1, 28, 28}));

  auto outputref = m.forward(inputs).toTensor();

  std::stringstream ss;
  m._save_for_mobile(ss);
  mobile::Module bc = _load_for_mobile(ss);
  IValue res;
  for (int i = 0; i < 3; ++i) {
    res = bc.get_method("forward")(inputs);
  }
  auto output = res.toTensor();
  AT_ASSERT(outputref.dim() == output.dim());
  AT_ASSERT(
      outputref[0][0][0][0].item<int>() == output[0][0][0][0].item<int>());
}
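// With all-ones input and 5x5 all-ones weights, each output element is a
// 25-element window sum plus a bias of 1, i.e. 26 everywhere. The test is
// skipped under TSAN via the PYTORCH_TEST_WITH_TSAN environment variable,
// presumably because the convolution path is too slow or noisy there.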

TEST(LiteInterpreterTest, Inline) {
  Module m("m");
  m.define(R"JIT(
  def foo1(self, x):
      return x + 1

  def foo2(self, x):
      return self.foo1(x) + 2

  def foo3(self, x):
      return self.foo2(x) + 3
  )JIT");
  std::stringstream ss;
  m._save_for_mobile(ss);
  mobile::Module bc = _load_for_mobile(ss);
  std::vector<torch::jit::IValue> inputs({torch::ones({})});
  auto output = bc.get_method("foo3")(inputs);
  AT_ASSERT(output.toTensor().item<float>() == 7.0);
}
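// Worked through: torch::ones({}) is 1, so the inlined call chain computes
// ((1 + 1) + 2) + 3 == 7, which is the value the assertion above checks.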

TEST(LiteInterpreterTest, Tuple) {
  Module m("m");
  m.define(R"JIT(
  def foo(self, x):
      return (1, 2, x + 3)

  def forward(self, x):
      tuple = self.foo(x)
      return tuple
  )JIT");
  std::stringstream ss;
  m._save_for_mobile(ss);
  mobile::Module bc = _load_for_mobile(ss);
  std::vector<torch::jit::IValue> inputs({torch::ones({})});
  auto output = bc.get_method("forward")(inputs);
  AT_ASSERT(output.toTupleRef().elements()[1].toInt() == 2);
}

TEST(LiteInterpreterTest, AtenFormat) {
  Module m("m");
  m.define(R"""(
  def forward(self, fmt:str="first {} {}", num:str="abc"):
    x = 2
    x = x * x
    return fmt.format(num, x)
  )""");
  std::stringstream ss;
  m._save_for_mobile(ss);
  mobile::Module bc = _load_for_mobile(ss);
  std::vector<torch::jit::IValue> inputs;
  auto output_bc = bc.get_method("forward")(inputs);
  auto output_m = m.get_method("forward")(inputs);
  // std::cout << output_m.toStringRef() << "\n"
  //           << output_bc.toStringRef() << std::endl;
  AT_ASSERT(output_m.toStringRef() == output_bc.toStringRef());
}
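// With the default arguments, both runtimes should format "first abc 4"
// (num == "abc", x == 2 * 2); the assertion only checks that the full-JIT
// and lite-interpreter outputs agree.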

TEST(LiteInterpreterTest, PrimDevice) {
  Module m("m");
  m.define(R"""(
  def forward(self, x:torch.Tensor):
    return x.device
  )""");
  std::stringstream ss;
  m._save_for_mobile(ss);
  mobile::Module bc = _load_for_mobile(ss);
  std::vector<torch::jit::IValue> inputs;
  auto minput = 3.5 * torch::ones({});
  inputs.emplace_back(minput);
  auto output_bc = bc.get_method("forward")(inputs);
  auto output_m = m.get_method("forward")(inputs);
  AT_ASSERT(output_bc.toDevice().str() == output_m.toDevice().str());
}

TEST(LiteInterpreterTest, Dict) {
  Module m("m");
  m.define(R"JIT(
  def foo(self, x):
      return {"result": x + 1}

  def forward(self, x):
      d = self.foo(x)
      return d
  )JIT");
  std::stringstream ss;
  m._save_for_mobile(ss);
  mobile::Module bc = _load_for_mobile(ss);
  std::vector<torch::jit::IValue> inputs({torch::ones({})});
  auto output = bc.get_method("forward")(inputs);
  AT_ASSERT(output.toGenericDict().at("result").toTensor().item().toInt() == 2);
}
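// forward(1) should return {"result": 2}; the assertion reads the entry
// back through the generic dict API to confirm that dict-valued outputs
// survive the mobile round trip.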

TEST(LiteInterpreterTest, List) {
  Module m("m");
  m.define(R"JIT(
  def foo(self, x):
      return [x + 2]

  def forward(self, x):
      d = self.foo(x)
      return d
  )JIT");
  std::stringstream ss;
  m._save_for_mobile(ss);
  mobile::Module bc = _load_for_mobile(ss);
  std::vector<torch::jit::IValue> inputs({torch::ones({})});
  auto output = bc.get_method("forward")(inputs);
  auto server_output = m.forward(inputs);
  EXPECT_EQ(output.toList().get(0).toTensor().item().toInt(), 3);
  EXPECT_EQ(output, server_output);
}
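// The second EXPECT_EQ compares whole IValues via IValue::operator==, which
// presumably performs an element-wise comparison for lists, so both the
// single element and the container as a whole are checked against the
// full-JIT result.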

TEST(LiteInterpreterTest, PrimOverload) {
  /*
  // temporarily disabled
  script::Module m("m");
  m.define(R"JIT(
  def forward(self, x):
      result = [1, 2]
      result.append(3)
      return result
  )JIT");
  std::stringstream ss;
  m._save_for_mobile(ss);
  mobile::Module bc = _load_for_mobile(ss);
  std::vector<torch::jit::IValue> inputs({torch::ones({})});
  auto output = bc.get_method("forward")(inputs);
  AT_ASSERT(output.toIntList()[2] == 3);
  */
}

TEST(LiteInterpreterTest, Prim) {
  Module m("m");
  m.define(R"JIT(
  def forward(self, x):
      return int(x)
  )JIT");

  std::vector<IValue> inputs;
  auto minput = 3.5 * torch::ones({});
  inputs.emplace_back(minput);
  auto ref = m.run_method("forward", minput);

  std::stringstream ss;
  m._save_for_mobile(ss);
  mobile::Module bc = _load_for_mobile(ss);
  IValue res;
  for (int i = 0; i < 3; ++i) {
    // NOLINTNEXTLINE(performance-unnecessary-copy-initialization)
    auto bcinputs = inputs;
    res = bc.get_method("forward")(bcinputs);
  }

  auto resi = res.toInt();
  auto refi = ref.toInt();
  AT_ASSERT(resi == refi);
}

TEST(LiteInterpreterTest, PrimScalar) {
  Module m("m");
  m.define(R"JIT(
  def forward(self, x):
      return int(x.item())
  )JIT");

  std::vector<IValue> inputs;
  auto minput = 3.5 * torch::ones({});
  inputs.emplace_back(minput);
  auto ref = m.run_method("forward", minput);

  std::stringstream ss;
  m._save_for_mobile(ss);
  mobile::Module bc = _load_for_mobile(ss);
  IValue res;
  for (int i = 0; i < 3; ++i) {
    // NOLINTNEXTLINE(performance-unnecessary-copy-initialization)
    auto bcinputs = inputs;
    res = bc.get_method("forward")(bcinputs);
  }

  auto resi = res.toInt();
  auto refi = ref.toInt();
  AT_ASSERT(resi == refi);
}

TEST(LiteInterpreterTest, LoadOrigJit) {
  Module m("m");
  m.register_parameter("foo", torch::ones({}), false);
  m.define(R"(
    def forward(self, x):
      b = 4
      return self.foo + x + b
  )");
  std::stringstream ss;
  m.save(ss);
  ASSERT_THROWS_WITH_MESSAGE(_load_for_mobile(ss), "file not found");
}
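// Module::save() writes a full TorchScript archive without any bytecode
// records, so _load_for_mobile is expected to throw; "file not found"
// presumably refers to the bytecode entry missing from inside the archive,
// not to the stream itself.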

TEST(LiteInterpreterTest, WrongMethodName) {
  Module m("m");
  m.register_parameter("foo", torch::ones({}), false);
  m.define(R"(
    def add(self, x):
      b = 4
      return self.foo + x + b
  )");
  std::stringstream ss;
  m._save_for_mobile(ss);
  mobile::Module bc = _load_for_mobile(ss);
  std::vector<IValue> inputs;
  auto minput = 5 * torch::ones({});
  inputs.emplace_back(minput);
  ASSERT_THROWS_WITH_MESSAGE(
      bc.get_method("forward")(inputs), "is not defined");
}

TEST(LiteInterpreterTest, SetState) {
  Module m("m");
  m.register_parameter("foo", torch::ones({}), false);
  m.define(R"(
    def __getstate__(self):
      return self.foo + self.foo
    def __setstate__(self, a):
      self.foo = a
    def forward(self, x):
      b = 4
      return self.foo + x + b
  )");

  std::vector<IValue> inputs;
  auto minput = 5 * torch::ones({});
  inputs.emplace_back(minput);

  std::stringstream ms;
  m.save(ms);
  auto loaded_m = load(ms);
  auto ref = loaded_m.run_method("forward", minput);

  std::stringstream ss;
  m._save_for_mobile(ss);
  mobile::Module bc = _load_for_mobile(ss);
  IValue res;
  for (int i = 0; i < 3; ++i) {
    // NOLINTNEXTLINE(performance-unnecessary-copy-initialization)
    auto bcinputs = inputs;
    res = bc.get_method("forward")(bcinputs);
  }

  auto resd = res.toTensor().item<float>();
  auto refd = ref.toTensor().item<float>();
  AT_ASSERT(resd == refd);
}
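// Worked through: __getstate__ serializes foo + foo == 2 and __setstate__
// restores foo = 2, so both the reloaded JIT module and the mobile module
// should compute 2 + 5 + 4 == 11, confirming that the pickle hooks run in
// both load paths.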

class TorchBindLiteInterpreterTestStruct
    : public torch::jit::CustomClassHolder {
 public:
  std::string get(at::Tensor t) {
    std::stringstream ss;
    ss << "Hello! Your tensor has ";
    ss << t.numel();
    ss << " elements!";
    return ss.str();
  }
};

namespace {
struct ClassNamespaceValue : public SugaredValue {
  explicit ClassNamespaceValue(c10::QualifiedName name)
      : basename_(std::move(name)) {}

  std::shared_ptr<SugaredValue> attr(
      const SourceRange& loc,
      GraphFunction& m,
      const std::string& name) override {
    const auto fullName = c10::QualifiedName(basename_, name);

    // Check to see if it is a custom class.
    if (auto custom_class = getCustomClass(fullName.qualifiedName())) {
      return std::make_shared<ClassValue>(custom_class);
    }

    // If it's not a custom class, assume it's another namespace
    // NOLINTNEXTLINE(performance-move-const-arg)
    return std::make_shared<ClassNamespaceValue>(std::move(fullName));
  }

  std::string kind() const override {
    return "Class Namespace";
  }

 private:
  c10::QualifiedName basename_;
};

struct TestModuleResolver : public Resolver {
  std::shared_ptr<SugaredValue> resolveValue(
      const std::string& name,
      GraphFunction& m,
      const SourceRange& loc) override {
    if (name == "torch") {
      return std::make_shared<BuiltinModule>("aten");
    } else if (name == "__torch__") {
      return std::make_shared<ClassNamespaceValue>(c10::QualifiedName(name));
    }

    return nullptr;
  }

  TypePtr resolveType(const std::string& name, const SourceRange& loc)
      override {
    return nullptr;
  }
};
} // namespace
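
// The resolver above walks qualified names one segment at a time: each
// `attr` lookup either resolves to a registered custom class or returns
// another namespace value, so TorchScript source referring to
// `__torch__.torch.classes....` can bind a torchbind class in the
// BuiltinClass test below without a Python environment.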

TEST(LiteInterpreterTest, BuiltinClass) {
  script::Module m("m");

  auto cls = getCustomClass(
      "__torch__.torch.classes._TorchScriptTesting._LiteInterpreterTest");
  TORCH_INTERNAL_ASSERT(cls);
  c10::intrusive_ptr<torch::CustomClassHolder> obj_holder;
  m.register_attribute("my_obj", cls, IValue::make_capsule(obj_holder));

  m.register_parameter("foo", torch::ones({}), false);
  m.define(
      R"(
    def __getstate__(self):
      return 1
    def __setstate__(self, a):
      self.my_obj = __torch__.torch.classes._TorchScriptTesting._LiteInterpreterTest()

    def forward(self, x) -> str:
      return self.my_obj.get(x)
  )",
      std::make_shared<TestModuleResolver>());

  std::stringstream ss;
  m._save_for_mobile(ss);
  mobile::Module bc = _load_for_mobile(ss);
  auto res =
      bc.get_method("forward")(std::vector<IValue>{torch::zeros({3, 4})});
  const auto& str = res.toStringRef();
  std::string expected = "Hello! Your tensor has 12 elements!";
  AT_ASSERT(str == expected);
}
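// torch::zeros({3, 4}) has 3 * 4 == 12 elements, hence the expected
// "12 elements" string; the torchbind object itself is reconstructed by
// __setstate__ when the mobile module is loaded.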

TEST(LiteInterpreterTest, BuiltinFunction) {
  script::Module m("m");
  auto custom_class_obj =
      make_custom_class<TorchBindLiteInterpreterTestStruct>();
  m.register_attribute("my_obj", custom_class_obj.type(), custom_class_obj);
  m.define(R"(
    def forward(self, x) -> str:
      return self.my_obj.get(x)
  )");

  std::stringstream ss;
  m._save_for_mobile(ss);
  mobile::Module bc = _load_for_mobile(ss);
  auto res =
      bc.get_method("forward")(std::vector<IValue>{torch::zeros({3, 4})});
  // NOLINTNEXTLINE(performance-unnecessary-copy-initialization)
  auto str = res.toStringRef();
  std::string expected = "Hello! Your tensor has 12 elements!";
  AT_ASSERT(str == expected);
}

#if !defined FB_XPLAT_BUILD
TEST(LiteInterpreterTest, GetRuntimeByteCodeVersion) {
  auto runtime_bytecode_version = _get_runtime_bytecode_version();
  AT_ASSERT(
      runtime_bytecode_version ==
      caffe2::serialize::kMaxSupportedBytecodeVersion);
}

TEST(LiteInterpreterTest, GetRuntimeOperatorsVersion) {
  auto runtime_operators_version = _get_runtime_operators_min_max_versions();
  AT_ASSERT(
      runtime_operators_version.first ==
          caffe2::serialize::kMinSupportedFileFormatVersion &&
      runtime_operators_version.second ==
          caffe2::serialize::kMaxSupportedFileFormatVersion);
}

/**
 * The test below is disabled for FB internal xplat builds since
 * BUCK requires us to pass in the script_module_v4.ptl file as a
 * resource dependency of the build rule for this file, and we would
 * need to access it via the C++ Resources API instead of directly
 * reading from disk (which is what the open source build/run does).
 */
TEST(LiteInterpreterTest, GetByteCodeVersion) {
  std::string filePath(__FILE__);
  auto test_model_file_v4 =
      filePath.substr(0, filePath.find_last_of("/\\") + 1);
  test_model_file_v4.append("script_module_v4.ptl");

  auto version_v4 = _get_model_bytecode_version(test_model_file_v4);
  AT_ASSERT(version_v4 == 4);
}

#endif // !defined(FB_XPLAT_BUILD)

TEST(LiteInterpreterTest, GetContainTypes) {
  Module m("m");
  m.define(R"(
    def forward(self):
      return 3
  )");

  std::stringstream ss;
  m._save_for_mobile(ss, {}, true);

  _get_mobile_model_contained_types(ss);
}

namespace {

void compareModelOutput(
    c10::ArrayRef<IValue> actual_result_list,
    const std::vector<IValue>& expect_result_list) {
  AT_ASSERT(actual_result_list.size() == expect_result_list.size());
  AT_ASSERT(
      actual_result_list[0].toTensor().equal(expect_result_list[0].toTensor()));
  AT_ASSERT(
      actual_result_list[1].toTensor().dim() ==
      expect_result_list[1].toTensor().dim());
  AT_ASSERT(
      actual_result_list[2].toTensor().equal(expect_result_list[2].toTensor()));
  AT_ASSERT(
      actual_result_list[3].toTensor().equal(expect_result_list[3].toTensor()));
  ASSERT_EQ(
      actual_result_list[4].toStringRef(), expect_result_list[4].toStringRef());
  ASSERT_EQ(actual_result_list[5].toBool(), expect_result_list[5].toBool());
  ASSERT_EQ(actual_result_list[6].toBool(), expect_result_list[6].toBool());
  ASSERT_EQ(actual_result_list[7].toBool(), expect_result_list[7].toBool());
  AT_ASSERT(
      actual_result_list[8].toTensor().equal(expect_result_list[8].toTensor()));
  ASSERT_EQ(
      actual_result_list[9].toStringRef(), expect_result_list[9].toStringRef());
  ASSERT_EQ(actual_result_list[10].toInt(), expect_result_list[10].toInt());
  ASSERT_EQ(actual_result_list[11].toBool(), expect_result_list[11].toBool());
}

void runAndCheckTorchScriptModel(
    std::stringstream& input_model_stream,
    const std::vector<IValue>& input_data,
    const std::vector<IValue>& expect_result_list,
    const uint64_t expect_version) {
  auto actual_version = _get_model_bytecode_version(input_model_stream);
  AT_ASSERT(actual_version == expect_version);

  // Load and run the backported model, then compare the result with the
  // expected result.
  Module m_mobile = load(input_model_stream);

  auto actual_result = m_mobile.forward(input_data);
  const auto& actual_result_list = actual_result.toTupleRef().elements();
  compareModelOutput(actual_result_list, expect_result_list);
}

void runAndCheckBytecodeModel(
    std::stringstream& input_model_stream,
    const std::vector<IValue>& input_data,
    const std::vector<IValue>& expect_result_list,
    const uint64_t expect_version) {
  auto actual_version = _get_model_bytecode_version(input_model_stream);
  AT_ASSERT(actual_version == expect_version);

  // Load and run the backported model, then compare the result with the
  // expected result.
  Module m_mobile = load(input_model_stream);

  auto actual_result = m_mobile.forward(input_data);
  const auto& actual_result_list = actual_result.toTupleRef().elements();

  compareModelOutput(actual_result_list, expect_result_list);
}

void backportAllVersionCheck(
    std::stringstream& test_model_file_stream,
    std::vector<IValue>& input_data,
    std::vector<IValue>& expect_result_list,
    const uint64_t expect_from_version) {
  auto from_version = _get_model_bytecode_version(test_model_file_stream);
  EXPECT_EQ(from_version, expect_from_version);
  AT_ASSERT(from_version > 0);

  // Backport script_module_v5.ptl to an older version
  constexpr int64_t minimum_to_version = 4;
  auto current_to_version = from_version - 1;

  // Verify that every candidate to_version works as expected: backporting to
  // any version no smaller than minimum_to_version should succeed.
  while (current_to_version >= minimum_to_version) {
    // Do not declare std::stringstream oss outside of the while loop:
    // oss.clear() doesn't reset the stream content, it only clears the error
    // state flags, which would leave a problematic stream. It's cleaner and
    // safer to declare a fresh std::stringstream on each iteration.
    std::stringstream oss;
    bool backPortSuccess =
        _backport_for_mobile(test_model_file_stream, oss, current_to_version);
    AT_ASSERT(backPortSuccess);

    // Check the backported model version
    auto backport_version = _get_model_bytecode_version(oss);
    AT_ASSERT(backport_version == current_to_version);

    // Load and run the backported model, then compare the result with the
    // expected result.
    runAndCheckBytecodeModel(
        oss, input_data, expect_result_list, current_to_version);
    oss.seekg(0, oss.beg);
    runAndCheckTorchScriptModel(
        oss, input_data, expect_result_list, current_to_version);

    current_to_version--;
  }
  // Backporting to minimum version - 1 should fail.
  std::stringstream oss;
  bool backPortSuccess =
      _backport_for_mobile(test_model_file_stream, oss, minimum_to_version - 1);
  AT_ASSERT(!backPortSuccess);
}
} // namespace
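
// A minimal sketch of the backport round trip these helpers exercise,
// assuming `model_stream` holds a freshly saved mobile model (the target
// version 5 below is only illustrative):
//
//   std::stringstream older;
//   if (_backport_for_mobile(model_stream, older, /*to_version=*/5)) {
//     AT_ASSERT(_get_model_bytecode_version(older) == 5);
//     mobile::Module bc = _load_for_mobile(older);  // still loadable
//   }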
|
|
|
|
|
|
2021-07-30 03:09:07 +00:00
|
|
|
#if !defined FB_XPLAT_BUILD
|
2021-05-08 01:11:15 +00:00
|
|
|
TEST(LiteInterpreterTest, BackPortByteCodeModelAllVersions) {
|
|
|
|
|
torch::jit::Module module("m");
|
|
|
|
|
// NOLINTNEXTLINE(cppcoreguidelines-avoid-magic-numbers)
|
|
|
|
|
module.register_parameter("weight", torch::ones({20, 1, 5, 5}), false);
|
|
|
|
|
// NOLINTNEXTLINE(cppcoreguidelines-avoid-magic-numbers)
|
|
|
|
|
module.register_parameter("bias", torch::ones({20}), false);
|
|
|
|
|
module.define(R"(
    def fn(self, x:float=1.0):
      return x

    def forward(self, input):
      x1 = torch.zeros(2, 2)
      x2 = torch.empty_like(torch.empty(2, 2))
      x3 = torch._convolution(input, self.weight, self.bias, [1, 1], [0, 0], [1, 1], False, [0, 0], 1, False, False, True, True)
      # Add the torch.add operator to cover the bytecode version bump from 6
      # to 7; for bytecode version 7, the main change is supporting default
      # arguments together with out arguments
      x = 2 * torch.ones(1)
      h = torch.ones(1)
      torch.add(x, h, out=x)
      device = torch.ones(1, 1).cpu().device.type
      is_cuda = x1.is_cuda
      bool_val = True
      check_is = [] is None
      check_is_not = [1] is not None
      check_not = not bool_val
      num_to_tensor = torch.tensor([self.fn()])
      d = {"a": "abc"}
      check_dict_index = d["a"]
      check_dim = x1.dim()
      return (
        x1, x2, x3, x, device, is_cuda, check_is,
        check_is_not, num_to_tensor, check_dict_index,
        check_dim, check_not
      )
  )");

  torch::jit::Module module_freeze = freeze(module);

  std::stringstream input_model_stream;
  module_freeze._save_for_mobile(
      input_model_stream,
      /*extra_files=*/{},
      /*save_mobile_debug_info=*/false,
      /*use_flatbuffer=*/true);
  std::vector<IValue> input_data =
      std::vector<IValue>({torch::ones({1, 1, 28, 28})});
  std::vector<IValue> expect_result_list;
  expect_result_list.emplace_back(at::ones({2, 2}, ScalarType::Float) * 0);
  expect_result_list.emplace_back(at::ones({2, 2}, ScalarType::Float));
  expect_result_list.emplace_back(
      at::ones({1, 20, 24, 24}, ScalarType::Float) * 26);
  expect_result_list.emplace_back(3 * at::ones({1}));
  // "cpu", False, False, True, tensor(1), "abc", 2, False
  expect_result_list.emplace_back(c10::IValue("cpu"));
  expect_result_list.emplace_back(c10::IValue(false));
  expect_result_list.emplace_back(c10::IValue(false));
  expect_result_list.emplace_back(c10::IValue(true));
  expect_result_list.emplace_back(c10::IValue(at::ones({1})));
  expect_result_list.emplace_back(c10::IValue("abc"));
  expect_result_list.emplace_back(c10::IValue(2));
  expect_result_list.emplace_back(c10::IValue(false));

  backportAllVersionCheck(
      input_model_stream,
      input_data,
      expect_result_list,
      9); // flatbuffer starts at 9
}
#endif // !defined(FB_XPLAT_BUILD)

TEST(LiteInterpreterTest, GetRuntimeOpsAndInfo) {
  auto runtime_ops = _get_runtime_ops_and_info();
  // Ballpark estimate of the minimal number of ops; just used to
  // verify the API returns a reasonably large number.
  AT_ASSERT(runtime_ops.size() > 2900);
}

TEST(LiteInterpreterTest, isCompatibleSuccess) {
  // test trivial success case
  auto runtime_info = RuntimeCompatibilityInfo::get();
  std::unordered_map<std::string, OperatorInfo> model_ops;
  model_ops["aten::add.Scalar"] = OperatorInfo{2};
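  // OperatorInfo{2} records the number of arguments the model specifies for
  // this op, per the bytecode v6 operator-table format described in #56845.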
  std::unordered_set<std::string> types = {"List", "int", "NamedTuple"};
  auto model_info = ModelCompatibilityInfo{
      caffe2::serialize::kMaxSupportedBytecodeVersion,
      model_ops,
      types,
      _get_runtime_bytecode_min_max_versions().first};
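  // The last field is the model's operator version; using the runtime's
  // minimum supported operator version keeps this case compatible (compare
  // the operator-version failure case below, which sets it to 0).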

  AT_ASSERT(
      is_compatible(runtime_info, model_info).status ==
      ModelCompatibilityStatus::OK);
}

TEST(LiteInterpreterTest, isCompatibleFail) {
  // test trivial failure due to ops
  std::unordered_map<std::string, OperatorInfo> model_ops;
  model_ops["aten::add.Scalar"] = OperatorInfo{2};
  auto model_info = ModelCompatibilityInfo{
      caffe2::serialize::kMaxSupportedBytecodeVersion, model_ops};
  std::unordered_map<std::string, OperatorInfo> runtime_ops;
  runtime_ops["aten::add.Int"] = OperatorInfo{2};
  auto runtime_info = RuntimeCompatibilityInfo{
      std::pair<uint64_t, uint64_t>(
          caffe2::serialize::kMinSupportedBytecodeVersion,
          caffe2::serialize::kMaxSupportedBytecodeVersion),
      runtime_ops,
      _get_mobile_supported_types()};

  auto result = is_compatible(runtime_info, model_info);
  AT_ASSERT(result.status == ModelCompatibilityStatus::ERROR);
  AT_ASSERT(
      result.errors[0] ==
      "Operator 'aten::add.Scalar' missing from runtime (not found)");

  // test trivial failure due to bytecode greater than max supported bytecode
  // version
  runtime_ops["aten::add.Scalar"] = OperatorInfo{2};
  runtime_info = RuntimeCompatibilityInfo{
      std::pair<uint64_t, uint64_t>(
          caffe2::serialize::kMinSupportedBytecodeVersion,
          caffe2::serialize::kMaxSupportedBytecodeVersion),
      runtime_ops,
      _get_mobile_supported_types()};
  model_info.bytecode_version =
      caffe2::serialize::kMaxSupportedBytecodeVersion + 1;

  result = is_compatible(runtime_info, model_info);
  AT_ASSERT(result.status == ModelCompatibilityStatus::ERROR);

  // test trivial failure due to bytecode less than min supported bytecode
  // version
  runtime_ops["aten::add.Scalar"] = OperatorInfo{2};
  runtime_info = RuntimeCompatibilityInfo{
      std::pair<uint64_t, uint64_t>(
          caffe2::serialize::kMinSupportedBytecodeVersion,
          caffe2::serialize::kMaxSupportedBytecodeVersion),
      runtime_ops,
      _get_mobile_supported_types()};
  model_info.bytecode_version =
      caffe2::serialize::kMinSupportedBytecodeVersion - 1;

  result = is_compatible(runtime_info, model_info);
  AT_ASSERT(result.status == ModelCompatibilityStatus::ERROR);

  // test trivial failure due to type
  runtime_info = RuntimeCompatibilityInfo::get();
  std::unordered_set<std::string> types = {"List", "int", "Sequence"};
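  // "Sequence" is presumably absent from _get_mobile_supported_types(), which
  // is what makes this case incompatible below.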

  model_info = ModelCompatibilityInfo{
      caffe2::serialize::kMaxSupportedBytecodeVersion,
      model_ops,
      types,
      _get_runtime_bytecode_min_max_versions().first};

  AT_ASSERT(
      is_compatible(runtime_info, model_info).status ==
      ModelCompatibilityStatus::ERROR);

  // test trivial failure due to operator version
  runtime_info = RuntimeCompatibilityInfo::get();
  model_info = ModelCompatibilityInfo{
      caffe2::serialize::kMaxSupportedBytecodeVersion, model_ops, {}, 0};

  AT_ASSERT(
      is_compatible(runtime_info, model_info).status ==
      ModelCompatibilityStatus::ERROR);
}

TEST(LiteInterpreterTest, Eval) {
  std::vector<torch::jit::IValue> inputs;

  Module m("m");
  m.define(R"(
    def __init__(self, x):
      self.training = True

    def forward(self, input):
      return torch.dropout(input, 1.0, self.training)
  )");

  // NOLINTNEXTLINE(cppcoreguidelines-avoid-magic-numbers,modernize-use-emplace)
  inputs.push_back(torch::ones({1, 1, 28, 28}));
  m.eval();
  auto outputref = m.forward(inputs).toTensor();

  // save m in training mode to make sure that mobile eval() will correctly
  // change back to eval mode
  m.train();
  std::stringstream ss;
  m._save_for_mobile(ss);
  mobile::Module bc = _load_for_mobile(ss);
  bc.eval();
  IValue res;
  for (int i = 0; i < 3; ++i) {
    res = bc.get_method("forward")(inputs);
  }
  auto output = res.toTensor();
  AT_ASSERT(outputref.dim() == output.dim());
  AT_ASSERT(
      outputref[0][0][0][0].item<int>() == output[0][0][0][0].item<int>());
}

TEST(LiteInterpreterTest, FindWrongMethodName) {
  Module m("m");
  m.register_parameter("foo", torch::ones({}), false);
  m.define(R"(
    def add(self, x):
      b = 4
      return self.foo + x + b
  )");
  std::stringstream ss;
  m._save_for_mobile(ss);
  mobile::Module bc = _load_for_mobile(ss);
  ASSERT_TRUE(bc.find_method("forward") == std::nullopt);
}

TEST(LiteInterpreterTest, FindAndRunMethod) {
  Module m("m");
  m.register_parameter("foo", torch::ones({}), false);
  m.define(R"(
    def add_it(self, x):
      b = 4
      return self.foo + x + b
  )");

  std::vector<IValue> inputs;
  auto minput = 5 * torch::ones({});
  inputs.emplace_back(minput);
  auto ref = m.get_method("add_it")(inputs);

  std::stringstream ss;
  m._save_for_mobile(ss);
  mobile::Module bc = _load_for_mobile(ss);
  IValue res;
  for (int i = 0; i < 3; ++i) {
    auto bcinputs = inputs;
    auto method = bc.find_method("add_it");
    AT_ASSERT(method != std::nullopt);
    res = (*method)(std::move(bcinputs));
  }

  auto resd = res.toTensor().item<float>();
  auto refd = ref.toTensor().item<float>();
  AT_ASSERT(resd == refd);
}

TEST(LiteInterpreterTest, RunMethodVariadic) {
  Module m("m");
  m.register_parameter("foo", torch::ones({}), false);
  m.define(R"(
    def add_three(self, x, y):
      return self.foo + x + y
  )");

  std::vector<IValue> inputs;
  auto inputx = 5 * torch::ones({});
  auto inputy = 4 * torch::ones({});
  auto ref = m.run_method("add_three", inputx, inputy);

  std::stringstream ss;
  m._save_for_mobile(ss);
  mobile::Module bc = _load_for_mobile(ss);
  IValue res = bc.run_method("add_three", inputx, inputy);

  auto resd = res.toTensor().item<float>();
  auto refd = ref.toTensor().item<float>();
  AT_ASSERT(resd == refd);
}

TEST(LiteInterpreterTest, DuplicateSetState) {
  Module m("M");
  m.register_parameter("foo", torch::ones({}), false);
  m.define(R"(
    def __getstate__(self):
      return self.foo + self.foo
    def __setstate__(self, a):
      self.foo = a
    def forward(self, x):
      b = 4
      return self.foo + x + b
  )");

  Module b("B");
  b.register_module("M0", m);
  b.register_module("M1", m);
  b.define(R"(
    def forward(self, x):
      return self.M0.forward(x) + self.M1.forward(x)
  )");

  std::stringstream ss;
  m._save_for_mobile(ss);
  mobile::Module bc = _load_for_mobile(ss);
  const auto methods = bc.get_methods();
  const size_t expected_n = 3;
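  // The three methods on the saved module m: __getstate__, __setstate__, and
  // forward.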
  ASSERT_EQ(methods.size(), expected_n);
}

TEST(LiteInterpreterTest, ExtraFiles) {
  const auto script = R"JIT(
    def forward(self):
      x = torch.rand(5, 5)
      x = x.mm(x)
      return x
  )JIT";

  auto module =
      std::make_shared<Module>("Module", std::make_shared<CompilationUnit>());
  module->define(script);
  std::ostringstream oss;
  std::unordered_map<std::string, std::string> extra_files;
  extra_files["metadata.json"] = "abc";
  extra_files["mobile_info.json"] = "{\"key\": 23}";
  module->_save_for_mobile(oss, extra_files);

  std::istringstream iss(oss.str());
  std::unordered_map<std::string, std::string> loaded_extra_files;
  loaded_extra_files["metadata.json"] = "";
  torch::jit::_load_for_mobile(iss, torch::kCPU, loaded_extra_files);
  ASSERT_EQ(loaded_extra_files["metadata.json"], "abc");

  loaded_extra_files.clear();
  std::vector<std::string> all_files =
      caffe2::serialize::PyTorchStreamReader(&iss).getAllRecords();

  for (auto& file_name : all_files) {
    if (file_name.find("extra/") == 0) {
      loaded_extra_files[file_name.substr(6)] = "";
    }
  }
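  // substr(6) strips the "extra/" prefix (6 characters), leaving the bare
  // extra-file name as the lookup key.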
  iss.seekg(0, iss.beg);
  torch::jit::_load_for_mobile(iss, torch::kCPU, loaded_extra_files);
  ASSERT_EQ(loaded_extra_files["metadata.json"], "abc");
  ASSERT_EQ(loaded_extra_files["mobile_info.json"], "{\"key\": 23}");

  std::unordered_map<std::string, std::string>
      loaded_extra_files_without_explicit_mapping;
  iss.seekg(0, iss.beg);
  torch::jit::_load_for_mobile(
      iss,
      torch::kCPU,
      loaded_extra_files_without_explicit_mapping,
      MobileModuleLoadOptions::PARSE_ALL_EXTRA_FILE_MAPS);
  ASSERT_EQ(
      loaded_extra_files_without_explicit_mapping["metadata.json"], "abc");
  ASSERT_EQ(
      loaded_extra_files_without_explicit_mapping["mobile_info.json"],
      "{\"key\": 23}");
}

TEST(LiteInterpreterTest, OpNameExportFetchRootOperators) {
  torch::jit::Module m("m");
  m.register_parameter("weight", torch::ones({20, 1, 5, 5}), false);
  m.register_parameter("bias", torch::ones({20}), false);
  m.define(R"(
    def forward(self, input):
      x1 = torch.zeros(2, 2)
      x2 = torch.empty_like(torch.empty(2, 2))
      x3 = torch._convolution(input, self.weight, self.bias, [1, 1], [0, 0], [1, 1], False, [0, 0], 1, False, False, True, True)
      return (x1, x2, x3)
  )");
  m.eval();

  std::stringstream ss;
  m._save_for_mobile(ss);

  torch::jit::mobile::Module ptl_model = torch::jit::_load_for_mobile(ss);
  std::set<std::string> operator_names =
      torch::jit::mobile::_export_operator_list(ptl_model);
  std::set<std::string> expected_operator_names = {
      "aten::_convolution",
      "aten::empty.memory_format",
      "aten::empty_like",
      "aten::zeros",
  };
  EXPECT_EQ(operator_names, expected_operator_names)
      << "Expected the root operator lists to be the same";
}

TEST(LiteInterpreterTest, DefaultArgsConv) {
  auto s = std::getenv("PYTORCH_TEST_WITH_TSAN");
  if (s && strcmp(s, "1") == 0)
    return;

  std::vector<torch::jit::IValue> inputs;

  Module m("m");
  m.register_parameter("weight", torch::ones({20, 1, 5, 5}), false);
  m.register_parameter("bias", torch::ones({20}), false);
  m.define(R"(
    def forward(self, input):
      return torch.conv2d(input, self.weight, self.bias, [1, 1], [0, 0], [1, 1], 1)
  )");

  inputs.push_back(torch::ones({1, 1, 28, 28}));

  auto outputref = m.forward(inputs).toTensor();

  std::stringstream ss;
  m._save_for_mobile(ss);
  mobile::Module bc = _load_for_mobile(ss);
  IValue res;
  for (int i = 0; i < 1; ++i) {
    res = bc.get_method("forward")(inputs);
  }
  auto output = res.toTensor();
  AT_ASSERT(outputref.dim() == output.dim());
  AT_ASSERT(output.equal(outputref));
}

TEST(RunTimeTest, ParseBytecode) {
  // A small example showing a bytecode that can be used independently of
  // PyTorch TorchScript serialization (unpickler, etc.) and the operator
  // library. It has basic control flow (if, else) and basic data
  // orchestration (list construction). The original PyTorch program:

  // class Module(torch.nn.Module):
  //
  //   def __init__(self) -> None:
  //     super().__init__()
  //
  //   def forward(self, x: int, h: int, xfirst: bool):
  //     if xfirst:
  //       return [x, h]
  //     else:
  //       return [h, x]

  // 1. Prepare the bytecode. In reality it can come from a customized
  // deserializer.
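  // Rough gloss of the opcodes used below, based on the mobile interpreter's
  // instruction set (see torch/csrc/jit/runtime/instruction.h):
  //   STOREN X N : pop N values into registers X..X+N-1
  //   DROPR  X   : drop the value held in register X
  //   LOAD   X   : push a copy of register X onto the stack
  //   MOVE   X   : push register X onto the stack and clear the register
  //   JF     X   : pop a bool; jump X instructions ahead if it is false
  //   JMP    X   : unconditional relative jump by X
  //   LIST_CONSTRUCT X N : pop N values, build a list of type-table entry X
  //   STORE  X   : pop the top of the stack into register X
  //   RET        : return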
  std::vector<IValue> instructions{
      to_tuple({"STOREN", 1, 4}),
      to_tuple({"DROPR", 1, 0}),
      to_tuple({"MOVE", 4, 0}),
      to_tuple({"JF", 5, 0}),
      to_tuple({"LOAD", 2, 0}),
      to_tuple({"LOAD", 3, 0}),
      to_tuple({"LIST_CONSTRUCT", 0, 2}),
      to_tuple({"JMP", 4, 0}),
      to_tuple({"LOAD", 3, 0}),
      to_tuple({"LOAD", 2, 0}),
      to_tuple({"LIST_CONSTRUCT", 1, 2}),
      to_tuple({"STORE", 5, 0}),
      to_tuple({"DROPR", 3, 0}),
      to_tuple({"DROPR", 2, 0}),
      to_tuple({"MOVE", 5, 0}),
      to_tuple({"RET", 0, 0}),
  };
  std::vector<IValue> operators; // empty for this example
  std::vector<IValue> constants; // empty for this example

  std::vector<IValue> types{"List[int]", "List[int]"};
  // 2. Parse the function
  std::string function_name("test_function");
  auto function = std::unique_ptr<mobile::Function>(
      new mobile::Function(c10::QualifiedName(function_name)));
  c10::ivalue::TupleElements debug_handles_m_tuple;
  parseInstructions(
      function_name,
      std::move(*c10::ivalue::Tuple::create(instructions)).elements(),
      debug_handles_m_tuple,
      function.get());
  parseTypes(c10::ivalue::Tuple::create(types)->elements(), function.get());
  const size_t rsize = 5;
  parseRegisterSize(rsize, function.get());

  // 3. Prepare inputs and run the function.
  // Note that the first input is reserved for the Module object.
  // Since this is a function test and a Module object is not required,
  // a dummy IValue (0) is added here.
  std::vector<IValue> inputs{0, 1, 2, true};
  function->run(inputs);
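  // run() executes on `inputs` as the value stack, so the return value is
  // read back from slot 0 after the call.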
  auto output = inputs[0].toList();
  ASSERT_EQ(output[0], 1);
  ASSERT_EQ(output[1], 2);

  std::vector<IValue> inputs1{0, 1, 2, false};
  function->run(inputs1);
  auto output1 = inputs1[0].toList();
  ASSERT_EQ(output1[0], 2);
  ASSERT_EQ(output1[1], 1);
}

TEST(RunTimeTest, ParseOperator) {
  // A small example showing a bytecode that can be used independently of
  // PyTorch TorchScript serialization (unpickler, etc.) and the operator
  // library. It has one operator and we should be able to register it.
  // The original PyTorch program:

  // class Add(torch.nn.Module):
  //   def __init__(self) -> None:
  //     super().__init__()
  //
  //   def forward(self, a, b):
  //     return a + b

  // 1. Prepare the bytecode. In reality it can come from a customized
  // deserializer.
  std::vector<IValue> instructions{
      to_tuple({"STOREN", 1, 3}),
      to_tuple({"DROPR", 1, 0}),
      to_tuple({"MOVE", 2, 0}),
      to_tuple({"MOVE", 3, 0}),
      to_tuple({"OP", 0, 0}),
      to_tuple({"RET", 0, 0}),
  };
  std::vector<IValue> operators{
      to_tuple({"aten::add", "Tensor", 2}),
  };
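  // Each operator entry is (name, overload_name, num_specified_args), the
  // bytecode v6+ operator-table layout described in #56845; `OP 0` above
  // resolves to aten::add.Tensor with two arguments specified, letting the
  // runtime push the remaining default (alpha) itself.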
  std::vector<IValue> constants{
      to_tuple({1}),
  };
  // 2. Parse the function
  std::string function_name("test_function");
  auto function = std::unique_ptr<mobile::Function>(
      new mobile::Function(c10::QualifiedName(function_name)));
  c10::ivalue::TupleElements debug_handles_m_tuple;
  parseInstructions(
      function_name,
      std::move(*c10::ivalue::Tuple::create(instructions)).elements(),
      debug_handles_m_tuple,
      function.get());
  parseOperators(
      std::move(*c10::ivalue::Tuple::create(operators)).elements(),
      1,
      function.get());
  const size_t rsize = 5;
  parseRegisterSize(rsize, function.get());

  // 3. Prepare inputs and run the function.
  // Note that the first input is reserved for the Module object.
  // Since this is a function test and a Module object is not required,
  // a dummy IValue (0) is added here.
  std::vector<IValue> inputs{0, at::tensor(1), at::tensor(2)};
  function->run(inputs);
  auto output = inputs[0];
  ASSERT_EQ(output, at::tensor(3));
}

namespace {
void testLiteModuleCompareResultTensors(
    Module& m,
    const std::vector<torch::jit::IValue>& inputs,
    const std::string& method_name = "forward") {
  auto outputref = m.get_method(method_name)(inputs).toTensor();

  std::stringstream ss;
  m._save_for_mobile(ss);
  mobile::Module bc = _load_for_mobile(ss);
  IValue res;
  for (int i = 0; i < 3; ++i) {
    res = bc.get_method(method_name)(inputs);
  }
  auto output = res.toTensor();
  AT_ASSERT(outputref.dim() == output.dim());
  AT_ASSERT(output.equal(outputref));
}

void testDefaultArgsPinv(int num_args) {
  Module m("m");
  if (num_args == 1) {
    m.define(R"(
      def forward(self, input):
        return torch.linalg_pinv(input)
    )");
  } else if (num_args == 2) {
    m.define(R"(
      def forward(self, input):
        return torch.linalg_pinv(input, 1e-5)
    )");
  } else if (num_args == 3) {
    m.define(R"(
      def forward(self, input):
        return torch.linalg_pinv(input, 1e-5, True)
    )");
  }

  std::vector<torch::jit::IValue> inputs;
  const int N = 28;
  auto input = torch::range(1, N * N, 1);
  input[0] = 1; // a more stable matrix
  input = input.view({N, N});
  inputs.push_back(input);
  testLiteModuleCompareResultTensors(m, inputs);
}
} // namespace

#if !defined(FB_XPLAT_BUILD)

TEST(LiteInterpreterTest, DefaultArgsPinv) {
  // Test with different numbers of specified arguments.
  // Arguments not specified take the default value.
  for (int num_args = 1; num_args <= 3; ++num_args) {
    testDefaultArgsPinv(num_args);
  }

  // bytecode with one specified argument:
  // (6,
  //  ('__torch__.m.forward',
  //   (('instructions',
  //     (('STOREN', 1, 2),
  //      ('DROPR', 1, 0),
  //      ('MOVE', 2, 0),
  //      ('OP', 0, 0),
  //      ('RET', 0, 0))),
  //    ('operators', (('aten::linalg_pinv', '', 1),)),
  //    ('constants', (False, 1e-15)), # default constants are not used
  //    ('types', ()),
  //    ('register_size', 2)),
  //   (('arguments',
  //     ((('name', 'self'), ('type', '__torch__.m'), ('default_value', None)),
  //      (('name', 'input'), ('type', 'Tensor'), ('default_value', None)))),
  //    ('returns',
  //     ((('name', ''), ('type', 'Tensor'), ('default_value', None)),)))))

  // bytecode with 2 specified arguments:
  // (6,
  //  ('__torch__.m.forward',
  //   (('instructions',
  //     (('STOREN', 1, 2),
  //      ('DROPR', 1, 0),
  //      ('MOVE', 2, 0),
  //      ('LOADC', 1, 0), # added LOADC for the specified argument
  //      ('OP', 0, 0),
  //      ('RET', 0, 0))),
  //    ('operators', (('aten::linalg_pinv', '', 2),)),
  //    ('constants', (False, 1e-05)), # updated constant table
  //    ('types', ()),
  //    ('register_size', 2)),
  //   (('arguments',
  //     ((('name', 'self'), ('type', '__torch__.m'), ('default_value', None)),
  //      (('name', 'input'), ('type', 'Tensor'), ('default_value', None)))),
  //    ('returns',
  //     ((('name', ''), ('type', 'Tensor'), ('default_value', None)),)))))

  // bytecode with 3 specified arguments:
  // (6,
  //  ('__torch__.m.forward',
  //   (('instructions',
  //     (('STOREN', 1, 2),
  //      ('DROPR', 1, 0),
  //      ('MOVE', 2, 0),
  //      ('LOADC', 1, 0),
  //      ('LOADC', 0, 0),
  //      ('OP', 0, 0),
  //      ('RET', 0, 0))),
  //    ('operators', (('aten::linalg_pinv', '', 3),)),
  //    ('constants', (True, 1e-05)),
  //    ('types', ()),
  //    ('register_size', 2)),
  //   (('arguments',
  //     ((('name', 'self'), ('type', '__torch__.m'), ('default_value', None)),
  //      (('name', 'input'), ('type', 'Tensor'), ('default_value', None)))),
  //    ('returns',
  //     ((('name', ''), ('type', 'Tensor'), ('default_value', None)),)))))
}

TEST(LiteInterpreterTest, DefaultArgsTensorinvSpecifyDefault) {
[PyTorch Mobile][Forward/backward compatibility] Number of arguments for operators (#56845)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56845
Handle forward/backward compatibility caused by added default arguments in mobile. As an example,
In older version, operator aten::foo's schema is
```
foo(Tensor a, Tensor b) -> Tensor
```
In the new version, the schema is updated to
```
foo(Tensor a, Tensor b, int groups=1) -> Tensor
```
## Model file
Serialize the number of specified arguments to each operator into the bytecode operator table. Before the operator table contains operator name and overload name:
```
('operators', (('aten::foo', ''),))
```
Now the number of specified arguments is added:
```
# bytecode version 6
('operators', (('aten::foo', '', 2),))
```
where "2" means the number of specified arguments.
Since there's bytecode schema change, the bytecode version number is bumped. This PR is to be landed after #56002 , where the version number is bumped from 4 to 5. This PR bumps the version number from 5 to 6.
## Runtime and backward compatibility
When the operator is found (either jit or c10), we have the OperatorHandle, where the operator schema can be accessed by
```
op.value().schema().arguments()
```
Adaptation is implemented to handle backward compatibility. For the example above, the new runtime holds the updated schema:
```
foo(Tensor a, Tensor b, int groups=1) -> Tensor
```
Whereas the model file carries
```
(('aten::foo', ''), 2)
```
We can implement a wrapper around the original function pointer to push the default argument to the stack.
## Deliver time and forward compatibility
At model delivery time, two checks can be done:
### Operator check
Two APIs to be provided:
* Runtime: An API to get a runtime’s ops and their schemas (i.e. the # of args). D27920185(WIP)
* Model: An API to get a model’s ops and their schema requirements (i.e. the # of args required).
The APIs can be used to check
* runtime.ops() is a superset of model.ops()
* for each op in model.ops() validate their schemas are compatible with those in runtime.ops() -- i.e. the # args required in a model op are <= # args in the runtime op.
Note that only root ops in the model needs to be checked here. For transient ops it's not necessary. For example, if a root op, "aten::root" calls "aten::foo", it's "aten::root"'s responsibility to adapt to "aten::foo"'s change, or "aten::root" itself needs to be updated too.
### Bytecode version backport (PR coming)
When delivering a model with bytecode v6, if the runtime only works with bytecode v5 and lower, backport is needed.
* The number of arguments is removed from the operator table
* The bytecode version is changed from 6 to 5
Note that this backport is a pure format change, it does not guarantee the backported model always runs in old runtime. The operator check mentioned before should be done first, before it’s back ported to v5.
Test Plan: Imported from OSS
Reviewed By: gmagogsfm
Differential Revision: D27986544
Pulled By: iseeyuan
fbshipit-source-id: 143e19d4798cfb96b65095538dd648eead4e3fda
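
// A minimal sketch, not the actual mobile runtime code, of the backward
// compatibility adaptation described above: when the model specified fewer
// arguments than the runtime schema declares, the trailing defaults are read
// from the schema and pushed onto the stack before the kernel runs.
// padDefaultArgs is our name for the idea.
static void padDefaultArgs(
    const c10::FunctionSchema& schema,
    std::vector<c10::IValue>& stack,
    size_t num_specified_args) {
  for (size_t i = num_specified_args; i < schema.arguments().size(); ++i) {
    const auto& arg = schema.arguments()[i];
    TORCH_CHECK(
        arg.default_value().has_value(),
        "Argument ",
        arg.name(),
        " is missing and has no default value");
    stack.emplace_back(*arg.default_value());
  }
}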

TEST(LiteInterpreterTest, DefaultArgsTensorinvSpecifyDefault) {
  // The second argument is specified, but its value is the same as the default
  // value. It's treated as "not specified", since the value can be fetched
  // from the schema.
  Module m("m");
  m.define(R"(
    def forward(self, input):
      return torch.linalg_tensorinv(input, 2)
)");
|
|
|
|
|
torch::jit::MobileCode code(m.get_method("forward").graph(), "forward");
|
|
|
|
|
auto arg_nums = code.op_to_num_specified_args();
|
|
|
|
|
ASSERT_EQ(arg_nums.size(), 1);
|
2021-10-18 05:13:48 +00:00
|
|
|
ASSERT_EQ(arg_nums["aten::linalg_tensorinv"], 1);
  std::vector<torch::jit::IValue> inputs;
  const int N = 4;
  auto input = torch::rand({N, N, N, N});
  inputs.push_back(input);
  testLiteModuleCompareResultTensors(m, inputs);
}

void testDefaultArgsPinvWithOutArg(int num_args) {
  Module m("m");
  if (num_args == 1) {
    m.define(R"(
      def forward(self, input):
        return torch.linalg_pinv(input, out=input)
    )");
  } else if (num_args == 2) {
    m.define(R"(
      def forward(self, input):
        return torch.linalg_pinv(input, 1e-5, out=input)
    )");
  } else if (num_args == 3) {
    m.define(R"(
      def forward(self, input):
        return torch.linalg_pinv(input, 1e-5, True, out=input)
    )");
  }

  const int N = 28;
  auto input = torch::range(1, N * N, 1);
  input[0] = 10000; // a more stable matrix
  input = input.view({N, N});
  auto ref = m.run_method("forward", input);
  TORCH_CHECK(!input.equal(torch::range(1, N * N, 1)));
  TORCH_CHECK(input.equal(ref.toTensor()));
}

TEST(LiteInterpreterTest, DefaultArgsPinvWithOutArg) {
  // Test with different numbers of specified arguments plus an out arg.
  // Arguments that are not specified take their default values.
  for (int num_args = 1; num_args <= 3; ++num_args) {
    testDefaultArgsPinvWithOutArg(num_args);
  }
}

TEST(LiteInterpreterTest, DefaultArgsWithOutArg) {
  Module m("m");
  m.define(R"(
    def forward(self, x, h):
      torch.add(x, h, out=x)
  )");

  std::vector<IValue> inputs;
  auto input_x = 2 * torch::ones({});
  auto input_h = torch::ones({});
  auto ref = m.run_method("forward", input_x, input_h);

  std::stringstream ss;

  m._save_for_mobile(ss, {}, true);
  mobile::Module bc = _load_for_mobile(ss);
  bc.run_method("forward", input_x, input_h);
  AT_ASSERT(input_x.equal(4 * torch::ones({})));

  auto ops = _get_model_ops_and_info(ss);
  auto op = ops.find("aten::add.out");
  TORCH_CHECK(
      op != ops.end() && op->second.num_schema_args.has_value() &&
      op->second.num_schema_args.value() == 3);
}
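
// A hedged sketch of the delivery-time operator check described in the
// summary earlier in this file, built on _get_model_ops_and_info (used in the
// test above) and assuming the companion _get_runtime_ops_and_info() API from
// runtime_compatibility.h: every root op the model needs must exist in the
// runtime, and must not require more arguments than the runtime schema has.
static bool modelOpsCompatibleWithRuntime(std::stringstream& model_stream) {
  auto model_ops = _get_model_ops_and_info(model_stream);
  auto runtime_ops = _get_runtime_ops_and_info();
  for (const auto& model_op : model_ops) {
    auto runtime_op = runtime_ops.find(model_op.first);
    if (runtime_op == runtime_ops.end()) {
      return false; // the runtime does not know this root op at all
    }
    if (model_op.second.num_schema_args.has_value() &&
        runtime_op->second.num_schema_args.has_value() &&
        model_op.second.num_schema_args.value() >
            runtime_op->second.num_schema_args.value()) {
      return false; // the model requires more args than the runtime provides
    }
  }
  return true;
}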

TEST(LiteInterpreterTest, TestExceptionStackWithTwoLevelModuleHierarchy) {
  Module a("A");
  a.define(R"(
    def bar(self, x, y):
      return x + y
  )");
  Module b("B");
  b.register_module("A0", a);
  b.define(R"(
    def foo(self, x, y):
      return self.A0.bar(x, y) + 2
  )");
  Module c("C");
  c.register_module("B0", b);
  c.define(R"(
    def forward(self, x, y):
      return self.B0.foo(x, y) + 3
  )");

  std::vector<IValue> inputs;
  inputs.emplace_back(torch::rand({2, 4}));
  inputs.emplace_back(torch::rand({13, 9}));

  std::stringstream ss;
  c._save_for_mobile(ss, ExtraFilesMap(), true);
  auto lite_m = _load_for_mobile(ss);
  std::string error_pattern = R"(
  Module hierarchy:top(C)::<unknown>.B0(B)::foo.A0(A)::bar.aten::add
Traceback of TorchScript (most recent call last):
  File "<string>", line 3, in <unknown>

    def forward(self, x, y):
      return self.B0.foo(x, y) + 3
             ~~~~~~~~~~~ <--- HERE

  File "<string>", line 3, in foo

    def foo(self, x, y):
      return self.A0.bar(x, y) + 2
             ~~~~~~~~~~~ <--- HERE

  File "<string>", line 3, in bar

    def bar(self, x, y):
      return x + y
             ~~~~~ <--- HERE
)";
  ASSERT_THROWS_WITH_MESSAGE(lite_m.forward(inputs), error_pattern);
}

#endif // !defined(FB_XPLAT_BUILD)

namespace {
static auto reg =
    torch::class_<TorchBindLiteInterpreterTestStruct>(
        "_TorchScriptTesting",
        "_LiteInterpreterTest")
        .def(torch::init<>())
        .def("get", &TorchBindLiteInterpreterTestStruct::get)
        .def_pickle(
            // __getstate__
            [](const c10::intrusive_ptr<TorchBindLiteInterpreterTestStruct>&
                   self) -> int64_t { return 0; },
            // __setstate__
            [](int64_t state) {
              return c10::make_intrusive<TorchBindLiteInterpreterTestStruct>();
            });
} // namespace

TEST(LiteInterpreterTest, OperatorCacheDifferentiatesDefaultArgs) {
  // Create 3 methods:
  //
  // 1. forward() returns a tensor with dtype=torch.int64 (4)
  // 2. forward2() returns a tensor with dtype=torch.float32 (6)
  // 3. forward3() returns a tensor with dtype=torch.float32, but
  //    the dtype is inferred from the input tensor's dtype
  //
  // If caching works correctly, the results from the full-jit module and
  // the lite module will be the same. They can differ if we fail to ignore
  // the cache entry for an operator that was invoked with a different
  // number of arguments (see the sketch after this test).
  Module m("m");
  m.define(R"(
    def forward(self):
      ret1 = torch.new_empty(torch.zeros(10), [10], dtype=4)
      return ret1.fill_(25)
  )");
  m.define(R"(
    def forward2(self):
      ret1 = torch.new_empty(torch.zeros(10), [10], dtype=6)
      return ret1.fill_(32.0)
  )");
  m.define(R"(
    def forward3(self):
      ret1 = torch.new_empty(torch.zeros(10), [10])
      return ret1.fill_(12.0)
  )");

  std::vector<torch::jit::IValue> inputs;
  testLiteModuleCompareResultTensors(m, inputs, "forward");
  testLiteModuleCompareResultTensors(m, inputs, "forward2");
  testLiteModuleCompareResultTensors(m, inputs, "forward3");
}
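
// A hypothetical illustration, with types of our own choosing rather than the
// runtime's, of why the operator cache exercised above must key on the number
// of specified arguments and not just on the operator name: the same name can
// resolve to differently adapted functions.
struct OpCacheKeySketch {
  std::string qualified_name;
  c10::optional<int> num_specified_args;
  bool operator==(const OpCacheKeySketch& other) const {
    return qualified_name == other.qualified_name &&
        num_specified_args == other.num_specified_args;
  }
};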

TEST(RunTimeTest, RuntimeCall) {
  //  def call(x):
  //    return x + x
  //
  //  def forward(a):
  //    x = a + call(a)
  //    y = a + call(x)
  //    return y

  std::vector<IValue> instructionsCall{
      to_tuple({"STORE", 1, 0}),
      to_tuple({"LOAD", 1, 0}),
      to_tuple({"MOVE", 1, 0}),
      to_tuple({"LOADC", 0, 0}),
      to_tuple({"OP", 0, 0}),
      to_tuple({"RET", 0, 0}),
  };
  std::vector<IValue> instructionsFoo{
      to_tuple({"STORE", 1, 0}),
      to_tuple({"LOAD", 1, 0}),
      to_tuple({"LOAD", 1, 0}),
      to_tuple({"MOVE", 1, 0}),
      to_tuple({"CALL", 0, 0}),
      to_tuple({"LOADC", 0, 0}),
      to_tuple({"OP", 0, 0}),
      to_tuple({"CALL", 0, 0}),
      to_tuple({"LOADC", 0, 0}),
      to_tuple({"OP", 0, 0}),
      to_tuple({"RET", 0, 0}),
  };
  std::vector<IValue> operatorsFoo{
      to_tuple({"aten::add", "Tensor", 3}),
  };
  std::vector<IValue> constantsFoo{
      1,
  };
  std::vector<IValue> operatorsCall{
      to_tuple({"aten::add", "Tensor", 3}),
  };
  std::vector<IValue> constantsCall{
      1,
  };

  auto foo = std::make_unique<mobile::Function>(c10::QualifiedName("foo"));
  c10::ivalue::TupleElements debug_handles_m_tuple;
  parseInstructions(
      "foo",
      std::move(*c10::ivalue::Tuple::create(instructionsFoo)).elements(),
      debug_handles_m_tuple,
      foo.get());
  parseOperators(
      std::move(*c10::ivalue::Tuple::create(operatorsFoo)).elements(),
      1,
      foo.get());
  parseConstants(
      std::move(*c10::ivalue::Tuple::create(constantsFoo)).elements(),
      foo.get());
  const size_t rsize = 5;
  parseRegisterSize(rsize, foo.get());

  auto call = std::make_unique<mobile::Function>(c10::QualifiedName("call"));
  parseInstructions(
      "call",
      std::move(*c10::ivalue::Tuple::create(instructionsCall)).elements(),
      debug_handles_m_tuple,
      call.get());
  parseOperators(
      std::move(*c10::ivalue::Tuple::create(operatorsCall)).elements(),
      1,
      call.get());
  parseConstants(
      std::move(*c10::ivalue::Tuple::create(constantsCall)).elements(),
      call.get());
  parseRegisterSize(rsize, call.get());

  foo->append_function(*call);

  std::vector<IValue> inputs{at::tensor(1)};
  foo->run(inputs);
  auto output = inputs[0];
  ASSERT_EQ(output, at::tensor(7));
}

TEST(LiteInterpreterTest, OperatorSize1) {
  Module m("m");
  m.define(R"(
    def forward(self, input: Tensor, scale: float):
      return torch.upsample_nearest2d(input, [1, 1], float(scale), float(scale))
  )");

  std::stringstream ss;
  m._save_for_mobile(ss);
  mobile::Module bc = _load_for_mobile(ss);
  const auto& func = bc.get_method("forward").function();
  ASSERT_EQ(
      func.get_code().operator_input_sizes_.size(),
      func.get_code().operators_.size());
}

TEST(LiteInterpreterTest, OperatorTest2) { // NOLINT (use =delete in gtest)
  const std::vector<std::string> test_programs{
      // test invoking a method with default parameter
      R"(
        def test_func(self, x, b: int = 4):
          return self.foo + x + b
      )",
      // inner method call with default parameter (gets inlined)
      R"(
        def add_with_default_arg(self, x, b: int = 4):
          return self.foo + x + b
        def test_func(self, x):
          return self.add_with_default_arg(x)  # invoke method w/ default arg
      )",
      // simple method call
      R"(
        def test_func(self, x):
          b = 4
          return self.foo + x + b
      )",
  };
  for (const auto& test_program : test_programs) {
    Module m("m");
    m.register_parameter("foo", torch::ones({}), false);
    m.define(test_program);

    std::stringstream ss;
    m._save_for_mobile(ss);
    mobile::Module bc = _load_for_mobile(ss);
    const auto& func = bc.get_method("test_func").function();
    ASSERT_EQ(
        func.get_code().operator_input_sizes_.size(),
        func.get_code().operators_.size());
  }
}

#if !defined FB_XPLAT_BUILD
// The following tests run in fbcode only.
TEST(LiteInterpreterUpgraderTest, DivTensorV2) {
  std::string filePath(__FILE__);
  auto test_model_file = filePath.substr(0, filePath.find_last_of("/\\") + 1);
  test_model_file.append("upgrader_models/test_versioned_div_tensor_v2.ptl");

[Operator Versioning][Edge] Change OP to CALL when there is a valid upgrader (#67731)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67731
1. Register the upgrader functions at loading stage.
2. Change OP to CALL when the operator_version from the model is smaller than the current runtime version and a valid upgrader exists.
The interpreter log is:
```
RUNNING 0 STOREN 1 3
RUNNING 1 DROPR 1
RUNNING 2 LOAD 2
RUNNING 3 LOAD 3
RUNNING 4 CALL 0
RUNNING 0 STOREN 1 2
RUNNING 1 LOAD 1
RUNNING 2 OP 0, aten::is_floating_point
RUNNING 3 JF 3
RUNNING 4 LOADC 1
RUNNING 5 JMP 3
RUNNING 8 STORE 3
RUNNING 9 MOVE 3
RUNNING 10 JF 5
RUNNING 11 LOAD 1
RUNNING 12 LOAD 2
RUNNING 13 OP 1, aten::div.Tensor
RUNNING 14 JMP 5
RUNNING 19 STORE 4
RUNNING 20 DROPR 2
RUNNING 21 DROPR 1
RUNNING 22 MOVE 4
RUNNING 23 RET
RUNNING 5 LOAD 2
RUNNING 6 LOAD 3
RUNNING 7 CALL 0
RUNNING 0 STOREN 1 2
RUNNING 1 LOAD 1
RUNNING 2 OP 0, aten::is_floating_point
RUNNING 3 JF 3
RUNNING 4 LOADC 1
RUNNING 5 JMP 3
RUNNING 8 STORE 3
RUNNING 9 MOVE 3
RUNNING 10 JF 5
RUNNING 11 LOAD 1
RUNNING 12 LOAD 2
RUNNING 13 OP 1, aten::div.Tensor
RUNNING 14 JMP 5
RUNNING 19 STORE 4
RUNNING 20 DROPR 2
RUNNING 21 DROPR 1
RUNNING 22 MOVE 4
RUNNING 23 RET
RUNNING 8 MOVE 2
RUNNING 9 MOVE 3
RUNNING 10 CALL 0
RUNNING 0 STOREN 1 2
RUNNING 1 LOAD 1
RUNNING 2 OP 0, aten::is_floating_point
RUNNING 3 JF 3
RUNNING 4 LOADC 1
RUNNING 5 JMP 3
RUNNING 8 STORE 3
RUNNING 9 MOVE 3
RUNNING 10 JF 5
RUNNING 11 LOAD 1
RUNNING 12 LOAD 2
RUNNING 13 OP 1, aten::div.Tensor
RUNNING 14 JMP 5
RUNNING 19 STORE 4
RUNNING 20 DROPR 2
RUNNING 21 DROPR 1
RUNNING 22 MOVE 4
RUNNING 23 RET
RUNNING 11 TUPLE_CONSTRUCT 3
RUNNING 12 RET
```
The upgrader bytecode is:
```
(STOREN, 1, 2)
(LOAD, 1, 0)
(OP, 0, 0)
(JF, 3, 0)
(LOADC, 1, 0)
(JMP, 3, 0)
(LOAD, 2, 0)
(OP, 0, 0)
(STORE, 3, 0)
(MOVE, 3, 0)
(JF, 5, 0)
(LOAD, 1, 0)
(LOAD, 2, 0)
(OP, 1, 0)
(JMP, 5, 0)
(LOAD, 1, 0)
(LOAD, 2, 0)
(LOADC, 0, 0)
(OP, 2, 0)
(STORE, 4, 0)
(DROPR, 2, 0)
(DROPR, 1, 0)
(MOVE, 4, 0)
(RET, 0, 0)
```
ghstack-source-id: 145635622
Test Plan: describe in summary and CI
Reviewed By: iseeyuan
Differential Revision: D32092517
fbshipit-source-id: 0314b4bda5d2578cdd4e7cfbfd1e3c07fbccf8a3
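  // A minimal sketch (the name is ours, not the loader's API) of the decision
  // described in the summary above: an OP instruction is rewritten to a CALL
  // only when the model's operator version predates the runtime's and an
  // upgrader is registered for that operator.
  auto should_use_upgrader = [](uint64_t model_operator_version,
                                uint64_t runtime_operator_version,
                                bool has_registered_upgrader) {
    return model_operator_version < runtime_operator_version &&
        has_registered_upgrader;
  };
  (void)should_use_upgrader; // illustration only; the test below counts CALLs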

  /*
  (('__torch__.MyModule.forward',
    (('instructions',
      (('STOREN', 1, 3),
       ('DROPR', 1, 0),
       ('LOAD', 2, 0),
       ('LOAD', 3, 0),
       ('OP', 0, 0),
       ('LOAD', 2, 0),
       ('LOAD', 3, 0),
       ('OP', 1, 0),
       ('MOVE', 2, 0),
       ('MOVE', 3, 0),
       ('OP', 2, 0),
       ('TUPLE_CONSTRUCT', 3, 0),
       ('RET', 0, 0))),
     ('operators',
      (('aten::div', 'Tensor'),
       ('aten::div', 'Tensor'),
       ('aten::div', 'Tensor'))),
     ('constants', ()),
     ('types', ()),
     ('register_size', 3))),)
  */
  mobile::Module m_module = _load_for_mobile(test_model_file);
  auto instruction_list =
      m_module.get_method("forward").function().get_code().instructions_;
  uint64_t number_of_call_instruction = 0;
  for (auto& instruction : instruction_list) {
    number_of_call_instruction += (instruction.op == OpCode::CALL);
  }
  // All 3 div operators will use the upgrader.
  ASSERT_EQ(number_of_call_instruction, 3);

  std::vector<IValue> inputs = {
      IValue(6 * torch::ones({1})), IValue(3 * torch::ones({1}))};
  auto actual_output = m_module.forward(inputs);
  auto expect_output = 2.0 * torch::ones({1});
  auto actual_output_list = actual_output.toTuple()->elements();
  ASSERT_TRUE(actual_output_list[0].toTensor().equal(expect_output));
}

TEST(LiteInterpreterUpgraderTest, DivTensorOutV2) {
  std::string filePath(__FILE__);
  auto test_model_file = filePath.substr(0, filePath.find_last_of("/\\") + 1);
  test_model_file.append(
      "upgrader_models/test_versioned_div_tensor_out_v2.ptl");
  /*
  (('__torch__.MyModule.forward',
    (('instructions',
      (('STOREN', 1, 4),
       ('DROPR', 1, 0),
       ('MOVE', 2, 0),
       ('MOVE', 3, 0),
       ('MOVE', 4, 0),
       ('OP', 0, 0),
       ('RET', 0, 0))),
     ('operators', (('aten::div', 'out'),)),
     ('constants', ()),
     ('types', ()),
     ('register_size', 4))),)
  */
  mobile::Module m_module = _load_for_mobile(test_model_file);

  auto instruction_list =
      m_module.get_method("forward").function().get_code().instructions_;
  uint64_t number_of_call_instruction = 0;
  for (auto& instruction : instruction_list) {
    number_of_call_instruction += (instruction.op == OpCode::CALL);
  }
  // One operator will use the upgrader.
  ASSERT_EQ(number_of_call_instruction, 1);

  std::vector<IValue> inputs{
      IValue(6 * torch::ones({1})),
      IValue(3 * torch::ones({1})),
      IValue(torch::empty({1}))};
  m_module.forward(inputs);
  auto expect_output = 2.0 * torch::ones({1});
  auto actual_output = inputs[2].toTensor();
  // The out argument is overwritten with the output.
  ASSERT_TRUE(actual_output.equal(expect_output));
}

TEST(LiteInterpreterUpgraderTest, DivTensorInplaceV2) {
  std::string filePath(__FILE__);
  auto test_model_file = filePath.substr(0, filePath.find_last_of("/\\") + 1);
  test_model_file.append(
      "upgrader_models/test_versioned_div_tensor_inplace_v2.ptl");
  /*
  (('__torch__.MyModule.forward',
    (('instructions',
      (('STOREN', 1, 3),
       ('DROPR', 1, 0),
       ('MOVE', 2, 0),
       ('MOVE', 3, 0),
       ('OP', 0, 0),
       ('RET', 0, 0))),
     ('operators', (('aten::div_', 'Tensor'),)),
     ('constants', ()),
     ('types', ()),
     ('register_size', 3))),)
  */
  mobile::Module m_module = _load_for_mobile(test_model_file);

  auto instruction_list =
      m_module.get_method("forward").function().get_code().instructions_;
  uint64_t number_of_call_instruction = 0;
  for (auto& instruction : instruction_list) {
    number_of_call_instruction += (instruction.op == OpCode::CALL);
  }
  // One operator will use the upgrader.
  ASSERT_EQ(number_of_call_instruction, 1);

  std::vector<IValue> inputs{
      IValue(6 * torch::ones({1})), IValue(3 * torch::ones({1}))};
  m_module.forward(inputs);
  auto expect_output = 2.0 * torch::ones({1});
  auto actual_output = inputs[0].toTensor();
  // div_ is in-place, so the self argument now holds the output.
  ASSERT_TRUE(actual_output.equal(expect_output));
}

TEST(LiteInterpreterUpgraderTest, DivScalarFloatV2) {
  std::string filePath(__FILE__);
  auto test_model_file = filePath.substr(0, filePath.find_last_of("/\\") + 1);
  test_model_file.append(
      "upgrader_models/test_versioned_div_scalar_float_v2.ptl");
  /*
  (('__torch__.MyModuleFloat.forward',
    (('instructions',
      (('STOREN', 1, 3),
       ('DROPR', 1, 0),
       ('MOVE', 2, 0),
       ('MOVE', 3, 0),
       ('OP', 0, 0),
       ('RET', 0, 0))),
     ('operators', (('aten::div', 'Scalar'),)),
     ('constants', ()),
     ('types', ()),
     ('register_size', 3))),)
  */

  mobile::Module m_module = _load_for_mobile(test_model_file);

  auto instruction_list =
      m_module.get_method("forward").function().get_code().instructions_;
  uint64_t number_of_call_instruction = 0;
  for (auto& instruction : instruction_list) {
    number_of_call_instruction += (instruction.op == OpCode::CALL);
  }
  // One operator will use the upgrader.
  ASSERT_EQ(number_of_call_instruction, 1);

  std::vector<IValue> inputs{IValue(6 * torch::ones({1})), IValue(3.0)};
  auto output = m_module.forward(inputs);
  auto expect_output = 2.0 * torch::ones({1});
  auto actual_output = output.toTensor();

  // 6 / 3.0 should produce 2.0 under the upgraded division semantics.
  ASSERT_TRUE(actual_output.equal(expect_output));
}

TEST(LiteInterpreterUpgraderTest, DivScalarReciprocalFloatV2) {
  std::string filePath(__FILE__);
  auto test_model_file = filePath.substr(0, filePath.find_last_of("/\\") + 1);
  test_model_file.append(
      "upgrader_models/test_versioned_div_scalar_reciprocal_float_v2.ptl");
  /*
  (('__torch__.MyModuleFloat.forward',
    (('instructions',
      (('STOREN', 1, 3),
       ('DROPR', 1, 0),
       ('MOVE', 2, 0),
       ('OP', 0, 0),
       ('MOVE', 3, 0),
       ('OP', 1, 0),
       ('RET', 0, 0))),
     ('operators', (('aten::reciprocal', ''), ('aten::mul', 'Scalar'))),
     ('constants', ()),
     ('types', ()),
     ('register_size', 3))),)
  */
  mobile::Module m_module = _load_for_mobile(test_model_file);

  auto instruction_list =
      m_module.get_method("forward").function().get_code().instructions_;
  uint64_t number_of_call_instruction = 0;
  for (auto& instruction : instruction_list) {
    number_of_call_instruction += (instruction.op == OpCode::CALL);
  }
  // No operator in this model needs the upgrader.
  ASSERT_EQ(number_of_call_instruction, 0);

  std::vector<IValue> inputs{IValue(6 * torch::ones({1})), IValue(3.0)};
  auto output = m_module.forward(inputs);
  auto expect_output = 0.5 * torch::ones({1});
  auto actual_output = output.toTensor();
  std::cout << "expect output: " << expect_output;
  std::cout << "actual output: " << actual_output;
  // reciprocal(6) * 3.0 is 0.5; the result is unaffected by upgraders.
  ASSERT_TRUE(actual_output.equal(expect_output));
}

TEST(LiteInterpreterUpgraderTest, DivScalarReciprocalIntV2) {
  std::string filePath(__FILE__);
  auto test_model_file = filePath.substr(0, filePath.find_last_of("/\\") + 1);
  test_model_file.append(
      "upgrader_models/test_versioned_div_scalar_reciprocal_int_v2.ptl");
  /*
  (('__torch__.MyModuleInt.forward',
    (('instructions',
      (('STOREN', 1, 3),
       ('DROPR', 1, 0),
       ('MOVE', 2, 0),
       ('OP', 0, 0),
       ('MOVE', 3, 0),
       ('OP', 1, 0),
       ('RET', 0, 0))),
     ('operators', (('aten::reciprocal', ''), ('aten::mul', 'Scalar'))),
     ('constants', ()),
     ('types', ()),
     ('register_size', 3))),)
  */
  mobile::Module m_module = _load_for_mobile(test_model_file);

  auto instruction_list =
      m_module.get_method("forward").function().get_code().instructions_;
  uint64_t number_of_call_instruction = 0;
  for (auto& instruction : instruction_list) {
    number_of_call_instruction += (instruction.op == OpCode::CALL);
  }
  // No operator in this model needs the upgrader.
  ASSERT_EQ(number_of_call_instruction, 0);

  std::vector<IValue> inputs{IValue(6 * torch::ones({1})), IValue(3.0)};
  auto output = m_module.forward(inputs);
  auto expect_output = 0.5 * torch::ones({1});
  auto actual_output = output.toTensor();

  // reciprocal(6) * 3.0 is 0.5; the result is unaffected by upgraders.
  ASSERT_TRUE(actual_output.equal(expect_output));
}

TEST(LiteInterpreterUpgraderTest, DivScalarScalarV2) {
  std::string filePath(__FILE__);
  auto test_model_file = filePath.substr(0, filePath.find_last_of("/\\") + 1);
  test_model_file.append(
      "upgrader_models/test_versioned_div_scalar_scalar_v2.ptl");
  /*
  (('__torch__.MyModule.forward',
    (('instructions',
      (('STOREN', 1, 5),
       ('DROPR', 1, 0),
       ('LOAD', 2, 0),
       ('LOAD', 3, 0),
       ('OP', 0, 0),
       ('MOVE', 2, 0),
       ('LOAD', 4, 0),
       ('OP', 1, 0),
       ('LOAD', 3, 0),
       ('MOVE', 4, 0),
       ('OP', 2, 0),
       ('MOVE', 3, 0),
       ('MOVE', 5, 0),
       ('OP', 3, 0),
       ('TUPLE_CONSTRUCT', 4, 0),
       ('RET', 0, 0))),
     ('operators',
      (('aten::div', ''),
       ('aten::div', 'float'),
       ('aten::div', ''),
       ('aten::div', 'int'))),
     ('constants', ()),
     ('types', ()),
     ('register_size', 5))),)
  */
  mobile::Module m_module = _load_for_mobile(test_model_file);
  auto instruction_list =
      m_module.get_method("forward").function().get_code().instructions_;
  uint64_t number_of_call_instruction = 0;
  for (auto& instruction : instruction_list) {
    number_of_call_instruction += (instruction.op == OpCode::CALL);
  }
  // No operator in this model needs an upgrader, so no OP instruction is
  // rewritten to CALL.
  ASSERT_EQ(number_of_call_instruction, 0);

  std::vector<IValue> inputs{IValue(20.0), IValue(10), IValue(2.0), IValue(5)};
  auto output = m_module.forward(inputs);
  auto output_list = output.toTupleRef().elements();
  auto expect_output = std::vector<IValue>(
      {IValue(2.0), IValue(10.0), IValue(5.0), IValue(2.0)});
  for (size_t i = 0; i < expect_output.size(); i++) {
    ASSERT_EQ(output_list[i], expect_output[i]);
  }
}

TEST(LiteInterpreterUpgraderTest, DivScalarIntV2) {
  std::string filePath(__FILE__);
  auto test_model_file = filePath.substr(0, filePath.find_last_of("/\\") + 1);
  test_model_file.append(
      "upgrader_models/test_versioned_div_scalar_int_v2.ptl");
  /*
  (('__torch__.MyModuleInt.forward',
    (('instructions',
      (('STOREN', 1, 3),
       ('DROPR', 1, 0),
       ('MOVE', 2, 0),
       ('MOVE', 3, 0),
       ('OP', 0, 0),
       ('RET', 0, 0))),
     ('operators', (('aten::div', 'Scalar'),)),
     ('constants', ()),
     ('types', ()),
     ('register_size', 3))),)
  */
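  // Presumed fixture shape (sketch only): forward(self, a: Tensor, b: int)
  // returning a / b, which lowers to the single aten::div.Scalar op above.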

  mobile::Module m_module = _load_for_mobile(test_model_file);

  auto instruction_list =
      m_module.get_method("forward").function().get_code().instructions_;
  uint64_t number_of_call_instruction = 0;
  for (auto& instruction : instruction_list) {
    number_of_call_instruction += (instruction.op == OpCode::CALL);
  }
  // Exactly one operator (aten::div.Scalar) is rewritten to CALL its
  // upgrader.
  ASSERT_EQ(number_of_call_instruction, 1);
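
  // The registered div upgrader behaves roughly like the following
  // TorchScript (a sketch, not the literal registered source): old models
  // keep truncating integer division, while floating-point inputs get true
  // division.
  //   def div_Scalar_0_3(self: Tensor, other: number) -> Tensor:
  //       if self.is_floating_point() or isinstance(other, float):
  //           return self.true_divide(other)
  //       return self.divide(other, rounding_mode='trunc')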

  std::vector<IValue> inputs{IValue(6 * torch::ones({1})), IValue(3)};
  auto output = m_module.forward(inputs);
  auto expect_output = 2.0 * torch::ones({1});
  auto actual_output = output.toTensor();

  // Dividing the float tensor by the int scalar through the upgrader yields
  // a floating-point tensor equal to 2.0.
  ASSERT_TRUE(actual_output.equal(expect_output));
}

TEST(LiteInterpreterUpgraderTest, DivScalarInplaceFloatV2) {
  std::string filePath(__FILE__);
  auto test_model_file = filePath.substr(0, filePath.find_last_of("/\\") + 1);
  test_model_file.append(
      "upgrader_models/test_versioned_div_scalar_inplace_float_v2.ptl");
  /*
  (('__torch__.MyModuleFloat.forward',
    (('instructions',
      (('STOREN', 1, 3),
       ('DROPR', 1, 0),
       ('MOVE', 2, 0),
       ('MOVE', 3, 0),
       ('OP', 0, 0),
       ('RET', 0, 0))),
     ('operators', (('aten::div_', 'Scalar'),)),
     ('constants', ()),
     ('types', ()),
     ('register_size', 3))),)
  */
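  // Presumed fixture shape (sketch only): forward(self, a: Tensor, b: float)
  // returning a.div_(b), i.e. the single in-place aten::div_.Scalar above.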

  mobile::Module m_module = _load_for_mobile(test_model_file);

  auto instruction_list =
      m_module.get_method("forward").function().get_code().instructions_;
  uint64_t number_of_call_instruction = 0;
  for (auto& instruction : instruction_list) {
    number_of_call_instruction += (instruction.op == OpCode::CALL);
  }
  // Exactly one operator (aten::div_.Scalar) is rewritten to CALL its
  // upgrader.
  ASSERT_EQ(number_of_call_instruction, 1);
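
  // The in-place upgrader presumably mirrors div_Scalar_0_3 but calls
  // true_divide_/divide_ so the mutation of the input tensor is preserved.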

  std::vector<IValue> inputs{IValue(6 * torch::ones({1})), IValue(3.0)};
  auto output = m_module.forward(inputs);
  auto expect_output = 2.0 * torch::ones({1});
  auto actual_output = output.toTensor();

  // div_ mutates the input tensor in place and returns it; the result equals
  // 2.0 * ones.
  ASSERT_TRUE(actual_output.equal(expect_output));
}

TEST(LiteInterpreterUpgraderTest, DivScalarInplaceIntV2) {
  std::string filePath(__FILE__);
  auto test_model_file = filePath.substr(0, filePath.find_last_of("/\\") + 1);
  test_model_file.append(
      "upgrader_models/test_versioned_div_scalar_inplace_int_v2.ptl");
  /*
  (('__torch__.MyModuleInt.forward',
    (('instructions',
      (('STOREN', 1, 3),
       ('DROPR', 1, 0),
       ('MOVE', 2, 0),
       ('MOVE', 3, 0),
       ('OP', 0, 0),
       ('RET', 0, 0))),
     ('operators', (('aten::div_', 'Scalar'),)),
     ('constants', ()),
     ('types', ()),
     ('register_size', 3))),)
  */
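  // Presumed fixture shape (sketch only): forward(self, a: Tensor, b: int)
  // returning a.div_(b).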

  mobile::Module m_module = _load_for_mobile(test_model_file);

  auto instruction_list =
      m_module.get_method("forward").function().get_code().instructions_;
  uint64_t number_of_call_instruction = 0;
  for (auto& instruction : instruction_list) {
    number_of_call_instruction += (instruction.op == OpCode::CALL);
  }
  // Exactly one operator (aten::div_.Scalar) is rewritten to CALL its
  // upgrader.
  ASSERT_EQ(number_of_call_instruction, 1);

  std::vector<IValue> inputs{IValue(6 * torch::ones({1})), IValue(3)};
  auto output = m_module.forward(inputs);
  auto expect_output = 2.0 * torch::ones({1});
  auto actual_output = output.toTensor();

  // div_ mutates the input tensor in place; the result equals 2.0 * ones.
  ASSERT_TRUE(actual_output.equal(expect_output));
}

#endif // !defined(FB_XPLAT_BUILD)

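// When a model's operator version predates the runtime's, the mobile loader
// rewrites the affected OP instructions into CALLs to upgrader functions from
// getUpgraderBytecodeList(); the DivScalar*V2 tests above assert exactly that
// rewrite.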
TEST(LiteInterpreterUpgraderTest, Upgrader) {
  std::vector<mobile::Function> upgrader_functions;

  for (auto& byteCodeFunctionWithOperator : getUpgraderBytecodeList()) {
    byteCodeFunctionWithOperator.function.initialize_operators(true);
    ASSERT_EQ(
        byteCodeFunctionWithOperator.function.get_code().operators_.size(),
        byteCodeFunctionWithOperator.function.get_code().op_names_.size());
    if (byteCodeFunctionWithOperator.function.get_code().operators_.empty()) {
      // Operators are not initialized yet; resolve them from the recorded
      // operator names before using the upgrader function.
      for (const auto& op : byteCodeFunctionWithOperator.operators) {
        byteCodeFunctionWithOperator.function.append_operator(
            op.name, op.overload_name, op.num_specified_args);
      }
    }
    upgrader_functions.push_back(byteCodeFunctionWithOperator.function);
  }

  ASSERT_EQ(getUpgraderBytecodeList().size(), upgrader_functions.size());
}
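// Recursively builds every tuple type of length `depth` whose element types
// come from `candidates` (types containing Any are skipped). For each
// combination it emits both an unnamed TupleType and a "NamedTuple" variant
// into `out`.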
void enumerateTupleType(
    size_t depth,
    std::vector<TypePtr>& current,
    const std::vector<TypePtr>& candidates,
    std::vector<TypePtr>& out) {
  static std::vector<std::string> fieldNames;
  if (depth > fieldNames.size()) {
    fieldNames.reserve(depth);
    for (size_t i = fieldNames.size(); i < depth; i++) {
      fieldNames.push_back("field" + std::to_string(i));
    }
  }
  if (depth == 0) {
    out.push_back(TupleType::create(current));
    while (fieldNames.size() > current.size()) {
      fieldNames.pop_back();
    }
    out.push_back(TupleType::createNamed("NamedTuple", fieldNames, current));
    return;
  }
  for (const auto& type : candidates) {
    if (containsAnyType(type)) {
      continue;
    }
    current.push_back(type);
    enumerateTupleType(depth - 1, current, candidates, out);
    current.pop_back();
  }
}

class LiteInterpreterDynamicTypeTestFixture
    : public ::testing::TestWithParam<size_t> {
 protected:
  void SetUp() override {
    cu = std::make_shared<CompilationUnit>();
    std::vector<TypePtr> keyTypes = {
        AnyType::get(),
        IntType::get(),
        BoolType::get(),
        FloatType::get(),
        ComplexType::get(),
        StringType::get(),
        TensorType::get(),
        DeviceObjType::get(),
    };
    types = {
        NoneType::get(),
        NumberType::get(),
        ClassType::create("__torch__.TestClass1", cu),
        ClassType::create("__torch__.TestClass2", cu),
        AnyListType::get(),
        AnyTupleType::get(),
        StreamObjType::get(),
        CapsuleType::get(),
        GeneratorType::get(),
        StorageType::get(),
        VarType::create("t"),
        VarType::create("v"),
        AnyClassType::get()};
    std::copy(keyTypes.begin(), keyTypes.end(), back_inserter(types));
    auto expandTypes = [&](size_t tupleSize) {
      std::vector<TypePtr> nested;
      for (const auto& type : types) {
        if (!(type == AnyType::get())) {
          nested.emplace_back(ListType::create(type));
          if (!(type == NoneType::get() ||
                type->kind() == OptionalType::Kind)) {
            nested.emplace_back(OptionalType::create(type));
          }
        }
        for (const auto& keyType : keyTypes) {
          nested.emplace_back(DictType::create(keyType, type));
        }
      }
      std::vector<TypePtr> tmp;
      enumerateTupleType(tupleSize, tmp, types, nested);
      std::move(
          std::begin(nested), std::end(nested), std::back_inserter(types));
    };
    expandTypes(1);
    expandTypes(1);
  }
  std::shared_ptr<CompilationUnit> cu;
  std::vector<TypePtr> types;

 public:
  static constexpr size_t kNumSplits = 10;
};

/**
 * Enumerate all possible JIT types appearing in the mobile runtime, and test
 * whether the subtyping relation is preserved after one of the JIT types is
 * converted to DynamicType.
 *
 * We first enumerate all "base" types in a vector, and implement
 * expandTypes() to enumerate container types one "level" up for a given set
 * of types. Calling expandTypes() twice covers types nested up to two levels
 * deep, e.g. List[Optional[Tensor]], Optional[Dict[Int, Bool]], etc.
 */
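// For example, starting from Tensor and Int, the first expansion adds
// List[Tensor], Optional[Tensor], Dict[Int, Tensor], and one-element tuples;
// the second pass then adds List[Optional[Tensor]], Dict[Int, List[Tensor]],
// and so on.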
TEST_P(LiteInterpreterDynamicTypeTestFixture, Conformance) {
  size_t num = types.size() / LiteInterpreterDynamicTypeTestFixture::kNumSplits;
  size_t begin = num * GetParam();
  size_t end = std::min(types.size(), begin + num);
  for (const auto& a : types) {
    auto da = DynamicType::create(*a);
    for (size_t i = begin; i < end; i++) {
      const auto& b = types[i];
      bool result = a->isSubtypeOf(*b);
      EXPECT_EQ(result, da->isSubtypeOf(*b));
      result = b->isSubtypeOf(*a);
      EXPECT_EQ(result, b->isSubtypeOf(*da));
    }
  }
}
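
// The full cross-product of type pairs is large, so the Conformance check is
// sharded: each of the kNumSplits parameterized instances compares every type
// `a` against one slice [begin, end) of `types`.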
INSTANTIATE_TEST_SUITE_P(
    PyTorch,
    LiteInterpreterDynamicTypeTestFixture,
    ::testing::Range(
        static_cast<size_t>(0),
        LiteInterpreterDynamicTypeTestFixture::kNumSplits));

} // namespace jit
} // namespace torch