pytorch/torch/csrc/jit/serialization/python_print.h
Mike Ruberry e66445878d Adds dynamic versioning pattern (#40279)
Summary:
BC NOTE:

This change makes it so modules saved with torch.jit.save in PyTorch 1.6 can be loaded by previous versions of PyTorch unless they use torch.div or (soon) torch.full. It also lets tensors saved using torch.save be loaded by previous versions. So this is the opposite of BC-breaking, but I'm using that label to highlight this issue since we don't have a "BC-improving" label.

PR NOTE:
When an operator's semantics change in PyTorch we want to do two things:

1) Preserve the semantics of older serialized Torchscript programs that use the operator
2) Ensure the new semantics are respected

Historically, this meant writing a Versioned Symbol that would remap older versions of the operator into current PyTorch code (1), and bumping the produced file format version (2). Unfortunately, bumping the produced file format version is a nuclear option for ensuring semantics are respected, since it also prevents older versions of PyTorch from loading anything (even tensors!) from newer versions.
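As a rough sketch of the versioned-symbol idea described above (not the actual PyTorch implementation; the upgrader table, names, and version cutoffs here are invented for illustration), loading maps an operator from an older file format version onto a backward-compatible implementation:

```cpp
#include <cstdint>
#include <map>
#include <string>
#include <utility>

// Hypothetical upgrader table: (operator, highest file format version the
// old semantics apply to) -> replacement symbol that reproduces those
// semantics in current PyTorch.
std::string remapForVersion(const std::string& op, uint64_t file_version) {
  static const std::map<std::pair<std::string, uint64_t>, std::string>
      upgraders = {
          // e.g. div at versions <= 3 truncated integer inputs (illustrative)
          {{"aten::div", 3}, "upgraders::div_0_3"},
      };
  for (const auto& entry : upgraders) {
    if (entry.first.first == op && file_version <= entry.first.second) {
      return entry.second;  // keep the old semantics for old programs
    }
  }
  return op;  // current semantics apply
}
```

A program serialized at version 3 that calls aten::div would be rewritten to the backward-compatible upgrader, while a version 4 program keeps the current operator.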

Dynamic versioning addresses the nuclear consequences of bumping the produced file format version by only bumping it when necessary: that is, when an operator with changed semantics is detected in the serialized Torchscript. This prevents Torchscript programs that use the changed operator from loading on earlier versions of PyTorch, as desired, but has no impact on programs that don't use the changed operator.
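The detection step above can be sketched as a scan over the serialized program's operators, bumping the produced version only when a changed operator appears (a minimal sketch; the operator-to-version table and the baseline version here are assumptions, not the real tables):

```cpp
#include <algorithm>
#include <cstdint>
#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical table: operators whose semantics changed, mapped to the
// minimum produced file format version that preserves the new behavior.
const std::unordered_map<std::string, uint64_t> kOperatorVersionMap = {
    {"aten::div", 4},
    {"aten::full", 5},
};

// Dynamic versioning sketch: start from the baseline produced file format
// version and bump only if the program actually uses a changed operator.
uint64_t minVersionFor(const std::vector<std::string>& ops_in_program,
                       uint64_t baseline_version = 3) {
  uint64_t version = baseline_version;
  for (const auto& op : ops_in_program) {
    auto it = kOperatorVersionMap.find(op);
    if (it != kOperatorVersionMap.end()) {
      version = std::max(version, it->second);
    }
  }
  return version;
}
```

A program that never touches a changed operator keeps the baseline version and stays loadable by older PyTorch releases; one that calls aten::div is stamped with the higher version and refused by releases that predate the semantic change.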

Note that this change is only applicable when using torch.jit.save and torch.jit.load. torch.save serializes the given object with pickle (by default), which saves a function's Python code directly.

No new tests for this behavior are added since the existing tests for versioned division in test_save_load already validate that models with div are loaded correctly at version 4.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40279

Reviewed By: dzhulgakov

Differential Revision: D22168291

Pulled By: mruberry

fbshipit-source-id: e71d6380e727e25123c7eedf6d80e5d7f1fe9f95
2020-06-24 12:52:50 -07:00


#pragma once

#include <torch/csrc/WindowsTorchApiMacro.h>
#include <torch/csrc/jit/ir/ir.h>

#include <iostream>
#include <vector>

namespace torch {
namespace jit {

struct Method;
struct Module;
struct PythonPrintImpl;

struct TORCH_API PythonPrint {
  PythonPrint(
      std::vector<at::Tensor>& tensor_table,
      std::vector<c10::NamedTypePtr>& deps_table,
      c10::TypePrinter type_printer = nullptr,
      bool enforce_importable = false);

  void printNamedType(const c10::NamedTypePtr& classType);
  void printFunction(const Function& callee);
  void printMethod(const Function& callee);

  std::string str() const;
  const SourceRangeRecords& ranges() const;
  uint64_t minVersion() const;

  ~PythonPrint();

 private:
  std::shared_ptr<PythonPrintImpl> pImpl;
};

TORCH_API bool printerHasSpecialCaseFor(c10::Symbol sym);

} // namespace jit
} // namespace torch
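A caller of this API presumably aggregates minVersion() across everything it prints and stamps the archive with the maximum, so the produced version only rises when some printed item requires it. A minimal sketch of that aggregation (the function name and baseline constant are assumptions for illustration):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Assumed baseline produced file format version (illustrative).
constexpr uint64_t kBaselineFileFormatVersion = 3;

// Each printed item reports the minimum file format version it needs
// (as PythonPrint::minVersion() does); the archive takes the maximum.
uint64_t archiveVersion(const std::vector<uint64_t>& per_item_min_versions) {
  uint64_t version = kBaselineFileFormatVersion;
  for (uint64_t v : per_item_min_versions) {
    version = std::max(version, v);
  }
  return version;
}
```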