pytorch/c10/core/SymNodeImpl.h
Edward Z. Yang 2f7cfecd86 Complete revamp of float/promotion sympy handling (#126905)
At a high level, the idea behind this PR is:

* Make it clearer what the promotion and int/float rules for various Sympy operations are. Operators that previously were polymorphic over int/float are now split into separate operators for clarity. We never do mixed int/float addition/multiplication etc. in sympy; instead, we always promote to the appropriate operator. (However, equality is currently not handled correctly.)
* Enforce strict typing on ValueRanges: if you have a ValueRange for a float, the lower and upper bounds MUST be floats, and likewise for integers (illustrated below).
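
As a rough illustration of the strict-typing rule, here is a hypothetical, minimal sketch (the real, sympy-based class lives in **torch/utils/_sympy/value_ranges.py**; `StrictRange` is invented for illustration):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class StrictRange:
    # Hypothetical model of the invariant: a range is all-int or all-float.
    lower: object
    upper: object

    def __post_init__(self) -> None:
        if type(self.lower) is not type(self.upper):
            raise TypeError("mixed endpoint types are not allowed")


StrictRange(0, 10)     # ok: an int range
StrictRange(0.0, 1.0)  # ok: a float range
StrictRange(0, 1.0)    # raises TypeError: mixed int/float endpoints
```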

The story begins in **torch/utils/_sympy/functions.py**. Here, I make some changes to how we represent certain operations in sympy expressions:

* FloorDiv now only supports integer inputs; to do float floor division, do a truediv and then a trunc. Additionally, we remove the optimization that divided out addition by gcd, because sympy's gcd is defined over fields and is willing to generate rationals (and rationals break strict ValueRange typing).
* ModularIndexing, LShift, RShift now assert they are given integer inputs.
* Mod only supports integer inputs; eventually we will support FloatMod (left for later work, when we build out Sympy support for floating-point operations). Unfortunately, I couldn't assert integer inputs here because of a bad interaction with sympy's inequality solver, which is used by the offline solver.
* TrueDiv is split into FloatTrueDiv and IntTrueDiv. This allows us to eventually generate accurate code for IntTrueDiv with Python semantics: it is written in a special way to preserve precision when the inputs are >= 2**53, beyond what you get by first coercing the integers to float and then doing true division (see the snippet after this list).
* Trunc is split into TruncToFloat and TruncToInt.
* Round is updated to return a float, not an int, making it consistent with the round op handler in Inductor. To get Python-style conversion to int, we call TruncToInt on the result.
* RoundDecimal is updated to consistently return a float.
* We add ToFloat for explicit coercion to float (required so we can enforce strict ValueRanges typing).
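
To see why the coerce-then-divide approach loses precision for large inputs, here is a small standalone example (plain Python, independent of PyTorch):

```python
# Above 2**53, not every integer is representable as a float, so coercing
# the operands first can round differently than the exact quotient would:
a, b = 10**17 + 3, 3

print(float(a) / float(b))  # 3.3333333333333332e+16  (coerce first, then divide)
print(a / b)                # 3.3333333333333336e+16  (correctly rounded true division)
```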

In **torch/__init__.py**, we modify SymInt and SymFloat to appropriately call into new bindings that route to these refined sympy operations. Also, we modify `torch.sym_min` and `torch.sym_max` to have promotion semantics (if one argument is a float, the result is always a float). This makes them inconsistent with builtins.min/max, but makes it possible to do type analysis without runtime information.
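
Concretely, for plain Python numbers the difference looks like this (a small illustration of the promotion rule, assuming post-PR behavior):

```python
import torch

print(max(2, 1.0))            # 2   -- builtins.max returns the larger argument unchanged
print(torch.sym_max(2, 1.0))  # 2.0 -- sym_max promotes to float if either side is a float
```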

We also need to introduce some new op handlers in **torch/_inductor/ops_handler.py**:

* `to_int` for truncation to int64, directly corresponding to TruncToInt; this could be implemented with trunc and a dtype conversion, but a dedicated handler is more convenient for roundtripping in Sympy (see the sketch after this list)
* `int_truediv` for Python-style integer true division, which has higher precision than casting to floats and then running `truediv`
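
For intuition, `to_int` could be expressed in terms of existing handler ops roughly as follows. This is a sketch, not the actual implementation; it assumes the handler already provides `trunc` and `to_dtype`, and `HandlerSketch` is hypothetical:

```python
import torch


class HandlerSketch:
    def to_int(self, x):
        # Truncate toward zero, then convert to int64. A dedicated op is
        # still worthwhile so it roundtrips through sympy as TruncToInt.
        return self.to_dtype(self.trunc(x), torch.int64)
```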

These changes have consequences. First, we need to make some administrative changes:

* Actually wire up these Sympy functions from SymInt/SymFloat in **torch/fx/experimental/sym_node.py**, including the new promotion rules (promote2)
* Add support for new Sympy functions in **torch/utils/_sympy/interp.py**, **torch/utils/_sympy/reference.py**
  * In particular, in torch.utils._sympy.reference, we have a strong preference NOT to do nontrivial compute; instead, everything in the ops handler should map to a single sympy function
  * TODO: I chose to roundtrip mod back to our Mod function, but I think I'm going to have to deal with the C/Python mod inconsistency to fix the tests here
* Add printer support for the Sympy functions in **torch/_inductor/codegen/common.py**, **torch/_inductor/codegen/cpp_utils.py**, **torch/_inductor/codegen/triton.py**. `int_truediv` and mixed-precision equality are currently not implemented soundly, so we will lose precision in codegen for large values. TODO: The additions here are not exhaustive yet
* Update the ValueRanges logic to use the new sympy functions in **torch/utils/_sympy/value_ranges.py**. In general, we prefer to use the new Sympy functions rather than roll things by hand, which is what was done previously for many VR analysis functions.

In **torch/fx/experimental/symbolic_shapes.py** we need to make some symbolic reasoning adjustments:

* Avoid generating rational subexpressions by removing the simplification of `x // y` into `floor(x / y)`. That rewrite triggers sympy's automatic simplification `(x + y) / c --> x / c + y / c`, which is bad because `x / c` is now a rational expression (see the sympy snippet after this list)
* `_assert_bound_is_rational` is gone; we no longer generate rational bounds
* Don't intersect non-int value ranges with the `int_range`
* Support more sympy Functions for guard SYMPY_INTERP
* Assert the type of value range is consistent with the variable type
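
The distribution rule mentioned above is sympy's automatic behavior for numeric coefficients over sums, which is why the `floor(x / y)` rewrite had to go:

```python
import sympy

x = sympy.Symbol("x", integer=True)

# sympy automatically distributes a numeric coefficient over an Add, so a
# division produced by rewriting x // 2 as floor(x / 2) immediately grows
# Rational subexpressions:
print((x + 1) / 2)  # x/2 + 1/2 -- both terms are rationals now
```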

The new asserts uncovered bugs that needed fixing:

* **torch/_inductor/codegen/cpp.py**, **torch/_inductor/select_algorithm.py**, **torch/_inductor/sizevars.py** - Ensure that Wild/Symbol instances manually allocated in Inductor are marked `is_integer` so they are accepted when building expressions
* **torch/_inductor/utils.py** - make sure you actually pass in sympy.Expr to these functions
* **torch/_inductor/ir.py** - make_contiguous_strides_for takes int/SymInt, not sympy.Expr!
* **torch/export/dynamic_shapes.py** - don't use infinity to represent int ranges; instead, use sys.maxsize - 1

Because we removed some of the symbolic reasoning that produced rationals, our simplification has gotten worse in places and we are unable to simplify some guards. Check the TODO at **test/test_proxy_tensor.py**.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/126905
Approved by: https://github.com/xadupre, https://github.com/lezcano
2024-06-06 02:29:45 +00:00

#pragma once
#include <c10/macros/Export.h>
#include <c10/util/ArrayRef.h>
#include <c10/util/Exception.h>
#include <c10/util/Optional.h>
#include <c10/util/intrusive_ptr.h>
#include <cstdint>
#include <ostream>
#include <string>
namespace c10 {
class SymNodeImpl;
using SymNode = c10::intrusive_ptr<SymNodeImpl>;
// When you add a method, you also need to edit
// torch/csrc/jit/python/init.cpp
// torch/csrc/utils/python_symnode.h
// c10/core/ConstantSymNodeImpl.h
class C10_API SymNodeImpl : public c10::intrusive_ptr_target {
 public:
  ~SymNodeImpl() override = default;
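  // Cast this node to a subclass, returning an owning pointer; the result
  // is null if the dynamic type does not match T.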
  template <typename T>
  c10::intrusive_ptr<T> dyn_cast() const {
    return c10::intrusive_ptr<T>::reclaim_copy(dynamic_cast<T*>(this));
  }
  // these could be pure virtual when we implement LTC versions
  virtual bool is_int() {
    TORCH_CHECK(false, "NYI");
  }
  virtual bool is_bool() {
    TORCH_CHECK(false, "NYI");
  }
  virtual bool is_float() {
    TORCH_CHECK(false, "NYI");
  }
  virtual bool is_nested_int() const {
    return false;
  }
  virtual SymNode add(const SymNode& other) {
    TORCH_CHECK(false, "NYI");
  }
  virtual SymNode sub(const SymNode& other) {
    TORCH_CHECK(false, "NYI");
  }
  virtual SymNode mul(const SymNode& other) {
    TORCH_CHECK(false, "NYI");
  }
  // NB: legacy, prefer float_truediv or int_truediv
  virtual SymNode truediv(const SymNode& other) {
    TORCH_CHECK(false, "NYI");
  }
  virtual SymNode float_truediv(const SymNode& other) {
    return truediv(other);
  }
  virtual SymNode int_truediv(const SymNode& other) {
    return truediv(other);
  }
  // NB: legacy, prefer float_pow or pow_by_natural
  virtual SymNode pow(const SymNode& other) {
    TORCH_CHECK(false, "NYI");
  }
  virtual SymNode float_pow(const SymNode& other) {
    return pow(other);
  }
  virtual SymNode pow_by_natural(const SymNode& other) {
    return pow(other);
  }
  // NB: legacy, prefer int_floordiv
  virtual SymNode floordiv(const SymNode& other) {
    TORCH_CHECK(false, "NYI");
  }
  virtual SymNode int_floordiv(const SymNode& other) {
    return floordiv(other);
  }
  virtual SymNode mod(const SymNode& other) {
    TORCH_CHECK(false, "NYI");
  }
  virtual SymNode eq(const SymNode& other) {
    TORCH_CHECK(false, "NYI");
  }
  virtual SymNode ne(const SymNode& other) {
    TORCH_CHECK(false, "NYI");
  }
  virtual SymNode gt(const SymNode& other) {
    TORCH_CHECK(false, "NYI");
  }
  virtual SymNode lt(const SymNode& other) {
    TORCH_CHECK(false, "NYI");
  }
  virtual SymNode le(const SymNode& other) {
    TORCH_CHECK(false, "NYI");
  }
  virtual SymNode ge(const SymNode& other) {
    TORCH_CHECK(false, "NYI");
  }
  virtual SymNode ceil() {
    TORCH_CHECK(false, "NYI");
  }
  virtual SymNode floor() {
    TORCH_CHECK(false, "NYI");
  }
  virtual SymNode neg() {
    TORCH_CHECK(false, "NYI");
  };
  virtual SymNode sym_min(const SymNode& other) {
    TORCH_CHECK(false, "NYI");
  };
  virtual SymNode sym_max(const SymNode& other) {
    TORCH_CHECK(false, "NYI");
  };
  virtual SymNode sym_or(const SymNode& other) {
    TORCH_CHECK(false, "NYI");
  };
  virtual SymNode sym_and(const SymNode& other) {
    TORCH_CHECK(false, "NYI");
  };
  virtual SymNode sym_not() {
    TORCH_CHECK(false, "NYI");
  };
  virtual SymNode sym_ite(const SymNode& then_val, const SymNode& else_val) {
    TORCH_CHECK(false, "NYI");
  };
  // NB: self is ignored here, only the arguments are used
  virtual SymNode is_contiguous(
      ArrayRef<SymNode> sizes,
      ArrayRef<SymNode> strides) {
    TORCH_CHECK(false, "NYI");
  };
  virtual SymNode is_channels_last_contiguous_2d(
      ArrayRef<SymNode> sizes,
      ArrayRef<SymNode> strides) {
    TORCH_CHECK(false, "NYI");
  };
  virtual SymNode is_channels_last_contiguous_3d(
      ArrayRef<SymNode> sizes,
      ArrayRef<SymNode> strides) {
    TORCH_CHECK(false, "NYI");
  };
  virtual SymNode is_channels_last_strides_2d(
      ArrayRef<SymNode> sizes,
      ArrayRef<SymNode> strides) {
    TORCH_CHECK(false, "NYI");
  };
  virtual SymNode is_channels_last_strides_3d(
      ArrayRef<SymNode> sizes,
      ArrayRef<SymNode> strides) {
    TORCH_CHECK(false, "NYI");
  };
  virtual SymNode is_non_overlapping_and_dense(
      ArrayRef<SymNode> sizes,
      ArrayRef<SymNode> strides) {
    TORCH_CHECK(false, "NYI");
  };
  virtual SymNode clone() {
    TORCH_CHECK(false, "NYI");
  };
  virtual SymNode sym_float() {
    TORCH_CHECK(false, "NYI");
  }
  virtual SymNode wrap_int(int64_t num) {
    TORCH_CHECK(false, "NYI");
  };
  virtual SymNode wrap_float(double num) {
    TORCH_CHECK(false, "NYI");
  };
  virtual SymNode wrap_bool(bool num) {
    TORCH_CHECK(false, "NYI");
  };
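  // The guard_* methods extract a concrete value, installing a guard if the
  // node is symbolic; file/line record where in the user program the guard
  // was added.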
  virtual int64_t guard_int(const char* file, int64_t line) {
    TORCH_CHECK(false, "NYI");
  };
  virtual bool guard_bool(const char* file, int64_t line) {
    TORCH_CHECK(false, "NYI");
  };
  virtual double guard_float(const char* file, int64_t line) {
    TORCH_CHECK(false, "NYI");
  };
  virtual bool guard_size_oblivious(const char* file, int64_t line) {
    // No improvement for unbacked SymBools by default, replace this
    // with a better implementation!
    return guard_bool(file, line);
  }
  virtual bool expect_true(const char* file, int64_t line) {
    // No improvement for unbacked SymBools by default, replace this
    // with a better implementation!
    return guard_bool(file, line);
  };
  virtual bool expect_size(const char* file, int64_t line) {
    // No improvement for unbacked SymInts by default, replace this
    // with a better implementation!
    return ge(wrap_int(0))->guard_bool(file, line);
  };
  virtual int64_t int_() {
    TORCH_CHECK(false, "NYI");
  };
  virtual bool bool_() {
    TORCH_CHECK(false, "NYI");
  };
  virtual bool has_hint() {
    TORCH_CHECK(false, "NYI");
  };
  virtual std::string str() {
    TORCH_CHECK(false, "NYI");
  };
  virtual std::optional<int64_t> nested_int() {
    return c10::nullopt;
  }
  virtual std::optional<int64_t> nested_int_coeff() {
    return c10::nullopt;
  }
  virtual std::optional<int64_t> constant_int() {
    return c10::nullopt;
  }
  virtual std::optional<bool> constant_bool() {
    return c10::nullopt;
  }
  virtual std::optional<int64_t> maybe_as_int() {
    return c10::nullopt;
  }
  virtual bool is_constant() {
    return false;
  }
  virtual bool is_symbolic() {
    return true;
  }
  std::ostream& operator<<(std::ostream& os) {
    os << str();
    return os;
  }
};
} // namespace c10