Commit graph

31 commits

Author SHA1 Message Date
Edward Yang
b4a35632f9 Add function to materialize COW storages (#117053)
Summary: From Kurt Mohler, see https://github.com/pytorch/pytorch/pull/113396 (manually imported due to ghimport problems)

Test Plan: sandcastle, OSS CI

Differential Revision: D52610522

Pull Request resolved: https://github.com/pytorch/pytorch/pull/117053
Approved by: https://github.com/malfet, https://github.com/kurtamohler
2024-01-10 15:34:16 +00:00
PyTorch MergeBot
f36d09fcb7 Revert "Add function to materialize COW storages (#113396)"
This reverts commit e2f090086b.

Reverted https://github.com/pytorch/pytorch/pull/113396 on behalf of https://github.com/DanilBaibak due to Break internal build ([comment](https://github.com/pytorch/pytorch/pull/113396#issuecomment-1818769090))
2023-11-20 10:26:01 +00:00
Kurt Mohler
e2f090086b Add function to materialize COW storages (#113396)
Part of #109833

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113396
Approved by: https://github.com/ezyang
2023-11-17 01:58:51 +00:00
feifan
c73da67d46 new_qtensor support privateuseone allocator. (#111464)
I want to create a quantized tensor through `PerTensorAffineQuantizer`, but found that it throws an error because there is no check for PrivateUse1.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/111464
Approved by: https://github.com/ezyang
2023-11-01 05:16:58 +00:00
FFFrog
68cb854d73 Fix CPUFallback Mechanism on TensorList Type (#105209)
Fixes #104965

Currently, the CPU fallback mechanism lacks handling for the TensorList type, so operators like `_foreach_add_`/`_foreach_add` don't work correctly.
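A quick CPU-only illustration of the op family involved (the fallback path itself only kicks in for an out-of-tree backend; this just shows the TensorList-style call):

```python
import torch

# _foreach_* operators take a TensorList argument; this is the argument type
# the CPU fallback previously did not handle for custom backends.
params = [torch.ones(3), torch.ones(3)]
torch._foreach_add_(params, 1.0)   # in-place add over the whole list
print(params[0])                   # tensor([2., 2., 2.])
```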

cc  @bdhirsh

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105209
Approved by: https://github.com/bdhirsh
2023-08-05 15:38:30 +00:00
FFFrog
ae4b2d272f Fix the test of duplicate registration on generator (#106536)
The duplicate registration test case referenced below has always failed.
3d165dc3f3/test/test_cpp_extensions_open_device_registration.py (L171-L173)

3d165dc3f3/aten/src/ATen/core/GeneratorForPrivateuseone.h (L36-L37)

Because there is a static variable inside the `self.module.register_generator()` function, it is only initialized once, so the second registration never raises the expected duplicate-registration error.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106536
Approved by: https://github.com/albanD
2023-08-04 16:09:40 +00:00
Brian Hirsh
4a549dd57a AOTAutograd: correctness fix when tracing custom autograd functions that alias inputs (#102992)
Fixes https://github.com/pytorch/pytorch/issues/102970. See the comment [here](https://github.com/pytorch/pytorch/issues/102970#issuecomment-1577223773) for details.

We normally treat "outputs that alias inputs" specially in AOTAutograd, by replaying the views at runtime, instead of baking them into the graph. For views that are part of custom autograd functions though, we can't do that view-replay, since it will clobber the backwards function that the user specified in their custom autograd.Function.

Right now in this PR, I distinguish between "aliased inputs that are normal views" vs. "aliased inputs that are views that came from an autograd.Function call" by checking the output's `.grad_fn` field, to see if it inherits from our custom CBackward function class. Then I added a new `OutputType` enum value, which we effectively treat the "normal" way (the same way that we treat ordinary, non-aliased outputs). The new enum value is mostly for debugging: we can print it and know that our graph had custom autograd.Function aliased outputs in it.
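For reference, a minimal eager-mode sketch of the pattern being discussed, a custom autograd.Function whose output is a view of its input (names here are illustrative, not from the PR):

```python
import torch

class MyView(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.view(-1)              # the output aliases the input

    @staticmethod
    def backward(ctx, grad_out):
        # User-defined backward that a plain view-replay would bypass.
        return 2 * grad_out.view(2, 2)

x = torch.randn(2, 2, requires_grad=True)
out = MyView.apply(x)                  # out shares storage with x
out.sum().backward()
print(x.grad)                          # all 2s: the custom backward ran
```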

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102992
Approved by: https://github.com/ezyang, https://github.com/zou3519
2023-07-31 19:02:12 +00:00
shibo19
7047d132fd add context support for custom device (#105056)
Fixes #ISSUE_NUMBER
As the title says, add context support for custom devices, along with a test case.
In the future, we may want to refactor these hooks across devices to unify the APIs; would you agree with this
idea? @albanD
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105056
Approved by: https://github.com/albanD
2023-07-29 12:56:03 +00:00
Bug Hunter Yan
b7777c812e extend serialization for tensor metadata (#99808)
Fixes #ISSUE_NUMBER
Add serialization logic for backend metadata to tensor serialization, implemented through custom registration functions.

In #97429, the BackendMeta structure was added to TensorImpl, and we think this information may also need to be serialized for custom backends.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99808
Approved by: https://github.com/ezyang, https://github.com/huydhn
2023-06-14 01:43:21 +00:00
Bug Hunter Yan
0c470b17e3 Extend storage create for custom storageImpl (#100237)
Fixes #ISSUE_NUMBER

For the scenario where users inherit from StorageImpl to implement their own subclasses, the current storage creation path cannot correctly create storage objects.

Following the registration approach used for Allocator, extend storage creation so that users can register their own custom StorageImpl creation.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/100237
Approved by: https://github.com/albanD
2023-05-17 04:30:13 +00:00
PyTorch MergeBot
1272cd73da Revert "extend serialization for tensor metadata (#99808)"
This reverts commit 4b9bc6f2a6.

Reverted https://github.com/pytorch/pytorch/pull/99808 on behalf of https://github.com/izaitsevfb due to Breaks internal builds: ld.lld: error: undefined symbol: torch::jit::GetBackendMetaSerialization() ([comment](https://github.com/pytorch/pytorch/pull/99808#issuecomment-1550071656))
2023-05-16 17:22:25 +00:00
fakeYan
4b9bc6f2a6 extend serialization for tensor metadata (#99808)
Fixes #ISSUE_NUMBER
Add serialization logic for backend metadata to tensor serialization, implemented through custom registration functions.

In #97429, the BackendMeta structure was added to TensorImpl, and we think this information may also need to be serialized for custom backends.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99808
Approved by: https://github.com/ezyang
2023-05-15 19:45:34 +00:00
zhi.cai
bf50180b4a enable dispatch stub for backend PrivateUse1 (#99611)
When extending PyTorch with a new out-of-tree backend, PrivateUse1 is reused, so we also need to support PrivateUse1 in the dispatch stub module.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99611
Approved by: https://github.com/ezyang
2023-05-12 04:02:12 +00:00
XDaoHong
a723f1f2b9 fix _privateuse1_tag problem (#100632)
Fix the `_privateuse1_tag` bug in torch/serialization.py:
add the device_index after the device_type.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/100632
Approved by: https://github.com/ezyang
2023-05-10 09:53:19 +00:00
PyTorch MergeBot
5c14eea1de Revert "extend serialization for tensor metadata (#99808)"
This reverts commit 73dd6f04c9.

Reverted https://github.com/pytorch/pytorch/pull/99808 on behalf of https://github.com/atalman due to breaking internal builds ([comment](https://github.com/pytorch/pytorch/pull/99808#issuecomment-1536823538))
2023-05-05 21:55:52 +00:00
Bug Hunter Yan
73dd6f04c9 extend serialization for tensor metadata (#99808)
Fixes #ISSUE_NUMBER
Add serialization logic for backend metadata to tensor serialization, implemented through custom registration functions.

In #97429, the BackendMeta structure was added to TensorImpl, and we think this information may also need to be serialized for custom backends.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99808
Approved by: https://github.com/ezyang
2023-05-04 20:32:11 +00:00
wbigat
b02aa5e71d [Feature] storage resize_ support custom device. (#99882)
Fixes #99326

Support storage `resize_` for custom devices by calling dispatched tensor operations.
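A small CPU-only sketch of the storage-level `resize_` entry point that this change routes through dispatched ops for custom devices (plain CPU shown just to illustrate the call shape):

```python
import torch

# Grow an untyped storage in place; on a PrivateUse1 device this now goes
# through dispatched tensor operations, per the description above.
s = torch.UntypedStorage(0)
s.resize_(16)          # resize the storage to 16 bytes
print(s.nbytes())      # 16
```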

@ezyang, this PR is another case brought up in issue #99326; please take a moment to review this change.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99882
Approved by: https://github.com/ezyang
2023-04-27 20:18:35 +00:00
wbigat
ee5f09ab80 [Feature] storage pin memory support custom device. (#99712)
Fixes #99326

Support storage `pin_memory` and `is_pinned` for custom devices by calling dispatched tensor operations.
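A hedged sketch of the storage-level pinning APIs involved; pinning needs an accelerator, so the example only exercises it when CUDA is available:

```python
import torch

# On a custom (PrivateUse1) device, pin_memory/is_pinned now route through
# dispatched tensor operations, per the description above.
s = torch.randn(4).untyped_storage()
if torch.cuda.is_available():
    pinned = s.pin_memory()       # returns a pinned copy of the storage
    print(pinned.is_pinned())     # expected: True
```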

@ezyang, this PR is what we discussed in issue #99326; would you please take a moment to review it? Thanks.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99712
Approved by: https://github.com/ezyang
2023-04-21 18:31:01 +00:00
Bug Hunter Yan
2b54d673fc Add custom backend case for storage and automatically generate storage attributes. (#98478)
Currently, storage only supports a subset of backends. We want storage to be creatable on a custom backend via the PrivateUse1 key.
This change also provides easy automatic generation of storage-related attributes:
when a user registers a new backend, the corresponding methods and attributes can be generated automatically.
Run this code:
`torch.utils.rename_privateuse1_backend('foo')`
`torch.utils.generate_storage_for_privateuse1_backend()`
Then the following methods and attributes become available:
`torch.TypedStorage.is_foo`
`torch.TypedStorage.foo()`
`torch.UntypedStorage.is_foo`
`torch.UntypedStorage.foo()`
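A hedged usage sketch of the calls above (API and attribute names as listed in this commit; actual availability depends on the PyTorch version, and doing real work requires a loaded PrivateUse1 backend):

```python
import torch

# Rename the PrivateUse1 backend and generate the storage helpers.
torch.utils.rename_privateuse1_backend('foo')
torch.utils.generate_storage_for_privateuse1_backend()

# The generated attributes now exist on the storage classes.
print(hasattr(torch.UntypedStorage, 'is_foo'))  # expected: True
print(hasattr(torch.TypedStorage, 'foo'))       # expected: True
```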

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98478
Approved by: https://github.com/albanD
2023-04-17 19:18:39 +00:00
fakeYan
668c578083 Automatically generate attributes and methods for custom backends. (#98066)
Fixes #ISSUE_NUMBER
#97593
A new extension mechanism has been added:
when a user registers a new backend, the corresponding methods and attributes can be generated automatically.
Run this code:
`torch.utils.rename_privateuse1_backend('foo')`
`torch.utils.generate_for_privateuse1_backend()`
Then the following methods and attributes become available:
`torch.Tensor.is_foo`
`torch.Tensor.foo()`
`torch.nn.Module.foo()`
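A similar hedged sketch for this commit (API names as given above; availability depends on the PyTorch version):

```python
import torch

torch.utils.rename_privateuse1_backend('foo')
torch.utils.generate_for_privateuse1_backend()

# The generated attributes now exist on Tensor and Module.
t = torch.empty(2, 2)
print(hasattr(t, 'is_foo'))                    # expected: True
print(hasattr(torch.nn.Linear(2, 2), 'foo'))   # expected: True
```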

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98066
Approved by: https://github.com/albanD
2023-04-13 22:04:05 +00:00
PyTorch MergeBot
cb3c478069 Revert "refactor(add privateuseone folder in aten/src/ATen): add a PrivateUse… (#98127)"
This reverts commit 5a537e291d.

Reverted https://github.com/pytorch/pytorch/pull/98127 on behalf of https://github.com/weiwangmeta due to Sorry, our internal code is not ready to take such changes
2023-04-08 05:32:21 +00:00
ykddd
5a537e291d refactor(add privateuseone folder in aten/src/ATen): add a PrivateUse… (#98127)
Add a PrivateUse1 folder under ATen to contain all the feature adaptations for PrivateUse1, for example GetGeneratorPrivate, which is used by third-party backends to register their own Generator implementations. This makes it easier for us to manage these features centrally, and it makes adaptation more convenient for different backend vendors. For more info: https://github.com/pytorch/pytorch/issues/98073

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98127
Approved by: https://github.com/bdhirsh
2023-04-07 03:43:16 +00:00
donnyyou
8a6e28ccd3 Fix typo for generator. (#97136)
Fix typo for generator.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97136
Approved by: https://github.com/Skylion007, https://github.com/kit1980
2023-03-20 20:43:56 +00:00
shibo
7038458c5b Add Generator register for the privateuse1 backend (#93920)
Fixes #92202
Add Generator registration for the `privateuseone` backend

module: backend
Pull Request resolved: https://github.com/pytorch/pytorch/pull/93920
Approved by: https://github.com/bdhirsh
2023-03-07 03:43:23 +00:00
Edward Z. Yang
19e27b1556 Make dispatcher registrations of SymInt functions backwards compatible (#84557)
Previously, when we SymInt-ified a schema, it was a BC-breaking change
for everyone who had registered kernels for that function; they
had to accept c10::SymInt where they previously accepted int64_t.
This is not great.

With this change, I accept old type registrations transparently.  The
idea is in several parts:

- At the registration site, at compile time I have no idea whether or not
  if the function being registered has a SymInt schema or not.  So I
  must defer the exact compatibility check.  What I do instead is
  check if the function pointer registered to me has SymInt in the
  argument or not.  If it does, I assume it is new-style and ensure
  it is also registered to a special sym_ slot on KernelFunction.
  If not, it only goes in the conventional slot.

- At the dispatcher site, I know at compile time whether or not this
  is a SymInt function.  If it is, I check for a sym_ slot on the
  KernelFunction, and preferentially use that.  If no such slot
  exists, I then fall back to the regular slot... but I convert
  all SymInt arguments to int64_t arguments (doing assertions that
  no true symbolic integer was passed.)  I can skip this test entirely
  if the function doesn't have any SymInts in it; in that case I know
  that only the original slot could have been registered. Fortunately,
  both branches of the short circuit typecheck, so I didn't have to
  use SFINAE or if-constexpr to make it work; just a plain if statement
  that I expect the compiler to optimize away.

- Schema validation is now modestly more complicated. There are two parts. First, function schema validation proceeds by checking if the signature in question has any SymInt-like types in it or not. If it does, we do function schema validation against the real types; if it doesn't, we do validation against the fake types (but only for symint; MemoryFormat is always MemoryFormat). Second, cpp signature validation also keeps track of a "symint" cpp signature and a "non-symint" cpp signature. We only compare symint with symint, and non-symint with non-symint. I did not implement checking a conflict between a symint and non-symint cpp signature, though in principle you could try converting the SymInt types to non-SymInt types and doing the comparison that way.

To show it is working, I remove a bunch of c10::asIntArrayRefSlow shims, as the dispatcher is able to insert them automatically now.

I didn't update the Metal registrations (though they can get similar treatment) as OSS CI coverage is insufficient for this case.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Differential Revision: [D39280965](https://our.internmc.facebook.com/intern/diff/D39280965)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84557
Approved by: https://github.com/wconstab
2022-09-07 16:30:21 +00:00
Edward Z. Yang
ad44670fa1 Back out "Revert D38984222: Don't introduce new overload for SymInt (#83628)" (#84173)
Also Back out "Revert D39075159: [acc_tensor] Use SymIntArrayRef for overloaded empty.memory_format's signature"

Original commit changeset: dab4a9dba4fa
Original commit changeset: dcaf16c037a9

Original Phabricator Diff: D38984222
Original Phabricator Diff: D39075159

Also update Metal registrations for C++ registration changes.

Also update NNPI registration to account for tightened schema checking

Differential Revision: [D39084762](https://our.internmc.facebook.com/intern/diff/D39084762/)

**NOTE FOR REVIEWERS**: This PR has internal Facebook specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D39084762/)!
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84173
Approved by: https://github.com/Krovatkin
2022-08-29 18:01:07 +00:00
PyTorch MergeBot
c7edcd6968 Revert "Don't introduce new overload for SymInt (#83628)"
This reverts commit 9790d90e4b.

Reverted https://github.com/pytorch/pytorch/pull/83628 on behalf of https://github.com/malfet due to Breaks internal builds, see D39076487
2022-08-27 01:23:17 +00:00
Edward Z. Yang
9790d90e4b Don't introduce new overload for SymInt (#83628)
Previously, we introduced new SymInt overloads for every function we wanted.  This led to a lot of boilerplate, and also a lot of confusion about how the overloads needed to be implemented.

This PR takes a simpler but more risky approach: just take the original function and changes its ints to SymInts.

This is BC-breaking in the following ways:

* The C++ API for registering implementations for aten operators will change from int64_t to SymInt whenever you make this change. Code generated registrations in PyTorch do not change as codegen handles the translation automatically, but manual registrations will need to follow the change.  Typically, if you now accept a SymInt where you previously only took int64_t, you have to convert it back manually.  This will definitely break XLA, see companion PR https://github.com/pytorch/xla/pull/3914 Note that not all dispatch keys get the automatic translation; all the composite keys and Meta keys are modified to take SymInt directly (because they should handle them directly), and so there are adjustments for this.

This is not BC-breaking in the following ways:

* The user-facing C++ API remains compatible.  Even if a function changes from int to SymInt, the default C++ binding still takes only ints (e.g., at::empty(IntArrayRef, ...)).  To call with SymInts, you must call at::empty_symint instead. This involved adding two more signatures to CppSignatureGroup; in many cases I refactored code to iterate over all signatures in the group instead of hard-coding the two that previously existed.
* This is TorchScript compatible; internally we treat SymInts as ints so there is no change to what happens at runtime in TorchScript. In particular, it's OK to reference an empty schema by its old type (using int types); as long as you're not doing string equality (which you shouldn't be), these parse to the same underlying type.

Structure of the PR:

* The general strategy of this PR is that, even when you write `SymInt` inside `native_functions.yaml`, sometimes, we will treat it *as if* it were an `int`. This idea pervades the codegen changes, where we have a translation from SymInt to c10::SymInt or int64_t, and this is controlled by a symint kwarg which I added and then audited all call sites to decide which I wanted. Here are some of the major places where we pick one or the other:
  * The C++ FunctionSchema representation represents `SymInt` as `int`. There are a few places we do need to know that we actually have a SymInt and we consult `real_type()` to get the real type in this case. In particular:
    * When we do schema validation of C++ operator registration, we must compare against true schema (as the C++ API will provide `c10::SymInt`, and this will only be accepted if the schema is `SymInt`. This is handled with cloneWithRealTypes before we check for schema differences.
    * In `toIValue` argument parsing, we parse against the true schema value. For backwards compatibility reasons, I do still accept ints in many places where Layout/SymInt/etc were expected. (Well, accepting int where SymInt is expected is not BC, it's just the right logic!)
  * In particular, because SymInt never shows up as type() in FunctionSchema, this means that we no longer need a dedicated Tag::SymInt. This is good, because SymInts never show up in mobile anyway.
* Changes to functorch/aten are mostly about tracking changes to the C++ API registration convention. Additionally, since SymInt overloads no longer exist, registrations for SymInt implementations are deleted. In many cases, the old implementations did not properly support SymInts; I did not add any new functionality with this PR, but I did try to annotate with TODOs where there is work to do. Finally, because the signature of the `native::` API changed from int to SymInt, I needed to find alternative APIs for people who were directly calling these functions. Typically, I insert a new dispatch call when perf doesn't matter, or use the `at::compositeexplicitautograd` namespace to handle other cases.
* The change to `make_boxed_from_unboxed_functor.h` is so that we accept a plain IntList IValue anywhere a SymIntList is expected; these are read-only arguments so covariant typing is OK.
* I change how unboxing logic works slightly. Previously, we interpret the C++ type for Layout/etc directly as IntType JIT type, which works well because the incoming IValue is tagged as an integer. Now, we interpret the C++ type for Layout as its true type, e.g., LayoutType (change to `jit_type.h`), but then we accept an int IValue for it anyway. This makes it symmetric with SymInt, where we interpret the C++ type as SymIntType, and then accept SymInt and int IValues for it.
* I renamed the `empty.names` overload to `empty_names` to make it less confusing (I kept mixing it up with the real empty overload)
* I deleted the `empty.SymInt` overload, which ended up killing a pile of functions. (This was originally a separate PR but the profiler expect test was giving me grief so I folded it in.)
* I deleted the LazyDynamicOpsTest tests. These were failing after these changes, and I couldn't figure out why they used to be passing: they make use of `narrow_copy` which didn't actually support SymInts; they were immediately converted to ints.
* I bashed LTC into working. The patches made here are not the end of the story. The big problem is that SymInt translates into Value, but what if you have a list of SymInt? This cannot be conveniently represented in the IR today, since variadic Values are not supported. To work around this, I translate SymInt[] into plain int[] (this is fine for tests because LTC dynamic shapes never actually worked); but this will need to be fixed for proper LTC SymInt support. The LTC codegen also looked somewhat questionable; I added comments based on my code reading.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83628
Approved by: https://github.com/albanD, https://github.com/bdhirsh
2022-08-26 01:35:40 +00:00
PyTorch MergeBot
a7edf71360 Revert "Don't introduce new overload for SymInt (#83628)"
This reverts commit 8fae7027b3.

Reverted https://github.com/pytorch/pytorch/pull/83628 on behalf of https://github.com/malfet due to breaking internal builds, see https://www.internalfb.com/diff/D38984222
2022-08-25 00:49:40 +00:00
Edward Z. Yang
8fae7027b3 Don't introduce new overload for SymInt (#83628)
Previously, we introduced new SymInt overloads for every function we wanted.  This led to a lot of boilerplate, and also a lot of confusion about how the overloads needed to be implemented.

This PR takes a simpler but more risky approach: just take the original function and changes its ints to SymInts.

This is BC-breaking in the following ways:

* The C++ API for registering implementations for aten operators will change from int64_t to SymInt whenever you make this change. Code generated registrations in PyTorch do not change as codegen handles the translation automatically, but manual registrations will need to follow the change.  Typically, if you now accept a SymInt where you previously only took int64_t, you have to convert it back manually.  This will definitely break XLA, see companion PR https://github.com/pytorch/xla/pull/3914 Note that not all dispatch keys get the automatic translation; all the composite keys and Meta keys are modified to take SymInt directly (because they should handle them directly), and so there are adjustments for this.

This is not BC-breaking in the following ways:

* The user-facing C++ API remains compatible.  Even if a function changes from int to SymInt, the default C++ binding still takes only ints (e.g., at::empty(IntArrayRef, ...)).  To call with SymInts, you must call at::empty_symint instead. This involved adding two more signatures to CppSignatureGroup; in many cases I refactored code to iterate over all signatures in the group instead of hard-coding the two that previously existed.
* This is TorchScript compatible; internally we treat SymInts as ints so there is no change to what happens at runtime in TorchScript. In particular, it's OK to reference an empty schema by its old type (using int types); as long as you're not doing string equality (which you shouldn't be), these parse to the same underlying type.

Structure of the PR:

* The general strategy of this PR is that, even when you write `SymInt` inside `native_functions.yaml`, sometimes, we will treat it *as if* it were an `int`. This idea pervades the codegen changes, where we have a translation from SymInt to c10::SymInt or int64_t, and this is controlled by a symint kwarg which I added and then audited all call sites to decide which I wanted. Here are some of the major places where we pick one or the other:
  * The C++ FunctionSchema representation represents `SymInt` as `int`. There are a few places we do need to know that we actually have a SymInt and we consult `real_type()` to get the real type in this case. In particular:
    * When we do schema validation of C++ operator registration, we must compare against true schema (as the C++ API will provide `c10::SymInt`, and this will only be accepted if the schema is `SymInt`. This is handled with cloneWithRealTypes before we check for schema differences.
    * In `toIValue` argument parsing, we parse against the true schema value. For backwards compatibility reasons, I do still accept ints in many places where Layout/SymInt/etc were expected. (Well, accepting int where SymInt is expected is not BC, it's just the right logic!)
  * In particular, because SymInt never shows up as type() in FunctionSchema, this means that we no longer need a dedicated Tag::SymInt. This is good, because SymInts never show up in mobile anyway.
* Changes to functorch/aten are mostly about tracking changes to the C++ API registration convention. Additionally, since SymInt overloads no longer exist, registrations for SymInt implementations are deleted. In many cases, the old implementations did not properly support SymInts; I did not add any new functionality with this PR, but I did try to annotate with TODOs where there is work to do. Finally, because the signature of the `native::` API changed from int to SymInt, I needed to find alternative APIs for people who were directly calling these functions. Typically, I insert a new dispatch call when perf doesn't matter, or use the `at::compositeexplicitautograd` namespace to handle other cases.
* The change to `make_boxed_from_unboxed_functor.h` is so that we accept a plain IntList IValue anywhere a SymIntList is expected; these are read-only arguments so covariant typing is OK.
* I change how unboxing logic works slightly. Previously, we interpret the C++ type for Layout/etc directly as IntType JIT type, which works well because the incoming IValue is tagged as an integer. Now, we interpret the C++ type for Layout as its true type, e.g., LayoutType (change to `jit_type.h`), but then we accept an int IValue for it anyway. This makes it symmetric with SymInt, where we interpret the C++ type as SymIntType, and then accept SymInt and int IValues for it.
* I renamed the `empty.names` overload to `empty_names` to make it less confusing (I kept mixing it up with the real empty overload)
* I deleted the `empty.SymInt` overload, which ended up killing a pile of functions. (This was originally a separate PR but the profiler expect test was giving me grief so I folded it in.)
* I deleted the LazyDynamicOpsTest tests. These were failing after these changes, and I couldn't figure out why they used to be passing: they make use of `narrow_copy` which didn't actually support SymInts; they were immediately converted to ints.
* I bashed LTC into working. The patches made here are not the end of the story. The big problem is that SymInt translates into Value, but what if you have a list of SymInt? This cannot be conveniently represented in the IR today, since variadic Values are not supported. To work around this, I translate SymInt[] into plain int[] (this is fine for tests because LTC dynamic shapes never actually worked); but this will need to be fixed for proper LTC SymInt support. The LTC codegen also looked somewhat questionable; I added comments based on my code reading.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83628
Approved by: https://github.com/albanD, https://github.com/bdhirsh
2022-08-23 22:04:07 +00:00
Brian Hirsh
282de5539d add open device registration test with cpp extensions (#80477)
Adding a test for open device registration using cpp extensions.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80477
Approved by: https://github.com/albanD, https://github.com/malfet
2022-07-12 01:46:16 +00:00