pytorch/torch/_dynamo
Yanbo Liang 3916d729c8 [Dynamo] tensor.type() should return tensor types with CPU and GPU variants (#90021)
Fix errors from [7k github models](https://github.com/pytorch/torchdynamo/issues/1884)
```
Traceback (most recent call last):
  File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/utils.py", line 1062, in get_fake_value
    return wrap_fake_exception(
  File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/utils.py", line 739, in wrap_fake_exception
    return fn()
  File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/utils.py", line 1063, in <lambda>
    lambda: run_node(tx.output, node, args, kwargs, nnmodule)
  File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/utils.py", line 1112, in run_node
    raise RuntimeError(
RuntimeError: Failed running call_function <function einsum at 0x7fd8f246a4c0>(*('i,j->ij', FakeTensor(FakeTensor(..., device='meta', size=(4,)), cpu), FakeTensor(FakeTensor(..., device='meta', size=(2,)), cuda:0)), **{}):
Unhandled FakeTensor Device Propagation for aten.mul.Tensor, found two different devices cpu, cuda:0
(scroll up for backtrace)
```
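
A hypothetical reduction of the failing pattern (the shapes, the branch on `tensor.type()`, and the `dynamo.optimize("eager")` wrapper are illustrative, not the actual model code from the 7k-models suite):

```python
import torch
import torch._dynamo as dynamo


def outer_product(x):
    # A companion vector is allocated on a device chosen by inspecting
    # x.type(); if tracing reports the wrong type string for a CUDA input,
    # `y` lands on CPU and the traced einsum mixes cpu/cuda devices.
    device = "cuda" if x.type() == "torch.cuda.FloatTensor" else "cpu"
    y = torch.ones(4, device=device)
    return torch.einsum("i,j->ij", y, x)


if torch.cuda.is_available():
    compiled = dynamo.optimize("eager")(outer_product)
    compiled(torch.randn(2, device="cuda"))
```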

The root cause: `tensor.type()` should return `torch.cuda.FloatTensor` rather than `torch.FloatTensor` when the tensor is on a GPU.
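
For reference, eager `tensor.type()` already distinguishes the two variants, which is what the traced result is expected to report as well; a quick check, assuming a CUDA device is available:

```python
import torch

print(torch.randn(4).type())                     # torch.FloatTensor
if torch.cuda.is_available():
    print(torch.randn(2, device="cuda").type())  # torch.cuda.FloatTensor
```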

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90021
Approved by: https://github.com/jansel
2022-12-02 18:57:43 +00:00
| Name | Latest commit | Date |
| --- | --- | --- |
| optimizations | Reland "Dynamo, FX, Inductor Progress Bars (#88384)" … (#90055) | 2022-12-02 13:28:00 +00:00 |
| variables | [Dynamo] tensor.type() should return tensor types with CPU and GPU variants (#90021) | 2022-12-02 18:57:43 +00:00 |
| __init__.py | Add torch._dynamo to docs (#89510) | 2022-11-23 16:33:13 +00:00 |
| allowed_functions.py | Graph-break on FSDP in dynamo (#87420) | 2022-10-25 17:07:44 +00:00 |
| bytecode_analysis.py | Fix line numbers bug (#87247) | 2022-10-19 22:44:01 +00:00 |
| bytecode_transformation.py | Fix line numbers bug (#87247) | 2022-10-19 22:44:01 +00:00 |
| codegen.py | [dynamo] Port all pytorch/dynamo and test/dynamo pieces over from symbolic-shapes branch (#88768) | 2022-11-13 04:50:21 +00:00 |
| config.py | add env/config flag to disable dynamo (#89828) | 2022-11-30 01:59:44 +00:00 |
| convert_frame.py | Type torch._dynamo.guards (#89919) | 2022-12-01 13:43:10 +00:00 |
| debug_utils.py | Add arguments to collect_results (#89611) | 2022-11-30 04:25:33 +00:00 |
| eval_frame.py | Disable dynamo on optimizer lazy initialization (#89902) | 2022-12-02 01:15:11 +00:00 |
| exc.py | Fix all references to torchdynamo from the merge (#87731) | 2022-10-31 06:51:07 +00:00 |
| guards.py | Type torch._dynamo.guards (#89919) | 2022-12-01 13:43:10 +00:00 |
| logging.py | Reland "Dynamo, FX, Inductor Progress Bars (#88384)" … (#90055) | 2022-12-02 13:28:00 +00:00 |
| mutation_guard.py | | |
| output_graph.py | Type torch._dynamo.guards (#89919) | 2022-12-01 13:43:10 +00:00 |
| profiler.py | | |
| replay_record.py | | |
| resume_execution.py | | |
| side_effects.py | [dynamo] mutable local caching to make dynamo faster at tracing mutation (#89170) | 2022-11-19 01:47:48 +00:00 |
| skipfiles.py | Disable optimizer tracing, enable for tests only (#89500) | 2022-11-24 04:15:34 +00:00 |
| source.py | Implement guard_source on RandomValueSource (#89711) | 2022-11-28 00:32:48 +00:00 |
| symbolic_convert.py | Cache guards once per variable tracker, rather than re-propagating them repeatedly (#89827) | 2022-12-02 01:45:05 +00:00 |
| test_case.py | [dynamo] Unify raise_on_* config to suppress_errors and raise by default (#87440) | 2022-10-21 17:03:29 +00:00 |
| test_minifier_common.py | Add comprehensive minifier tests (#88022) | 2022-11-17 02:02:29 +00:00 |
| testing.py | [dashboard][huggingface] skip accuracy checks for really large models… (#89273) | 2022-11-19 00:22:45 +00:00 |
| types.py | Type torch._dynamo.guards (#89919) | 2022-12-01 13:43:10 +00:00 |
| utils.py | [dynamo][benchmarks] Call zero grad (#90026) | 2022-12-02 04:05:57 +00:00 |