pytorch/test/distributed/checkpoint
Daulet Askarov 50567f7081 Pass device to is_pinned call inside TensorProperties.create_from_tensor (#128896)
Summary:
The default device for the `is_pinned` function is CUDA. This can unnecessarily create a CUDA context for CPU tensors when merely generating `TensorProperties`, bloating memory usage. Passing the tensor's device to the `is_pinned` call site inside `create_from_tensor` resolves this issue.

This also fixes the Model Store test
https://www.internalfb.com/intern/test/844425019931542?ref_report_id=0
which is currently broken on its memory-usage assertions.

Test Plan: UT

Differential Revision: D58695006

Pull Request resolved: https://github.com/pytorch/pytorch/pull/128896
Approved by: https://github.com/fegin
2024-06-19 08:50:46 +00:00
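The fix described above can be illustrated with a minimal, self-contained sketch. This is not the actual PyTorch code: `CudaBackend` and this toy `is_pinned` are hypothetical stand-ins that simulate how a device-less pinned-memory query can lazily initialize an accelerator context, and how passing the tensor's own device avoids that.

```python
class CudaBackend:
    """Hypothetical stand-in for a lazily initialized CUDA runtime.

    In the real issue, merely querying pinned-memory status without a
    device could force CUDA context creation, bloating memory usage.
    """
    context_created = False

    @classmethod
    def is_pinned(cls, data):
        # Touching the backend initializes the (simulated) CUDA context.
        cls.context_created = True
        return False


def is_pinned(tensor, device=None):
    # Mimics the described default: with no device given, the query is
    # routed to the accelerator backend even for CPU tensors.
    if device is None or device == "cuda":
        return CudaBackend.is_pinned(tensor)
    return False  # CPU tensors are never pinned in this toy model


# Before the fix: create_from_tensor queried pinned status with no device,
# spuriously creating a CUDA context for a plain CPU tensor.
is_pinned([1.0, 2.0])
assert CudaBackend.context_created

CudaBackend.context_created = False

# After the fix: the tensor's own device is passed to the call site,
# so the CPU path never touches the CUDA backend.
is_pinned([1.0, 2.0], device="cpu")
assert not CudaBackend.context_created
```

In the sketch, the only behavioral change is threading the device through to the query, which matches the one-line nature of the fix in the commit title.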
e2e
fsdp
test_checkpoint.py
test_compatibility.py
test_dedup_tensors.py
test_dtensor_checkpoint.py
test_dtensor_resharding.py
test_file_system_checkpoint.py
test_file_system_checkpoint_cpu.py
test_format_utils.py
test_fsdp_model_state.py
test_fsdp_optim_state.py
test_fsdp_tp_checkpoint_conversion.py
test_fsspec.py
test_hsdp_checkpoint.py
test_nested_dict.py
test_planner.py
test_save_load_api.py
test_state_dict.py [DSD] Add unittest to verify HSDP1 + broadcast_from_rank0 (#128755) 2024-06-18 19:42:51 +00:00
test_state_dict_utils.py
test_tp_checkpoint.py
test_traverse.py [BE]: Enable ruff TCH rules and autofixes for better imports (#127688) 2024-06-06 16:55:58 +00:00
test_utils.py Pass device to is_pinned call inside TensorProperties.create_from_tensor (#128896) 2024-06-19 08:50:46 +00:00