pytorch/test/distributed
Pritam Damania ad260ae7fd Disable test_joing_running_workers for TSAN. (#46966)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46966

These tests had false positives in TSAN for writes to thread-local
variables: TSAN reports a race between the thread-local write in
at::GradMode::set_enabled on one thread and the deallocation of a
finished thread's TLS block on another:

```
WARNING: ThreadSanitizer: data race (pid=5364)
  Write of size 8 at 0x7b2c0004ff70 by thread T2:
    #0 free <null> (libtools_build_sanitizers_tsan-py.so+0xde6ad)
    #1 __GI__dl_deallocate_tls

  Previous write of size 1 at 0x7b2c0004ff71 by thread T3:
    #0 at::GradMode::set_enabled(bool) caffe2/aten/src/ATen/core/grad_mode.cpp:20 (libcaffe2_ATen-core.so+0x40e013)
    #1 torch::autograd::set_grad_enabled(_object*, _object*) caffe2/torch/csrc/autograd/init.cpp:143 (libcaffe2__C_impl_cuda.so+0x115ef0e)
    #2 _PyMethodDef_RawFastCallKeywords

  Thread T3 (tid=5385, finished) created by main thread at:
    #0 pthread_create <null> (libtools_build_sanitizers_tsan-py.so+0xc5a86)
    #1 PyThread_start_new_thread
```
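Since the race is a false positive rather than a real bug, the fix is to skip the affected test when running under TSAN. The sketch below shows one way such a gate could be wired up; the `PYTORCH_TEST_WITH_TSAN` environment variable name and the test body are assumptions for illustration, not the exact mechanism used in the PR.

```python
import os
import unittest

# Assumption: a TSAN build exports an environment variable marking it as
# such; real PyTorch test infrastructure may detect TSAN differently.
TEST_WITH_TSAN = os.environ.get("PYTORCH_TEST_WITH_TSAN", "0") == "1"


class JoinTest(unittest.TestCase):
    @unittest.skipIf(
        TEST_WITH_TSAN,
        "TSAN falsely reports a race between a thread-local write "
        "and TLS teardown of a finished thread",
    )
    def test_join_running_workers(self):
        # Placeholder body standing in for the real worker-join test.
        self.assertTrue(True)
```

In a normal (non-TSAN) build the test runs as usual; in a TSAN build the decorator marks it skipped instead of tripping the sanitizer.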
ghstack-source-id: 115330433

Test Plan: waitforbuildbot

Reviewed By: mrshenli

Differential Revision: D24584411

fbshipit-source-id: e35f704dfcb7b161a13a4902beaf8b1e41ccd596
2020-10-28 19:28:04 -07:00
_pipeline/sync              Disable test_joing_running_workers for TSAN. (#46966)                                      2020-10-28 19:28:04 -07:00
algorithms/ddp_comm_hooks   add skip_if_rocm to all requires_nccl tests (#45158)                                       2020-09-24 08:37:49 -07:00
nn/jit
rpc
test_c10d.py                torch.nn.modules.LazyModuleMixin and torch.nn.LazyLinear (Shape Inference II) (#44538)    2020-10-19 13:13:54 -07:00
test_c10d_spawn.py          Workaround for bug in DistributedDataParallel (#46186)                                     2020-10-13 07:34:02 -07:00
test_data_parallel.py       Enable DataParallel to run zero input Module (#46565)                                      2020-10-22 18:04:33 -07:00
test_distributed_fork.py    Remove py2 compatible future imports (#44735)                                              2020-09-16 12:55:57 -07:00
test_distributed_spawn.py   Remove py2 compatible future imports (#44735)                                              2020-09-16 12:55:57 -07:00
test_nccl.py