Mirror of https://github.com/saymrwulf/pytorch.git (synced 2026-05-14 20:57:59 +00:00)
# Motivation

Fix https://github.com/pytorch/pytorch/issues/138577.

# Solution

1. All UTs in `test/inductor/test_compiled_optimizers.py` are fixed by https://github.com/pytorch/pytorch/pull/134170.
2. The UT in `test/inductor/test_pattern_matcher.py` was introduced by https://github.com/pytorch/pytorch/pull/138089; we skip it because the feature `max_autotune_gemm_backends:Triton` is unsupported.
3. We have a new implementation related to `histc`, so we remove its expected failure from `test/inductor/test_torchinductor_opinfo.py`.
4. We now support `avg_pool3d` for the `fp16` data type, so we remove its expected failure from `test/inductor/test_torchinductor_opinfo.py`.
5. CUDA-biased code was introduced by https://github.com/pytorch/pytorch/issues/138472; we generalize it to `GPU_TYPE`.

# Additional Context

> Why update the torch-xpu-ops commit pin here?

We have to update the commit pin to avoid the build failure caused by the [C10_UNUSED](https://github.com/pytorch/pytorch/pull/138364) code change.

> What do the torch-xpu-ops updates include?

1. Add some foreach ops, such as the `unary ops` and `foreach_clamp_max`;
2. Add some pooling ops, forward and backward, such as `avg_pool3d` and `max_pool3d`;
3. Add some other ops, such as `log_normal_`, `index_copy`, and `mode`;
4. Fix the build failure related to `C10_UNUSED`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138548
Approved by: https://github.com/malfet, https://github.com/EikanWang
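Skipping a unit test for an unsupported backend feature, as described for `test/inductor/test_pattern_matcher.py` above, typically follows the standard `unittest` pattern. The sketch below is a generic, hypothetical illustration; the flag `XPU_UNSUPPORTED`, the class name, and the test name are assumptions for this example, not PyTorch's actual test code:

```python
import unittest

# Hypothetical capability flag: True when the backend cannot run
# max_autotune_gemm_backends:Triton (an assumption for illustration).
XPU_UNSUPPORTED = True


class PatternMatcherTests(unittest.TestCase):
    @unittest.skipIf(
        XPU_UNSUPPORTED,
        "max_autotune_gemm_backends:Triton is not supported on this backend",
    )
    def test_mm_plus_mm(self):
        # Would exercise the pattern matcher; never reached when skipped.
        self.fail("should not run when the feature is unsupported")


# Running the suite reports the test as skipped rather than failed.
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(PatternMatcherTests)
)
```

Skipping (instead of deleting or expecting failure) keeps the test visible in reports and lets it run again automatically once the backend gains the feature.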
This folder contains vendored copies of third-party libraries that we use.

Contents (pinned commits shown as `name@hash` where applicable):

- benchmark@0d98dba29d
- composable_kernel@11b7a4db00
- cpp-httplib@3b6597bba9
- cpuinfo@1e83a2fdd3
- cudnn_frontend@2533f5e5c1
- cutlass@bbe579a9e3
- eigen@3147391d94
- fbgemm@dbc3157bf2
- flatbuffers@01834de25e
- fmt@0c9fce2ffe
- FP16@4dfe081cf6
- FXdiv@b408327ac2
- gemmlowp
- gloo@5354032ea0
- googletest@e2239ee604
- ideep@41d636c2bb
- ittapi@5b8a7d7422
- kineto@ed052ea024
- mimalloc@b66e3214d8
- miniz-2.1.0
- nccl
- nlohmann@87cda1d664
- NNPACK@c07e3a0400
- NVTX@e170594ac7
- onnx@3bf92c03a9
- opentelemetry-cpp@a799f4aed9
- pocketfft@9d3ab05a7f
- protobuf@d1eca4e4b4
- psimd@072586a71b
- pthreadpool@4fe0e1e183
- pybind11@a2e59f0e70
- python-peachpy@f45429b087
- sleef@60e76d2bce
- tensorflow_cuda_bazel_build/cuda
- tensorpipe@52791a2fd2
- valgrind-headers
- VulkanMemoryAllocator@a6bfc23725
- XNNPACK@87ee0b46b8
- BUCK.oss
- BUILD
- build_bundled.py
- cpp-httplib.BUILD
- cuda.BUILD
- cudnn.BUILD
- cudnn_frontend.BUILD
- cutlass.BUILD
- eigen.BUILD
- fmt.BUILD
- generate-cpuinfo-wrappers.py
- generate-xnnpack-wrappers.py
- glog.buck.bzl
- gloo.BUILD
- ideep.BUILD
- kineto.buck.bzl
- kineto.BUILD
- LICENSES_BUNDLED.txt
- METADATA.bzl
- mkl-dnn.BUILD
- mkl.BUILD
- mkl_headers.BUILD
- nlohmann.BUILD
- onnx.BUILD
- opentelemetry-cpp.BUILD
- README.md
- sleef.BUILD
- sleef.bzl
- substitution.bzl
- tensorpipe.BUILD
- xnnpack.buck.bzl
- xnnpack_src_defs.bzl
- xnnpack_wrapper_defs.bzl
- xpu.txt