mirror of https://github.com/saymrwulf/transformers.git
synced 2026-05-14 20:58:08 +00:00
[doc] fix some typos and add xpu to the testing documentation (#29894)
fix typo
This commit is contained in:
parent 22d159ddf9
commit 7c19fafe44
2 changed files with 11 additions and 12 deletions
@@ -168,7 +168,7 @@ pytest -k "ada and not adam" tests/test_optimization.py
For example to run both `test_adafactor` and `test_adam_w` you can use:

```bash
-pytest -k "test_adam_w or test_adam_w" tests/test_optimization.py
+pytest -k "test_adafactor or test_adam_w" tests/test_optimization.py
```

Note that we use `or` here, since we want either of the keywords to match to include both.
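As an aside (hypothetical stand-ins, not part of this commit), a quick sketch of how the two keyword expressions in this hunk select tests purely by name:

```python
# Minimal stand-ins for the real tests in tests/test_optimization.py, used only
# to illustrate pytest's -k name matching.
def test_adafactor():
    # Selected by -k "ada and not adam" and by -k "test_adafactor or test_adam_w".
    assert True


def test_adam_w():
    # Excluded by -k "ada and not adam" (the name contains "adam"),
    # but still selected by -k "test_adafactor or test_adam_w".
    assert True
```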
@@ -457,7 +457,7 @@ Let's depict the GPU requirements in the following table:


| n gpus | decorator |
-|--------+--------------------------------|
+|--------|--------------------------------|
| `>= 0` | `@require_torch` |
| `>= 1` | `@require_torch_gpu` |
| `>= 2` | `@require_torch_multi_gpu` |
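For context, a minimal sketch (hypothetical tests, not part of this commit) of how the decorators in this table are typically applied, assuming they are imported from `transformers.testing_utils`:

```python
import unittest

from transformers.testing_utils import require_torch, require_torch_gpu, require_torch_multi_gpu


class ExampleDeviceTests(unittest.TestCase):
    @require_torch
    def test_runs_with_any_setup(self):
        # Runs whenever torch is installed, with or without a GPU.
        self.assertTrue(True)

    @require_torch_gpu
    def test_needs_at_least_one_gpu(self):
        # Skipped automatically unless at least one GPU is available.
        self.assertTrue(True)

    @require_torch_multi_gpu
    def test_needs_multiple_gpus(self):
        # Skipped automatically unless two or more GPUs are available.
        self.assertTrue(True)
```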
@@ -518,21 +518,21 @@ To run the test suite on a specific torch device add `TRANSFORMERS_TEST_DEVICE="
TRANSFORMERS_TEST_DEVICE="cpu" pytest tests/utils/test_logging.py
```

-This variable is useful for testing custom or less common PyTorch backends such as `mps`. It can also be used to achieve the same effect as `CUDA_VISIBLE_DEVICES` by targeting specific GPUs or testing in CPU-only mode.
+This variable is useful for testing custom or less common PyTorch backends such as `mps`, `xpu` or `npu`. It can also be used to achieve the same effect as `CUDA_VISIBLE_DEVICES` by targeting specific GPUs or testing in CPU-only mode.

Certain devices will require an additional import after importing `torch` for the first time. This can be specified using the environment variable `TRANSFORMERS_TEST_BACKEND`:

```bash
TRANSFORMERS_TEST_BACKEND="torch_npu" pytest tests/utils/test_logging.py
```
-Alternative backends may also require the replacement of device-specific functions. For example `torch.cuda.manual_seed` may need to be replaced with a device-specific seed setter like `torch.npu.manual_seed` to correctly set a random seed on the device. To specify a new backend with backend-specific device functions when running the test suite, create a Python device specification file in the format:
+Alternative backends may also require the replacement of device-specific functions. For example `torch.cuda.manual_seed` may need to be replaced with a device-specific seed setter like `torch.npu.manual_seed` or `torch.xpu.manual_seed` to correctly set a random seed on the device. To specify a new backend with backend-specific device functions when running the test suite, create a Python device specification file `spec.py` in the format:

-```
+```python
import torch
-import torch_npu
+import torch_npu # for xpu, replace it with `import intel_extension_for_pytorch`
# !! Further additional imports can be added here !!

-# Specify the device name (eg. 'cuda', 'cpu', 'npu')
+# Specify the device name (eg. 'cuda', 'cpu', 'npu', 'xpu', 'mps')
DEVICE_NAME = 'npu'

# Specify device-specific backends to dispatch to.
@@ -541,11 +541,10 @@ MANUAL_SEED_FN = torch.npu.manual_seed
EMPTY_CACHE_FN = torch.npu.empty_cache
DEVICE_COUNT_FN = torch.npu.device_count
```
-This format also allows for specification of any additional imports required. To use this file to replace equivalent methods in the test suite, set the environment variable `TRANSFORMERS_TEST_DEVICE_SPEC` to the path of the spec file.
+This format also allows for specification of any additional imports required. To use this file to replace equivalent methods in the test suite, set the environment variable `TRANSFORMERS_TEST_DEVICE_SPEC` to the path of the spec file, e.g. `TRANSFORMERS_TEST_DEVICE_SPEC=spec.py`.

Currently, only `MANUAL_SEED_FN`, `EMPTY_CACHE_FN` and `DEVICE_COUNT_FN` are supported for device-specific dispatch.
-

### Distributed training

`pytest` can't deal with distributed training directly. If this is attempted - the sub-processes don't do the right
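Putting the two hunks above together, a minimal sketch of what a complete `spec.py` might look like for an XPU device (illustrative only: it assumes `intel_extension_for_pytorch` is installed and that the `torch.xpu.*` handles are the ones the backend exposes):

```python
# spec.py - illustrative device specification file for an XPU backend.
import torch
import intel_extension_for_pytorch  # registers the 'xpu' device with torch
# !! Further additional imports can be added here !!

# Specify the device name (eg. 'cuda', 'cpu', 'npu', 'xpu', 'mps')
DEVICE_NAME = "xpu"

# Specify device-specific backends to dispatch to.
MANUAL_SEED_FN = torch.xpu.manual_seed
EMPTY_CACHE_FN = torch.xpu.empty_cache
DEVICE_COUNT_FN = torch.xpu.device_count
```

An illustrative invocation following the sentence above would then be `TRANSFORMERS_TEST_DEVICE_SPEC=spec.py pytest tests/utils/test_logging.py`; for the `TRANSFORMERS_TEST_BACKEND` route, the comment in the hunk suggests swapping `torch_npu` for `intel_extension_for_pytorch` on xpu.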
@@ -579,7 +578,7 @@ pytest -s tests/utils/test_logging.py
To send test results to JUnit format output:

```bash
-py.test tests --junitxml=result.xml
+pytest tests --junitxml=result.xml
```

### Color control

@@ -792,13 +792,13 @@ def require_torch_xpu(test_case):

def require_torch_multi_xpu(test_case):
    """
-    Decorator marking a test that requires a multi-XPU setup with IPEX and atleast one XPU device. These tests are
+    Decorator marking a test that requires a multi-XPU setup with IPEX and at least one XPU device. These tests are
    skipped on a machine without IPEX or multiple XPUs.

    To run *only* the multi_xpu tests, assuming all test names contain multi_xpu: $ pytest -sv ./tests -k "multi_xpu"
    """
    if not is_torch_xpu_available():
-        return unittest.skip("test requires IPEX and atleast one XPU device")(test_case)
+        return unittest.skip("test requires IPEX and at least one XPU device")(test_case)

    return unittest.skipUnless(torch.xpu.device_count() > 1, "test requires multiple XPUs")(test_case)

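A minimal sketch (hypothetical test, not part of this commit) of how the decorator above is typically used, assuming it is exposed via `transformers.testing_utils`:

```python
import unittest

import torch

from transformers.testing_utils import require_torch_multi_xpu


class ExampleMultiXpuTests(unittest.TestCase):
    @require_torch_multi_xpu
    def test_multi_xpu_device_count(self):
        # Only reached when IPEX is available and more than one XPU is visible;
        # otherwise the decorator skips the test.
        self.assertGreater(torch.xpu.device_count(), 1)
```

Since the test name contains `multi_xpu`, it is also picked up by `pytest -sv ./tests -k "multi_xpu"`, as the docstring suggests.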