pytorch/docs/source
Alban Desmaison 9cdd1d7e48 Docs module check (#67440)
Summary:
Add a check to make sure we do not add new submodules without documenting them in an rst file.
This is especially important because our doc coverage only runs for modules that are properly listed.

Temporarily removed "torch" from the list to verify that the CI failure looks as expected. EDIT: fixed now

As an example, this is what a CI failure looks like for the top-level torch module:
![image](https://user-images.githubusercontent.com/6359743/139264690-01af48b3-cb2f-4cfc-a50f-975fca0a8140.png)
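The check described above can be sketched as a small helper: enumerate a package's public submodules and flag any that no rst file in the docs source tree mentions. This is a minimal illustration of the idea, not the actual conf.py implementation; the function name `find_undocumented_modules` and its structure are assumptions for this sketch.

```python
import pkgutil
from pathlib import Path


def find_undocumented_modules(package, rst_dir):
    """Return public submodules of `package` not mentioned in any .rst file.

    Hypothetical helper illustrating the kind of coverage check the PR adds;
    the real check in conf.py may differ in naming and matching rules.
    """
    # Collect fully qualified names of public (non-underscore) submodules.
    submodules = {
        f"{package.__name__}.{info.name}"
        for info in pkgutil.iter_modules(package.__path__)
        if not info.name.startswith("_")
    }
    # A module counts as documented if its name appears in any rst file.
    documented = set()
    for rst in Path(rst_dir).glob("**/*.rst"):
        text = rst.read_text()
        documented |= {m for m in submodules if m in text}
    return sorted(submodules - documented)
```

In CI, a non-empty return value would fail the build with a message like the one in the screenshot, prompting the author to list the new submodule in an rst file.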

Pull Request resolved: https://github.com/pytorch/pytorch/pull/67440

Reviewed By: jbschlosser

Differential Revision: D32005310

Pulled By: albanD

fbshipit-source-id: 05cb2abc2472ea4f71f7dc5c55d021db32146928
2021-11-01 06:24:27 -07:00
_static
_templates
community Update contribution_guide.rst (#64142) 2021-08-30 19:26:59 -07:00
elastic (torchelastic) make --max_restarts explicit in the quickstart and runner docs (#65838) 2021-09-29 19:29:01 -07:00
notes Update extending doc to cover forward mode AD (#66962) 2021-10-27 14:18:38 -07:00
rpc Support Union in TorchScript (#64234) 2021-09-03 06:12:24 -07:00
scripts [docs] Add images to some activation functions (#65415) 2021-09-22 11:05:29 -07:00
__config__.rst
amp.rst
autograd.rst Update extending doc to cover forward mode AD (#66962) 2021-10-27 14:18:38 -07:00
backends.rst
benchmark_utils.rst
bottleneck.rst
checkpoint.rst
complex_numbers.rst
conf.py Docs module check (#67440) 2021-11-01 06:24:27 -07:00
cpp_extension.rst
cpp_index.rst
cuda.rst [CUDA graphs] Beta, not prototype (#65247) 2021-09-20 13:32:36 -07:00
cudnn_persistent_rnn.rst Remove orphan from cuDNN persistent note (#65160) 2021-09-21 11:09:47 -07:00
cudnn_rnn_determinism.rst
data.rst Add a warning about DataLoader num_workers > 0 "memory leak" (#64337) 2021-09-01 21:49:41 -07:00
ddp_comm_hooks.rst [DDP Comm Hook] Add debugging communication hooks to ddp_comm_hooks.rst (#64352) 2021-09-01 17:37:19 -07:00
distributed.algorithms.join.rst
distributed.elastic.rst
distributed.optim.rst
distributed.rst Update distributed.rst to show that CUDA send/recv on GPU is supported (#65601) 2021-09-24 12:30:10 -07:00
distributions.rst
dlpack.rst
docutils.conf
fft.rst C++ API and docs for hfftn (#66127) 2021-10-07 12:48:36 -07:00
futures.rst
fx.rst
hub.rst
index.rst
jit.rst Back out "D30740897 Add fusion enabled apis" (#64500) 2021-09-04 20:55:58 -07:00
jit_builtin_functions.rst
jit_language_reference.rst Document torch.jit.is_tracing() (#67326) 2021-10-28 09:56:09 -07:00
jit_language_reference_v2.rst Document torch.jit.is_tracing() (#67326) 2021-10-28 09:56:09 -07:00
jit_python_reference.rst
jit_unsupported.rst
linalg.rst Create linalg.matrix_exp (#62715) 2021-10-19 09:07:15 -07:00
math-quantizer-equation.png
mobile_optimizer.rst
model_zoo.rst
multiprocessing.rst
name_inference.rst
named_tensor.rst
nn.functional.rst
nn.init.rst
nn.rst Implements the orthogonal parametrization (#62089) 2021-08-30 13:12:07 -07:00
onnx.rst ONNX: Delete or document skipped ORT tests (#64470) (#66143) 2021-10-22 13:46:16 -07:00
optim.rst To add SequentialLR to PyTorch Core Schedulers (#64037) 2021-09-09 09:36:32 -07:00
package.rst [package] add some docs describing how to debug dependencies (#65704) 2021-09-27 12:14:23 -07:00
pipeline.rst fixed comments referring fairscale master branch (#65531) 2021-09-23 14:37:58 -07:00
profiler.rst
quantization-support.rst quantization docs: remove erroneous rebase artifact (#66577) 2021-10-14 11:30:47 -07:00
quantization.rst pytorch quantization: document the custom module APIs (#67449) 2021-10-29 05:22:17 -07:00
random.rst
rpc.rst
sparse.rst
special.rst [special] special alias for softmax (#62251) 2021-10-01 03:55:32 -07:00
storage.rst
tensor_attributes.rst
tensor_view.rst Add tensor.{adjoint(),H,mT,mH} methods and properties (#64179) 2021-10-13 07:44:43 -07:00
tensorboard.rst
tensors.rst [numpy] add torch.argwhere (#64257) 2021-10-30 15:26:11 -07:00
testing.rst [Doc] make_tensor to torch.testing module (#63925) 2021-08-30 12:25:40 -07:00
torch.ao.ns._numeric_suite.rst Quantization docs: add pages for Numeric Suite (Eager and FX) (#66380) 2021-10-11 18:47:58 -07:00
torch.ao.ns._numeric_suite_fx.rst Quantization docs: add pages for Numeric Suite (Eager and FX) (#66380) 2021-10-11 18:47:58 -07:00
torch.overrides.rst
torch.rst [numpy] add torch.argwhere (#64257) 2021-10-30 15:26:11 -07:00
type_info.rst clarify that torch.finfo.tiny is the smallest normal number (#63241) 2021-08-18 13:44:52 -07:00