pytorch/torch
Peter Goldsborough 7978ba45ba Update path in CI script to access ninja (#13646)
Summary:
We weren't running C++ extensions tests in CI.
Also, let's error hard when `ninja` is not available instead of skipping C++ extensions tests.

Fixes https://github.com/pytorch/pytorch/issues/13622

ezyang soumith yf225
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13646

Differential Revision: D12961468

Pulled By: goldsborough

fbshipit-source-id: 917c8a14063dc40e6ab79a0f7d345ae2d3566ba4
2018-11-07 14:31:29 -08:00
_thnn Update from Facebook (#8887) 2018-06-26 14:55:48 -07:00
autograd fix handling of single input in gradcheck (#13543) 2018-11-04 20:28:34 -08:00
backends Add support for torch.backends.cudnn.enabled (#13057) 2018-10-31 09:31:09 -07:00
contrib Remove stages from IR, they are no longer used 2018-10-05 13:58:15 -07:00
csrc Added the finer bucketing option for DDP (#13607) 2018-11-07 12:00:55 -08:00
cuda Rewrite http://pytorch.org -> https://pytorch.org throughout project (#12636) 2018-10-15 13:03:27 -07:00
distributed Rename DistBackend -> Backend (#11830) 2018-11-07 11:58:12 -08:00
distributions Rename potrf to cholesky (#12699) 2018-11-01 15:10:55 -07:00
for_onnx
jit Remove compileFunction (#13640) 2018-11-06 19:37:06 -08:00
legacy Remove torch/legacy (#11823) 2018-09-20 14:00:54 -07:00
lib codemod tensor.type().is_cuda(), tensor.type().is_sparse() (#13590) 2018-11-07 07:27:42 -08:00
multiprocessing Add torch.multiprocessing.spawn helper (#13518) 2018-11-06 14:08:37 -08:00
nn Distributed Data Parallel documentation for PT1 release (#13657) 2018-11-07 12:11:57 -08:00
onnx Support new upsample in symbolic, caffe2 backend & caffe2 frontend (#13272) 2018-11-05 19:13:57 -08:00
optim Add name for required optimizer parameter. (#13202) 2018-10-29 15:02:21 -07:00
sparse
testing
utils Update path in CI script to access ninja (#13646) 2018-11-07 14:31:29 -08:00
__init__.py Update '__all__' in '__init.py__' (#12762) 2018-10-18 17:52:10 -07:00
_jit_internal.py Speed up resolution callback creation (#12859) 2018-10-23 20:40:04 -07:00
_ops.py Resolve builtins using a dict rather than by name (#10927) 2018-08-28 11:25:11 -07:00
_six.py Add weak script modules (#12682) 2018-10-23 09:06:02 -07:00
_storage_docs.py
_tensor_docs.py Add diag_embed to ATen and torch (#12447) 2018-11-05 08:55:28 -08:00
_tensor_str.py Fix print precision and match numpy behavior (#12746) 2018-10-24 18:12:51 -07:00
_torch_docs.py small fixes regarding docu of torch tensors (#13635) 2018-11-06 17:24:42 -08:00
_utils.py Don't serialize hooks (#11705) 2018-10-16 20:11:03 -07:00
_utils_internal.py Use fixed MASTER_PORT in test_distributed (#13109) 2018-10-25 08:51:34 -07:00
abi-check.cpp Fixes for Torch Script C++ API (#11682) 2018-09-17 09:54:50 -07:00
CMakeLists.txt Replace cursors with OrderedDict (#13427) 2018-11-07 11:10:05 -08:00
extension.h Restructure torch/torch.h and extension.h (#13482) 2018-11-05 16:46:52 -08:00
functional.py Rename potrf to cholesky (#12699) 2018-11-01 15:10:55 -07:00
hub.py Hub Implementation (#12228) 2018-10-29 18:43:14 -07:00
random.py
README.txt
script.h Use torch:: instead of at:: in all C++ APIs (#13523) 2018-11-06 14:32:25 -08:00
serialization.py Reimplement storage slicing. (#11314) 2018-09-06 16:11:59 -07:00
storage.py Use torch.save in _StorageBase.__reduce__ (#9184) 2018-07-06 07:24:53 -07:00
tensor.py Rename potrf to cholesky (#12699) 2018-11-01 15:10:55 -07:00

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some .hpp headers, which are proper C++ headers rather than
C headers.  Although these headers are installed, they are *internal
implementation detail* headers, whose contents should largely not be used
by external clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.