Commit graph

329 commits

Author SHA1 Message Date
Olivia Jain
60089f7093
Cuda11.4 (#8709)
* initial update from 11.1 to 11.4

* change 11.4.1 to 11.4.0

* adjusting to match nvidia/cuda image tags

* adjusting to match nvidia/cuda image tags centos7

* correction to 11.4.0

* correction to 11.4.0

* update to cuda 11.4

* change training back to 11.1

* change training back to 11.1

* point to correct nvcr.io/nvidia/cuda 11.4.1 image

* change centos8 to centos7

* correct cudnn path

* Update linux-gpu-ci-pipeline.yml for Azure Pipelines

* Update c-api-noopenmp-packaging-pipelines.yml

* need to resolve centos images but remove space and change to 11.4

* Update linux-gpu-ci-pipeline.yml

* add cudnn to docker image

* bump devtoolset to 10

* revert cuda 11.4 change to setup_env_trt

* orttraining back to 11.1

* use nvcr.io

* Fix previous change back to cuda 11.1

* update cudnn path

* use cudnn image (revert if failure)
2021-08-17 16:36:26 -07:00
Changming Sun
ae6fdd3333
Bring code coverage dashboard back (#8394) 2021-08-16 20:54:39 -07:00
Dmitri Smirnov
8713d76dd1
Introduce C and C++ APIs for Sparse Tensors (#8621)
Add IsSparseTensor
Add CreateSparseTensor
Add utilities and test fully sparse instantiation
Fully sparse blocksparse
Add test and docs for fully sparse tensor instantiation
Rework creation API
Use API
Non string API
Retrofit of existing String API
Add tests
Add documentation
Address build issues (Winml pending)
Add inference test
Bump binary size
Add ifdef DISABLE CONTRIB
2021-08-16 16:33:47 -07:00
Changming Sun
f04a235c77
Update manylinux build scripts (#8724)
Update manylinux build scripts, syncing them with the latest upstream.
2021-08-13 12:04:00 -07:00
liqun Fu
bec24ca4c1
create packaging pipeline to support cuda11.4 (#8663) 2021-08-11 17:44:57 -07:00
Edward Chen
20f006c580
Remove flake8 check from CMake build. (#8662) 2021-08-09 14:10:36 -07:00
Suffian Khan
6dd59a1117
revert onnx version (#8643) 2021-08-09 05:53:40 -07:00
Ashwini Khade
96eb9810ba
Update onnx (#8458)
* updates for picking onnx commit

* add tests filter to c# tests

* plus test fixes

* fix versioning for contrib ops

* fix tests

* test filter for optional ops

* more versioning related updates

* fix test

* fix layernorm spec

* more updates

* update docs

* add more test filters

* more filters

* update binary size threshold

* update docs

* plus more fixes

* updates per review

* update to release commit

* add filters for optional type tests

* plus updates
2021-08-05 09:21:44 -07:00
stevenlix
ee99fb400c
Upgrade TensorRT to v8.0.1 (#8512)
* update onnx-tensorrt parser to master

* disable unsupported tests

* add cuda sm 75 for T4

* update tensorrt pipeline

* update trt pipelines

* update trt pipelines

* Update linux-gpu-tensorrt-ci-pipeline.yml

* update trt ci pipeline

* Update linux-gpu-tensorrt-ci-pipeline.yml

* Update Tensorrt Windows build pool and TensorRT/CUDA/CuDNN version

* update to cuda11.4 in trt ci pipeline

* update base image to cuda11.4

* update packaging pipeline to cuda11.4

* clean up

* remove cuda11.1 and cuda11.3 docker file

* disable unsupported tensorrt tests at runtime

* Update linux-multi-gpu-tensorrt-ci-pipeline.yml
2021-08-02 11:20:31 -07:00
Changming Sun
0510688411
Update compliance tasks in python packaging pipeline and fix some compile warnings (#8471)
1. Update SDLNativeRules from v2 to v3. The new one allows us to set excluded paths.
2. Update TSAUpload from v1 to v2, and add a config file ".gdn/.gdntsa" for it.
3. Fix some parentheses warnings.
4. Update cmake to the latest version.
5. Remove the "--x86" build option from pipeline yaml files. We can now auto-detect the CPU architecture from Python, so we don't need to ask the user to specify it.
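Item 5 can be illustrated with a minimal sketch of detecting the interpreter's CPU architecture from Python instead of requiring an explicit "--x86" flag (the function name is illustrative, not the pipeline's actual code):

```python
import platform
import struct

def detect_arch():
    """Return (machine, is_64bit) for the running Python interpreter.

    Illustrative sketch only: a build script can branch on these values
    instead of asking the user to pass an architecture flag.
    """
    machine = platform.machine().lower()        # e.g. "x86_64", "amd64", "aarch64"
    is_64bit = struct.calcsize("P") * 8 == 64   # pointer width of this interpreter
    return machine, is_64bit

machine, is_64bit = detect_arch()
print(machine, is_64bit)
```

Since the wheel must match the interpreter that will import it, detecting from the running Python is more reliable than a user-supplied flag.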
2021-07-30 17:16:37 -07:00
Dmitri Smirnov
950fe5e28b
Implement SparseTensor and infrastructure support and advance ONNX commit (#8038)
SparseTensor support
Implement Builder pattern
Fix support for 1-D and 2-D COO indices
Implement and test CSR support
Handle shape inference for SparseTensors
Implement conversion for COO, CSR and tests
Address the case where constant sparse initializer is the output
Implement test infra for SparseTensors
Implement SparseDenseMatMul for CSR and COO and test it
Add hash for SparseToDenseMatMul
Finish shared provider refactor
Refactor GetOrCreate to Create
Working on py interface
Expose OrtDevice and use it in allocate_numpy
Adjust Sparse interfaces, add support for string SparseTensor. Add tests.
Add and test to_cuda()
Add accessors to format specific indices
Test values and indices views, read-only flag, after GC access
Add sparse related methods to OrtValue
Re-work SparseTensor wrapper, add OrtValue methods
Rework numpy_array_to_cuda/to_cpu
Add run_with_ort_values
Add models and test sparse_mat_mul with run_with_ort_values
Refactor sparse tensor to use a single buffer
Ifdef x86 Eigen CSR sparse matmul implementation
Exclude broken test, check for string type when copying cross device
Split pybind schema, regenerate docs, add exclusion
Conditionally exclude schema module
Update docs, fix cuda build
Add test to a filter and regenerate JS docs
Add conversion and test string support for sparse tensors
Exclude conversion utils from minimal build
Add CUDA Memcpy and adjust provider interfaces
2021-07-22 15:24:36 -07:00
Thiago Crepaldi
9073c094d4 Update torch lightning and re-enable test 2021-07-22 14:18:07 -07:00
Adam Pocock
55b26b6951
[Java] Adds support for DNNL, OpenVINO, TensorRT shared providers and refactors the CUDA shared provider loader (#8013) 2021-07-20 22:33:15 -07:00
Ryan Hill
cc9f793b48
Move one function from cuda_provider_factory.h (#8407) 2021-07-19 17:55:59 -07:00
Maajid khan
1686e8ff57
[OpenVINO-EP] 2021.4 Release (#8369)
* Changes to ensure the openvino-ep-2021.4 branch is created
* Fix failing cpp and python unit tests
* Fixed Myriad Tests for Ov_2021.4
* Disabled failing python tests for myriad
* Fixes models which were breaking w.r.t 2021.4
* Added fixes to make tinyyolov3 work on Myriad
and MaskRcnn, FasterRcnn using GPU_FP32
* Added FP16 output data type support for ngraph
* Implemented ReadNetwork() method

->Using the Core::ReadNetwork() method for reading and creating a CNNNetwork

->Since the OpenVINO™ 2020.4 version, the Inference Engine enables reading ONNX models
  via the Inference Engine Core API, and there is no need to use the low-level
  ONNX* Importer API directly anymore. To read ONNX* models, it's recommended to use the
  Core::ReadNetwork() method, which provides a uniform way to read models from the ONNX format.

Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>

* Fixed ngraph f16 supported output type

Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>

* Added comments in data_ops.cc

Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>

* Fixed broken windows build

Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>

* Disable failing CPP tests on CPU

Some of the convtranspose tests are failing on
the OpenVINO-EP CPU due to an accuracy mismatch w.r.t.
the default CPU, so currently we are disabling
these tests.

Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>

* Updated for ov version 2021.4

* Changes to include qdq ops in code

* Disabled failing python tests on GPU

Disabled two maxpool python tests on
GPU as they were passing but throwing
a segfault

Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>

* Fix the backward compatibility issue

ReadNetwork() API has a bug and will only work
starting from OpenVINO 2021.4 version.

The previous versions will still have to use
the onnx importer route

Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>

* Fix CMakeLists.txt for OpenVINO EP

If a directory with OpenVINO is sourced,
the latest OpenVINO settings have to
be imported.

Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>

Co-authored-by: sfatimar <sahar.fatima@intel.com>
Co-authored-by: sfatimar <64512376+sfatimar@users.noreply.github.com>
Co-authored-by: Aravind Gunda <aravindx.gunda@intel.com>
2021-07-19 10:40:56 -07:00
Guoyu Wang
c5038063ed
Add iOS/macOS static framework (#8357)
* Add ability to generate ios static framework

* Fix typos

* Add pod cache clean, update some comments of previous commit

* Fix CI failure with newly added cpuinfo library

* Update test model (CoreML requires node has a name)

* Addressed CR comments
2021-07-14 16:39:17 -07:00
Chen Fu
df4cb6f301
Adding pytorch cpuinfo as dependency (#8178)
The PyTorch cpuinfo library allows us to query current CPU features, micro-architecture, cache sizes, etc. This information is needed for targeted performance optimizations.

Unfortunately it does not work under Windows/ARM. We will need to develop our own solution later.
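The kind of hardware information cpuinfo answers can be approximated from Python for illustration. This is a hedged stand-in, not the cpuinfo C API; the sysfs path is Linux-specific and may not exist on every system, so the sketch falls back to None:

```python
import os

def basic_cpu_info():
    """Rough Python stand-in for the queries the cpuinfo C library answers:
    logical core count, and (on Linux) the L1 data cache line size that
    targeted optimizations such as blocking or padding would key off.
    """
    cores = os.cpu_count()
    line_size = None
    path = "/sys/devices/system/cpu/cpu0/cache/index0/coherency_line_size"
    if os.path.exists(path):  # Linux sysfs only; absent elsewhere
        with open(path) as f:
            line_size = int(f.read().strip())
    return cores, line_size

cores, line_size = basic_cpu_info()
print(cores, line_size)
```

A native library like cpuinfo additionally exposes ISA feature flags (e.g. AVX2) and full cache hierarchy details that Python's stdlib does not.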
2021-07-12 14:21:12 -07:00
Guoyu Wang
10142f9510
Add metadata_props to ORT model (#8340)
* Add metadata_props to ORT model

* Minor update

* Update python binding, and increase the minimal pipeline size threshold

* Fixed a small bug in serializing ir_version

* Remove temp ort.py.fbs and add it to .gitignore
2021-07-09 11:28:27 -07:00
baijumeswani
090bae21ab
Pinning pillow version to 8.2.0 to circumvent regression introduced by 8.3.0 (#8303) 2021-07-06 13:02:39 -07:00
Suffian Khan
008c5f7640
Use single builder image across Python versions for ROCm wheels (#8302)
* first attempt to share docker image across python and torch versions

* set dependency between jobs

* fix yaml grammar

* remove python version from first stage

* clean deepspeed directory

* split into two images according to torch version

* fix yaml syntax

* invalidate cache

* remove DS to prevent torch 1.9.0 upgrade
2021-07-06 11:56:00 -07:00
baijumeswani
2bda2a62fd
Pin version of Pillow to 8.2.0 to circumvent noncompatibility with numpy (#8278) 2021-07-02 09:05:49 -07:00
Thiago Crepaldi
83be3759bc
Add post-install command to build PyTorch CPP extensions from within onnxruntime package (#8027)
ORTModule requires two PyTorch CPP extensions that are currently JIT compiled. The runtime compilation can cause issues in some environments without all build requirements, or in environments with multiple instances of ORTModule running in parallel.

This PR creates a custom command to compile such extensions, which must be manually executed before ORTModule is used for the first time. When users try to use ORTModule before the extensions are compiled, an error with instructions is raised.

PyTorch CPP Extensions for ORTModule can be compiled by running:
python -m onnxruntime.training.ortmodule.torch_cpp_extensions.install

A full build environment is needed for this.
2021-06-28 18:11:58 -07:00
Changming Sun
25db5706bb
Change "Export PyTorch CustomOp" build pipeline to use Ubuntu 20.04 (#8158)
Change "Export PyTorch CustomOp" build pipeline to use Ubuntu 20.04
2021-06-28 16:13:55 -07:00
liqunfu
9366114028
make pipelines to support torch1.8.1 and torch1.9.0 (#8084) 2021-06-25 14:55:49 -07:00
Negin Raoof
80b7b134bf
Adding optional ops in contrib ops (#7946)
* Added optional const spec
2021-06-24 13:16:31 -07:00
Changming Sun
f000dfddbe
Update run_dockerbuild.sh: set default python version based on OS version (#8136) 2021-06-23 15:50:03 -07:00
Changming Sun
6e2b064aec
Delete some unused code in run_dockerbuild.sh and Enable Nuget CUDA tests (#8089)
1. Remove some unused code and simplify tools/ci_build/github/linux/run_dockerbuild.sh.
2. Enable Nuget CUDA tests. The original design was that we could leverage Directory.Build.props and let cmake generate the required properties (USE_CUDA/...) there. However, in the nuget packaging pipeline we test the package on a different host that doesn't run the cmake command and doesn't have the auto-generated Directory.Build.props file.
2021-06-22 18:43:33 -07:00
Chi Lo
27d1784d44
Add TRT 7.1 Pipeline (#8073)
* Revert for testing TensorRT 7.1

* change to original googletest version

* change machine

* remove build arg

* change back machine

* revert back googletest version

* Make it ready to merge to master

* revert onnx-tensorrt to v7.1

* rename yml

* use [[ ]] in bash command

* add sudo

* add chmod

* add correct path

* change another way to revert onnx-tensorrt

* change docker image to manylinux build
2021-06-21 20:57:04 -07:00
baijumeswani
7701c8703e
Add module attribute to ORTModule to support HuggingFace Trainer save_model (#8088) 2021-06-18 13:13:45 -07:00
Suffian Khan
35ca3c99d1
Fix ROCm wheels pipeline after changes to manylinux scripts (#8026)
* update

* try fix rocm pipeline

* avoid already installed error

* ignore python3.10 since build fails

* fix

* try setting user

* try again

* try again

* try again

* fix script

* disable inference docs generation

* try print device id

* fix name qual

* try again

* try again

* try again

* provider_options

* add device verify

* try again

* try again

* try again

* print video/render gid

* try again

* run as root

* try again with uid, gid

* cleanup

* run as root

* temp fix

* add /bin/bash

Co-authored-by: Changming Sun <chasun@microsoft.com>
2021-06-10 21:01:28 -07:00
pengwa
cb5f411da3
Fix Python Packaging Pipeline && Build Clean Up (#7993)
* remove link to python

* revert orttraining-linux-ci build env change introduced by pr
https://github.com/microsoft/onnxruntime/pull/7993.

* fix builds

* fix builds

* clean up

* fix builds

* Fix unused params

* fix some comments.
2021-06-09 17:35:17 +08:00
George Wu
47d8977741
add missing provider_options.h in packages (#7995)
* consolidate copy binary script for gpu/trt tarball package

* add provider_options.h

* add provider_options.h
2021-06-08 16:37:05 -07:00
Changming Sun
4ecbae43b2
Use GCC 10 in Linux CPU CI pipeline (#7985) 2021-06-08 11:53:29 -07:00
pengwa
9e4dc08483
training with custom autograd Functions (#7513)
* Register Torch Custom autograd.Function

* Add flag to supress pybind11 warning

* Avoid unnecessary include in cmake

* Add missing reference

* Add getter for registerred functions

* Format for making subsequent changes cleaner

* Fix interop feature build failure

* Forward pass, run PyOP on CPU EP

* clean up the code

* Fix build

* Define new ops

* refactor pyop - extract PyOpLibProxy class

* Hacks to run example

* implement the kernel compute func

* add back PyOP for comparison experiments

* debug info - thread id

* refine the kernels

* Polish code

(cherry picked from commit 4ed606f9a0)

* Fix the Tensor address mismatch on the C++ side

* PythonOpGrad compute

* add distributed test case

* refine test cases

* get dist.get_rank() in Autograd forward pass

* Add CUDA kernels

* Store float, int, and tuple of them as PythonOp's attributes

* Populate local changes

* Fix bugs

* PythonOp/PythonOpGrad CUDA kernels

* Support non-tensor inputs

* Single GPU FP16 Run Pass

(cherry picked from commit e539989e91e18ee997900292d3493b97d3eafa8a)

* Fix segfault

* add basic test cases

* Save progress

* fix gradient builder for an Add op that has the same inputs

* add test cases for auto grad fallback feature

* fix ref cnt issue. add thread id for debugging

* POC: remove interface class

* Remove interface classes

* Clean a bit

* Coarse-grained clean up after rebase master

* reset pyop and language_interop_ops to latest master

* Fix missing part during merge

* re-structure torch related language interop files

* Fix build

* Fix tests and build

* Fix build and basic unit tests

* Fix most of uts

* remove unnecessary import

* clean up and fix build when enabling language_interop_ops

* Fix single-GPU UTs

* Move runner register into ORT package

* Update dist UTs to new style

* Also fix distributed UTs and leaf gradient problem

* Static generation for constant args

* Move arg_positions_ to static field

* Rename some functions

* Move arg creation into a function

* Clean output logic in PythonOp

* Move PythonOp's ctor

* Revise PythonOpGrad

* Fix "ORT only supports contiguous tensor for now" for inputs

* Fix evaluation mode error, add test & clean up

* clean up codes

* Fix issues introduced by recent master change (enabled symbolic shape infer)

* automatically register forward/backward function pointers && clean up

* Fix multi-output case

* Add a test back

* fix build and clean up

* RAII for function params PyObject

* Use new exporter

* Clean full name in new exporter

* Fix UTs

* Format a file

* Add "inplace" back

Remove a legacy comment

* Refine TorchProxy
1. Make TorchProxy a formal singleton class.
2. Remove unused Scope class.
3. Simplify the call to Forward and Backward. The two functions now
   automatically acquire and release GIL state, so user doesn't need
   any GIL-related calls.

* Format

* Add lock to avoid race condition when registering Python objs

* Fix Python call param ref issues && Add RefcountTracker for debug build && Clean up

* clean up print

* Resolve part of comments && clean up

* Fix a potential bug

* track pyobject consistently

* move kernels to cpu provider as base class

* Refactor - 1. Extract PythonOpBase/PythonOpGradBase 2. Implement CPU kernels 3. Test coverage for CPU kernels

* Refine register code

* Add a missing macro

* Release python call result objects with PythonObjectPtr && Add UnRegisterContext && Track PyObject for Debugging && Clean up

* Fix random segfault issue - releasing a wrong ctx pointer for inplace cases

* put ref count in debug macro

* Move GIL out

* Refine tests

* Fix memory leak issue && forward output lifecycle issue:
1. Unregister the OrtValue PythonObject. Currently, the OrtValue shares the same buffer with PythonOp/PythonOpGrad's output, so after those kernels' outputs are released, the "leaked" OrtValue prevents the shared buffer from being released.
2. Following PyTorch's forward+backward execution, the forward outputs (e.g. torch tensors) maintain the context/saved variables/dirty inputs, etc., which are used for backward execution, so their lifetime should extend past the backward run. This change adds such a dependency of PythonOpGrad on PythonOp.

* Move dlpack->ortvalue into C++ to avoid temp object registration

* Fix the over released Py_False/Py_True && refine tests

* Clean up unused functions

* Always assume the first forward output is context so we don't need to test unused cases.

* Fix a memory leak

* move-copy unique_ptr & avoid C-style casting

* Use inplace attribute to determine if input tensors are copied

* Move DlpackCapsuleDestructor's to a common place

* Thread-safe TorchProxy

* Use OrtValue instead of OrtValue*

* Only keep checks for Debug build

* Wrap some long line per comment

* onnx_export_type --> kwargs

* Use requires_grads to create PythonOpGrad's inputs

* add missing files during master merge

* Fix build issue after merge

* Address two comments.
1. Internalize DlpackCapsuleDestructor
2. Change "(" to "]" for describing closed interval.

* Address some comments.
1. "override" -> "overwrite" to avoid using reserved keyword.
2. Call DLPack's helper to create OrtValue for avoiding repeated code.

* Address comments.
1. Pass std::mutex to registration helpers so their callers don't
   have to lock the mutex explicitly.
2. Rename "func_context_pool_mutex_" to "mutex_". This mutex is the global mutex for OrtTorchFunctionPool.

* Add bridging code to make cuda kernels work with merged master

* put debug macro check within RefCountTracker && use default logger for debug info && remove useless ortvalue_ptr interface && typos && revert unnecessary blank line changes

* fix some comments

* Resolve more comments

* Capitalize a word

* use unique_ptr instead of ObjectPointer for PyObject management && add convention

* Support symbolic shape

* Remove unused variable

* fix build

* Enable function registration for training only && rectify ToDlpack/FromDlpack merge with master.

* Don't add context for non-PythonOp operators (for example AtenOp)

* Fix build error

* Polish frontend part.
1. Avoid adding kwargs to ORTModule's ctor
2. Use onnx_export_type rather than kwargs for type safety
3. Fix some build bugs.

* Resolve simpler comments

* Resolve export related comments

* sync master && fix tests && fix non-training build error

* Fix build errors

* add target link lib

* windows build error

* Fix orttraining-linux-ci build

* disable autograd test && clean up

* fix linux orttraining ci build

* try fixing win build error

* Revise append calls in runner

* Enable custom function using a function

* Rename to avoid using reserved keyword

* Use list comprehension

* Set ORT random seed in tests

* Remove print code and fix ctx shape

* [] -> list()

* Move autograd.Function and nn.Module into corresponding functions

* Move test helpers

* Polish dist test a bit. Tried move helpers to helper file but it causes a deadlock.

* trying fix undefined reference

* Context is not managed by global pool

* Polish dist test

* Polish dist test

* Add enable_custom_autograd_function

* Remove enable_custom_autograd_function from ctors

* Add doc strings

* Shorter code

* Address comments

* Add one empty line

* revert a minor and not needed change

* Address comments

* Back to reference

* Fix windows builds

* Fix windows debug build fail to find "'python39_d.lib'"

* fix mac build error

* revert _to_contiguous change

* add debugging tag for orttraining-cpu-ci

* Fix the wrong PYTHON_LIBRARIES which is affected by PYTHON_LIBRARY given in build command

* add debugging info

* Fix the build in this case: PYTHON_LIBDIR: /opt/_internal/cpython-3.7.10/lib, PYTHON_EXECUTABLE: /opt/python/cp37-cp37m/bin/python3, PYTHON_MULTIARCH: x86_64-linux-gnu
PYTHON_LIBRARY_PATH python3.7m

* fix build error due to python lib not found

* Fixes
1. Release PyObject's
2. Not using deepcopy because we assume autograd.Function's
   non-tensor inputs are static (constants), so there should
   be no side effect after calling any autograd.Function
   multiple times.

* Revert dtoc for decreasing refcnt

* add debugging log

* add debugging tag

* Fix a small leak

* Remove ONNX_FALLTHROUGH flag

* debug tag

* debug tag

* fix builds

* remove debug tag

* fix build

* fix builds

* fix build

* install python3 in centos, in case there is no libpython3.xm.so

* build python so for redhat

* add training cpu specific docker, build python so inside

* revert build-cpython change

* try fixing numpy include issue

* install_deps after re-installing cpython

* fix build && remove debug tag

* install openssl before cpython

* let's say: builds pass!

* add build flag for torch interop, only enable it when training+Python is enabled

* skip ComputeBroadcastBackwardAxesDynamic for the shared inputs

* fix build

* add debug info for padgrad test

* Fix builds

* Split dlpack_converter into C++ and Python interfaces respectively, so different builds can use them as needed.

* clean up the changes

* fix addsubgradient builder

* Fix builds

* clean up

* clean up

* Address some comments.
1. Use pointer wrapper to avoid calling Py_DECREF
2. Remove unregister_* functions
3. Allow repeated registration by skipping those with existing keys
4. Unregister context in PythonOpGrad

* Fix over-released Py_Boolean

Co-authored-by: Wei-Sheng Chin <wschin@outlook.com>
2021-06-07 13:01:21 -07:00
Changming Sun
5a7f65b831
Fix training e2e pipeline (#7942)
1. Fix the training e2e pipeline. The failure was caused by my recent change #7632. The fix is adding "--cmake_extra_defines CMAKE_CUDA_ARCHITECTURES=70" to the build parameters because the machines have V100 GPUs.
2. Simplify the Nuphar pipeline. It doesn't need to install a separate ONNX version (1.5.0).
3. Fix a problem where run_dockerbuild.sh ignored the OS version parameter. Now that it takes effect, I also set the python version to the system default (3.8 for Ubuntu 20.04).
2021-06-04 09:37:09 -07:00
Changming Sun
b854f2399d
Update manylinux build scripts and GPU CUDA version from 11.0 to 11.1 (#7632)
1. Update manylinux build scripts. This adds [PEP600](https://www.python.org/dev/peps/pep-0600/) (manylinux2 tags) support. numpy has adopted this new feature; we should do the same. The old build script files were copied from https://github.com/pypa/manylinux, but they have been deleted and replaced in the upstream repo, and the manylinux repo doesn't have a manylinux2014 branch anymore. So I'm removing the obsolete code and syncing the files with the latest master.
2. Update the GPU CUDA version from 11.0 to 11.1 (after a discussion with PMs).
3. Delete tools/ci_build/github/linux/docker/Dockerfile.manylinux2014_cuda10_2. (Merged the content into tools/ci_build/github/linux/docker/Dockerfile.manylinux2014_cuda11.)
4. Modernize the cmake code for locating python devel files, as suggested in https://github.com/onnx/onnx/pull/1631.
5. Remove the `onnxruntime_MSVC_STATIC_RUNTIME` and `onnxruntime_GCC_STATIC_CPP_RUNTIME` build options. cmake now has builtin support for this: starting from cmake 3.15, the `CMAKE_MSVC_RUNTIME_LIBRARY` variable chooses which MSVC runtime library to use.
6. Update the Ubuntu docker images used in our CI builds from Ubuntu 18.04 to Ubuntu 20.04.
7. Update the GCC version in CUDA 11.1 pipelines from 8.x to 9.3.1.
8. Split the Linux GPU CI pipeline into two jobs: build the code on a CPU machine, then run the tests on GPU machines. In the past we didn't test our python packages, only the pre-packed files, so we didn't catch the rpath issue in the CI build.
9. Add a CentOS machine pool and test our Linux GPU build on real CentOS machines.
10. Rework the ARM64 Linux GPU python packaging pipeline. Previously it used cross-compiling, so we had to statically link the C runtime. But now we have a pluggable EP API, which doesn't support static linking, so I changed it to use qemu emulation instead. The build is now 10x slower than before, but it is more extensible.
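The PEP 600 tags mentioned in item 1 encode a simple compatibility rule, sketched below under stated assumptions (the function name is illustrative and this checks only the glibc version, ignoring the architecture check that real installers such as pip's `packaging.tags` also perform):

```python
def manylinux_tag_compatible(tag, system_glibc):
    """Core PEP 600 rule (simplified): a wheel tagged manylinux_X_Y_<arch>
    installs on a system whose glibc version is at least X.Y.
    Legacy aliases map onto the same scheme, e.g. manylinux2014 == manylinux_2_17.
    """
    parts = tag.split("_")                      # e.g. ["manylinux", "2", "17", "x86", "64"]
    tag_glibc = (int(parts[1]), int(parts[2]))  # (X, Y) from the tag
    sys_glibc = tuple(int(x) for x in system_glibc.split("."))
    return sys_glibc >= tag_glibc               # tuple comparison: (2, 31) >= (2, 17)

print(manylinux_tag_compatible("manylinux_2_17_x86_64", "2.31"))  # True
print(manylinux_tag_compatible("manylinux_2_24_x86_64", "2.17"))  # False
```

This is why the perennial manylinux_X_Y scheme no longer needs a new pypa/manylinux branch per policy revision: the glibc floor is read straight out of the tag.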
2021-06-02 23:36:49 -07:00
Thiago Crepaldi
c45ac166d3
Add graphviz into Dockerfile images for Python API documentation (#7819) 2021-06-02 16:12:54 -07:00
George Wu
1c6b6f696e
fixes for cuda centos/manylinux (#7830)
* fixes for cuda centos/manylinux

* remove providers_shared.so dep processing.
2021-05-25 19:38:59 -07:00
Suffian Khan
02c78a8aa8
test migration to rocm4.2 (#7800) 2021-05-24 11:48:44 -07:00
Changming Sun
ee29330cab
Delete unused file: Dockerfile.ubuntu_gpu (#7797) 2021-05-21 17:05:35 -07:00
liqunfu
f6eb0f76ae
use cudnn7 to build onnxruntime-training wheel with Cuda 10.2 support (#7760) 2021-05-20 09:18:41 -07:00
Ryan Hill
c99aa3a3f3
Ryanunderhill/cuda shared (#7626)
* First iteration of making cuda a shared provider.
Separated out the shared OpKernel change; doing this to merge with that change.

* More cuda shared library refactoring

* More cuda shared library refactoring

* More build options tested, converted the training ops over.

* Fix merge breaks

* Fix submodules

* Fix submodules

* Fix submodules

* Fix python

* Fix compile errors

* Duplicate symbol fix

* Test fix for ROCM provider

* Another ROCM test workaround

* ROCM Build Test

* ROCM build fix

* ROCM

* ROCM

* ROCM

* ROCM

* ROCM

* ROCM test

* Reduce header dependencies

* Remove redundant namespace

* Test fix for linux

* Fix linux build

* Fix Eigen build error

* Fix unused parameter warning

* Test link error

* Another linker test

* Linker test

* Linker test

* Another test

* Another build test

* Fix linux link error

* Build test

* Fix control flow ops to use common base class with core code

* Remove extra qualifiers

* Fix template syntax for linux

* Fix cuda memory leak

* Fix pybind

* Test disabling cast

* Cleanup

* Restore cuda in test

* Remove more header dependencies

* Test not adding cuda provider to session

* Make GetProviderInfo_CUDA throw

* No-op cuda provider creation

* Fix some setup issues

* Fix memory cleanup on unload

* Diagnostics

* Don't unload library

* Add diagnostics

* Fix deleting registry at right time.

* Test disabling profiler

* Fix merge break

* Revert profiler change

* Move unloading of shared providers into Environment

* Free more global allocations before library unloads

* Add more diagnostics

* Move unloading back to the OrtEnv as there are multiple Environments created during a session.

Remove some library dependencies for tests.

* Fix more cmake files

* ERROR -> WARNING

* Fix python shutdown

* Test not using dml in pipeline

* Change python version and disable dml

* Update python version

* Test adding unload method for shared providers

* Disable DLL test

* Python test

* Revert "Python test"

This reverts commit c7ec2cfe98.

* Revert "Disable DLL test"

This reverts commit e901cb93aa.

* Revert "Test adding unload method for shared providers"

This reverts commit c427b78799.

* Point to RyanWinGPU

* Revert python version

* Fix id_to_allocator_map

* Another python exit test

* Remove extra debug messages
Try a cleaner python shutdown through DllMain

* Revert DllMain idea, it didn't work

* Merge conflicts

* Fix merge with master issues.

* Comments

* Undo edit to file

* Cleanup + new training ops

* Revert yml changes

* Fix another merge error

* ROCM fix

* ROCM fix v2

* Put back Linux hack, it is necessary

* Stupid fixes

* Fix submodule out of sync

* ROCM fix 3

* ROCM 4

* Test java fix

* Fix typos

* Java test on my VM

* Fix build error

* Spotless fix

* Leave temp file around to load properly

* Fix cleanup on exit

* Fix break

* Java comments

* Remove LongformerAttentionBase workaround

* Spotless fix

* Switch yml back to regular build pool

* Revert "Switch yml back to regular build pool"

This reverts commit be35fc2a5a.

* Code review feedback

* Fix errors due to merge

* Spotless fix

* Fix minimal build

* Java fix for non cuda case

* Java fix for CPU build

* Fix Nuphar?

* Fix nuphar 2

* Fix formatting

* Revert "Remove LongformerAttentionBase workaround"

This reverts commit 648679b370.

* Training fix

* Another java fix

* Formatting

* Formatting

* For orttraining

* Last orttraining build fix...

* training fixes

* Fix test provider error

* Missing pass command

* Removed in wrong spot

* Python typo

* Python typos

* Python crash on exit, possibly due to unloading of libraries.

* Remove test_execution_provider from training build
Only enable python atexit on windows
Remove assert on provider library exit

* Still can't unload providers in python, alas.

* Disable Nvtx temporarily

* MPI Kernels for Training

* MPI Kernels part 2

* Patch through INcclService

* Oops, wrong CMakeLists

* Missing namespace

* Fix missing ()

* Move INcclService::GetInstance around to link nicer

* Missing }

* Missing MPI libraries for Cuda

* Add extra GetType functions used by MPI

* Missing Nccl library

* Remove LOGS statements as a test

* Add in a couple more missing GetType methods

* Update comments

* Missed a logging reference in mpi_context.h

* Convert aten_op to shared (due to merge with master)

* Test moving DistributedRunContext instance into shared provider layer
(with purpose error to verify it's being built properly)

* Test passed, now with fix

* Missing static

* Oops, scope DistributedRunContext to just NCCL

* Merge related issues and code review feedback.

* Merge error

* Bump to rel-1.9.1 (#7684)

* Formatting

* Code review feedback for Java build on non Windows

* Remove cupti library dependency from core library

* Test Java pipeline fix

* Linux build fix

* Revert "Linux build fix"

This reverts commit a73a811516.

* Revert "Remove cupti library dependency from core library"

This reverts commit 6a889ee8bf.

* Packaging pipeline fixes to copy cuda shared provider for tensorrt & standard packages

* Add cuda to Tensorrt nuget package

* onnxruntime_common still has a cuda header dependency

Co-authored-by: ashbhandare <ash.bhandare@gmail.com>
2021-05-20 07:53:47 -07:00
Changming Sun
3a68c389d9
Add version lock to manylinux build scripts (#7755) 2021-05-19 09:28:40 -07:00
Changming Sun
38d90b0f15
Cleanup install_deps.sh (#7734) 2021-05-17 19:27:47 -07:00
liqunfu
d604281a86
Liqun/training pkg to run tests (#7662) 2021-05-16 09:10:57 -07:00
liqunfu
3ead2f2f39
update pt lightning version (#7711)
Co-authored-by: liqun <liqun@OrtTrainingDev4.af05slrtruoetgaxwwjv5nsq5e.px.internal.cloudapp.net>
2021-05-15 21:46:16 -07:00
liqunfu
359fe1d197
Liqun/ort training version (#7620) 2021-05-14 09:54:19 -07:00
ashbhandare
56e993a434
Bump to rel-1.9.1 (#7684) 2021-05-13 18:41:28 -07:00
Hariharan Seshadri
4b691a5c0d
Add ability for memory arenas to "shrink" periodically (#7284) 2021-05-08 07:53:21 -07:00
Changming Sun
41e370c2b3
Update protobuf to 3.16 (#7616) 2021-05-07 14:09:23 -07:00