* 2021.4.1 Docker and ci changes
* OV version change
* Removing ImageScaler op from the ops list
Reverting this change, which was added in the last
PR. ImageScaler is now deprecated, so removing
it from the supported list. Also, this
op is causing a regression in the performance
of the FP16 models.
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* Re-writing the help message for num_of_threads
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
Co-authored-by: Aravind Gunda <aravindx.gunda@intel.com>
* install protobuf from source
* fix rm command in Dockerfile
* fix options on rm command
* fix cd into protobuf source directory
* try again
* remove strip step
* debug list the files
* ls on /usr
* more debug
* more debug
* adjust LD_LIBRARY_PATH
* try remove protobuf before ORT build
* updates for picking pnnx commit
* add tests filter to c# tests
* plus test fixes
* fix versioning for contrib ops
* fix tests
* test filter for optional ops
* more versioning related updates
* fix test
* fix layernorm spec
* more updates
* update docs
* add more test filters
* more filters
* update binary size threshold
* update docs
* draft - enable model local function
* enable model local functions in ORT
* update to latest rel onnx commit
* plus tests
* plus more updates
* plus updates
* test updates
* Fix for nested functions + shape inference
* plus bug fix and updates per review
* plus fixes per review
* plus test updates
* plus updates per review
* plus fixes
* fix a test
* copy changes from trt_and_mem
* second edits
* Update linux-gpu-tensorrt-ci-perf-pipeline.yml for Azure Pipelines
* Update linux-gpu-tensorrt-ci-perf-pipeline.yml for Azure Pipelines
* Update linux-gpu-tensorrt-ci-perf-pipeline.yml for Azure Pipelines
* change to cuda 11.4
* build with cuda 11.4
* Update Dockerfile.ubuntu_cuda11_1_tensorrt7_2
* add cmake extra defines
* cmake architectures
* fix cmake arch
* Delete ubuntu-18.04.Dockerfile
* Rename Dockerfile.ubuntu_cuda11_1_tensorrt7_2 to Dockerfile.ubuntu_cuda11_4_tensorrt7_2
* Update linux-gpu-tensorrt-ci-perf-pipeline.yml
* Update linux-gpu-tensorrt-ci-perf-pipeline.yml for Azure Pipelines
* removing previous ort args
* rename to cuda 11.4
* remove cuda 10_2
* delete trt 7.1
* remove 7.1
* Passing in cuda architecture to reduce build time
* always add submodule sync due to recursive cloning
* fix run command
* add and
* take away unused arms and share python installation script
* Update linux-gpu-tensorrt-ci-perf-pipeline.yml
* Update Dockerfile.tensorrt
* cleanup file
* install python directly on dockerfile - move to scripts in future
* Update Dockerfile.custom-trt-perf
* adding cuda 11.1 for missing libnvrtc.so.11.1
* Delete install_python.sh
* initial update from 11.1 to 11.4
* change 11.4.1 to 11.4.0
* adjusting to match nvidia/cuda image tags
* adjusting to match nvidia/cuda image tags centos7
* correction to 11.4.0
* correction to 11.4.0
* update to cuda 11.4
* change training back to 11.1
* change training back to 11.1
* point to correct nvcr.io/nvidia/cuda 11.4.1 image
* change centos8 to centos7
* correct cudnn path
* Update linux-gpu-ci-pipeline.yml for Azure Pipelines
* Update c-api-noopenmp-packaging-pipelines.yml
* need to resolve centos images but remove space and change to 11.4
* Update linux-gpu-ci-pipeline.yml
* add cudnn to docker image
* bump devtoolset to 10
* revert cuda 11.4 change to setup_env_trt
* orttraining back to 11.1
* use nvcr.io
* Fix previous change back to cuda 11.1
* update cudnn path
* use cudnn image (revert if failure)
* updates for picking pnnx commit
* add tests filter to c# tests
* plus test fixes
* fix versioning for contrib ops
* fix tests
* test filter for optional ops
* more versioning related updates
* fix test
* fix layernorm spec
* more updates
* update docs
* add more test filters
* more filters
* update binary size threshold
* update docs
* plus more fixes
* updates per review
* update to release commit
* add filters for optional type tests
* plus updates
* update onnx-tensorrt parser to master
* disable unsupported tests
* add cuda sm 75 for T4
* update tensorrt pipeline
* update trt pipelines
* update trt pipelines
* Update linux-gpu-tensorrt-ci-pipeline.yml
* update trt cid pipeline
* Update linux-gpu-tensorrt-ci-pipeline.yml
* Update Tensorrt Windows build pool and TensorRT/CUDA/CuDNN version
* update to cuda11.4 in trt ci pipeline
* update base image to cuda11.4
* update packaging pipeline to cuda11.4
* clean up
* remove cuda11.1 and cuda11.3 docker file
* disable unsupported tensorrt tests at runtime
* Update linux-multi-gpu-tensorrt-ci-pipeline.yml
1. Update SDLNativeRules from v2 to v3. The new one allows us to set excluded paths.
2. Update TSAUpload from v1 to v2. And add a config file ".gdn/.gdntsa" for it.
3. Fix some parentheses warnings
4. Update cmake to the latest.
5. Remove "--x86" build option from pipeline yaml files. Now we can auto-detect cpu architecture from python. So we don't need to ask user to specify it.
* Changes to ensure the openvino-ep-2021.4 branch is created
* Fix failing cpp and python unit tests
* Fixed Myriad Tests for Ov_2021.4
* Disabled failing python tests for myriad
* Fixes models which were breaking with 2021.4
* Added fixes to make TinyYOLOv3 work on Myriad,
and MaskRCNN and FasterRCNN work with GPU_FP32
* Added FP16 output data type support for ngraph
* Implemented ReadNetwork() method
-> Use the Core::ReadNetwork() method for reading the model and creating a CNNNetwork.
-> Since OpenVINO 2020.4, the Inference Engine can read ONNX models
directly via the Inference Engine Core API, so there is no need to use the low-level
ONNX Importer API anymore. To read ONNX models, it's recommended to use the
Core::ReadNetwork() method, which provides a uniform way to read models in ONNX format (see the sketch below).
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
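For illustration, a minimal standalone sketch of reading an ONNX model through the Inference Engine Core API as described above (the model path and device name are placeholders; the EP's actual integration is more involved):

```cpp
#include <inference_engine.hpp>

int main() {
  InferenceEngine::Core core;
  // Read the ONNX model directly through the Core API; no explicit call to
  // the low-level ONNX importer is needed since OpenVINO 2020.4.
  InferenceEngine::CNNNetwork network = core.ReadNetwork("model.onnx");
  // Compile the network for a target device ("CPU" is just an example here).
  InferenceEngine::ExecutableNetwork exec_net = core.LoadNetwork(network, "CPU");
  (void)exec_net;
  return 0;
}
```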
* Fixed ngraph f16 supported output type
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* Added comments in data_ops.cc
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* Fixed broken windows build
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* Disable failing CPP tests on CPU
Some of the ConvTranspose tests are failing on
the OpenVINO-EP CPU due to an accuracy mismatch with
the default CPU, so currently we are disabling
these tests.
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* Updated for ov version 2021.4
* Changes to include qdq ops in code
* Disabled failing python tests on GPU
Disabled two MaxPool python tests on
GPU as they were passing but throwing a
segfault.
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* Fix the backward compatibility issue
The ReadNetwork() API has a bug and will only work
starting from the OpenVINO 2021.4 version.
The previous versions will still have to use
the ONNX importer route.
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* Fix CMakeLists.txt for OpenVINO EP
If a directory with OpenVINO is sourced,
the latest OpenVINO settings have to
be imported.
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
Co-authored-by: sfatimar <sahar.fatima@intel.com>
Co-authored-by: sfatimar <64512376+sfatimar@users.noreply.github.com>
Co-authored-by: Aravind Gunda <aravindx.gunda@intel.com>
* first attempt to share docker image across python and torch versions
* set dependency between jobs
* fix yaml grammar
* remove python version from first stage
* clean deepspeed directory
* split into two images according to torch version
* fix yaml syntax
* invalidate cache
* remove DS to prevent torch 1.9.0 upgrade
ORTModule requires two PyTorch CPP extensions that are currently JIT compiled. The runtime compilation can cause issues in environments that lack all build requirements, or in environments with multiple instances of ORTModule running in parallel.
This PR creates a custom command to compile such extensions, which must be run manually before ORTModule is executed for the first time. When users try to use ORTModule before the extensions are compiled, an error with instructions is raised.
PyTorch CPP Extensions for ORTModule can be compiled by running:
python -m onnxruntime.training.ortmodule.torch_cpp_extensions.install
A full build environment is needed for this.
1. Remove some unused code and simplify tools/ci_build/github/linux/run_dockerbuild.sh.
2. Enable Nuget CUDA tests. The original design was that we could leverage Directory.Build.props and let cmake generate the required properties (USE_CUDA/...) there. However, in the nuget packaging pipeline we test the package on a different host that doesn't run the cmake command and doesn't have the auto-generated Directory.Build.props file.
* Revert for testing TensorRT 7.1
* change to original googletest version
* change machine
* remove build arg
* change back machine
* revert back googletest version
* Make it ready to merge to master
* revert onnx-tensorrt to v7.1
* rename yml
* use [[ ]] in bash command
* add sudo
* add chmod
* add correct path
* change another way to revert onnx-tensorrt
* change docker image to manylinux build
* Register Torch Custom autograd.Function
* Add flag to suppress pybind11 warning
* Avoid unnecessary include in cmake
* Add missing reference
* Add getter for registered functions
* Format for making subsequent changes cleaner
* Fix interop feature build failure
* Forward pass, run PyOP on CPU EP
* clean up the code
* Fix build
* Define new ops
* refactor pyop - extract PyOpLibProxy class
* Hacks to run example
* implement the kernel compute func
* add back PyOP for comparison experiments
* debug info - thread id
* refine the kernels
* Polish code
(cherry picked from commit 4ed606f9a0)
* Fix the Tensor address mismatch on the C++ side
* PythonOpGrad compute
* add distributed test case
* refine test cases
* get dist.get_rank() in Autograd forward pass
* Add CUDA kernels
* Store float, int, and tuple of them as PythonOp's attributes
* Populate local changes
* Fix bugs
* PythonOp/PythonOpGrad CUDA kernels
* Support non-tensor inputs
* Single GPU FP16 Run Pass
(cherry picked from commit e539989e91e18ee997900292d3493b97d3eafa8a)
* Fix segfault
* add basic test cases
* Save progress
* fix gradient builder for an Add op which has the same inputs
* add test cases for auto grad fallback feature
* fix ref cnt issue. add thread id for debugging
* POC: remove interface class
* Remove interface classes
* Clean a bit
* Coarse-grained clean up after rebase master
* reset pyop and language_interop_ops to latest master
* Fix missing part during merge
* re-structure torch related language interop files
* Fix build
* Fix tests and build
* Fix build and basic unit tests
* Fix most of uts
* remove unnecessary import
* clean up and fix build when enabling language_interop_ops
* Fix single-GPU UTs
* Move runner register into ORT package
* Update dist UTs to new style
* Also fix distributed UTs and leaf gradient problem
* Static generation for constant args
* Move arg_positions_ to static field
* Rename some functions
* Move arg creation into a function
* Clean output logic in PythonOp
* Move PythonOp's ctor
* Revise PythonOpGrad
* Fix "ORT only supports contiguous tensor for now" for inputs
* Fix evaluation mode error, add test & clean up
* clean up codes
* Fix issues introduced by recent master change (enabled symbolic shape infer)
* automatically register forward/backward function pointers && clean up
* Fix multi-output case
* Add a test back
* fix build and clean up
* RAII for function params PyObject
* Use new exporter
* Clean full name in new exporter
* Fix UTs
* Format a file
* Add "inplace" back
Remove a legacy comment
* Refine TorchProxy
1. Make TorchProxy a formal singleton class.
2. Remove unused Scope class.
3. Simplify the call to Forward and Backward. The two functions now
automatically acquire and release the GIL state, so the user doesn't need
any GIL-related calls (see the sketch below).
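As an illustration of this pattern (the names here are hypothetical, not the actual ORT classes), a Meyers singleton combined with an RAII guard over the CPython PyGILState_Ensure/PyGILState_Release API acquires and releases the GIL automatically around each Python call:

```cpp
#include <Python.h>

// RAII guard: acquires the GIL on construction, releases it on destruction,
// so callers of Forward()/Backward() never touch the GIL directly.
class GilGuard {
 public:
  GilGuard() : state_(PyGILState_Ensure()) {}
  ~GilGuard() { PyGILState_Release(state_); }
  GilGuard(const GilGuard&) = delete;
  GilGuard& operator=(const GilGuard&) = delete;

 private:
  PyGILState_STATE state_;
};

// Hypothetical singleton-style proxy that wraps calls into Python.
class TorchProxySketch {
 public:
  static TorchProxySketch& GetInstance() {
    static TorchProxySketch instance;  // constructed once; thread-safe since C++11
    return instance;
  }

  void Forward(/* args */) {
    GilGuard gil;  // GIL held only for the duration of the Python call
    // ... invoke the registered Python forward function here ...
  }

 private:
  TorchProxySketch() = default;
};
```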
* Format
* Add lock to avoid race condition when registering Python objs
* Fix Python call param ref issues && Add RefcountTracker for debug build && Clean up
* clean up print
* Resolve part of comments && clean up
* Fix a potential bug
* track pyobject consistently
* move kernels to cpu provider as base class
* Refactor - 1. Extract PythonOpBase/PythonOpGradBase 2. Implement CPU kernels 3. Test coverage for CPU kernels
* Refine register code
* Add a missing macro
* Release python call result objects with PythonObjectPtr && Add UnRegisterContext && Track PyObject for Debugging && Clean up
* Fix random segfault issue - releasing a wrong ctx pointer for inplace cases
* put ref count in debug macro
* Move GIL out
* Refine tests
* Fix memory leak issue && forward output lifecycle issue:
1. Unregister the OrtValue PythonObject. Currently, the OrtValue shares the same buffer as PythonOp/PythonOpGrad's output, so after those kernels' outputs are released, the "leaked" OrtValue prevents the shared buffer from being released.
2. In PyTorch's forward+backward execution, the forward outputs (e.g. torch tensors) maintain the context/saved variables/dirty inputs, etc., which are used for the backward execution, so their lifetime should extend past the backward run. This change adds such a dependency of PythonOpGrad on PythonOp.
* Move dlpack->ortvalue into C++ to avoid temp object registration
* Fix the over-released Py_False/Py_True && refine tests
* Clean up unused functions
* Always assume the first forward output is context so we don't need to test unused cases.
* Fix a memory leak
* move-copy unique_ptr & avoid C-style casting
* Use inplace attribute to determine if input tensors are copied
* Move DlpackCapsuleDestructor's to a common place
* Thread-safe TorchProxy
* Use OrtValue instead of OrtValue*
* Only keep checks for Debug build
* Wrap some long line per comment
* onnx_export_type --> kwargs
* Use requires_grads to create PythonOpGrad's inputs
* add missing files during master merge
* Fix build issue after merge
* Address two comments.
1. Internalize DlpackCapsuleDestructor
2. Change "(" to "]" for describing closed interval.
* Address some comments.
1. "override" -> "overwrite" to avoid using reserved keyword.
2. Call DLPack's helper to create OrtValue for avoiding repeated code.
* Address comments.
1. Pass std::mutex to registration helpers so their callers don't
have to lock the mutex explicitly (see the sketch below).
2. Rename "func_context_pool_mutex_" to "mutex_". This mutex is the global mutex for OrtTorchFunctionPool.
* Add bridging code to make cuda kernels work with merged master
* put debug macro check within RefCountTracker && use default logger for debug info && remove useless ortvalue_ptr interface && typos && revert unnecessary blank line changes
* fix some comments
* Resolve more comments
* Capitalize a word
* use unique_ptr instead of ObjectPointer for PyObject management && add convention
* Support symbolic shape
* Remove unused variable
* fix build
* Enable function registration for training only && rectify ToDlpack/FromDlpack merge with master.
* Don't add context for non-PythonOp operators (for example AtenOp)
* Fix build error
* Polish frontend part.
1. Avoid adding kwargs to ORTModule's ctor
2. Use onnx_export_type rather than kwargs for type safety
3. Fix some build bugs.
* Resolve simpler comments
* Resolve export related comments
* sync master && fix tests && fix non-training build error
* Fix build errors
* add target link lib
* windows build error
* Fix orttraining-linux-ci build
* disable autograd test && clean up
* fix linux orttraining ci build
* try fixing win build error
* Revise append calls in runner
* Enable custom function using a function
* Rename to avoid using reserved keyword
* Use list comprehension
* Set ORT random seed in tests
* Remove print code and fix ctx shape
* [] -> list()
* Move autograd.Function and nn.Module into corresponding functions
* Move test helpers
* Polish dist test a bit. Tried moving helpers to a helper file but it causes a deadlock.
* trying fix undefined reference
* Context is not managed by global pool
* Polish dist test
* Polish dist test
* Add enable_custom_autograd_function
* Remove enable_custom_autograd_function from ctors
* Add doc strings
* Shorter code
* Address comments
* Add one empty line
* revert a minor and unneeded change
* Address comments
* Back to reference
* Fix windows builds
* Fix windows debug build failing to find 'python39_d.lib'
* fix mac build error
* revert _to_contiguous change
* add debugging tag for orttraining-cpu-ci
* Fix the wrong PYTHON_LIBRARIES which is affected by the PYTHON_LIBRARY given in the build command
* add debugging info
* Fix the build in this case: PYTHON_LIBDIR: /opt/_internal/cpython-3.7.10/lib, PYTHON_EXECUTABLE: /opt/python/cp37-cp37m/bin/python3, PYTHON_MULTIARCH: x86_64-linux-gnu
PYTHON_LIBRARY_PATH python3.7m
* fix build error due to python lib not found
* Fixes
1. Release PyObject's
2. Not using deepcopy because we assume autograd.Function's
non-tensor inputs are static (constants) so there should
be no side effect after calling any autograd.Function
multiple times.
* Revert dtoc for decreasing refcnt
* add debugging log
* add debugging tag
* Fix a small leak
* Remove ONNX_FALLTHROUGH flag
* debug tag
* debug tag
* fix builds
* remove debug tag
* fix build
* fix builds
* fix build
* install python3 in centos, in case there is no libpython3.xm.so
* build python so for redhat
* add training cpu specific docker, build python so inside
* revert build-cpython change
* try fixing numpy include issue
* install_deps after re-installing cpython
* fix build && remove debug tag
* install openssl before cpython
* let's say: builds pass!
* add build flag for torch interop, only enable it when training+Python is enabled
* skip ComputeBroadcastBackwardAxesDynamic for the shared inputs
* fix build
* add debug info for padgrad test
* Fix builds
* Split dlpack_converter into C++ and Python interfaces respectively. Then different builds use them as needed.
* clean up the changes
* fix addsubgradient builder
* Fix builds
* clean up
* clean up
* Address some comments.
1. Use a pointer wrapper to avoid calling Py_DECREF (see the sketch below)
2. Remove unregister_* functions
3. Allow repeated registration by skipping those with existing keys
4. Unregister context in PythonOpGrad
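A minimal sketch of such a pointer wrapper (the actual PythonObjectPtr in the codebase may be defined differently): a std::unique_ptr whose deleter calls Py_XDECREF, so owners never invoke Py_DECREF by hand.

```cpp
#include <Python.h>
#include <memory>

// Deleter that drops a Python reference; Py_XDECREF tolerates nullptr.
struct PyObjectDeleter {
  void operator()(PyObject* p) const { Py_XDECREF(p); }
};
using PythonObjectPtr = std::unique_ptr<PyObject, PyObjectDeleter>;

// Example: the new reference returned by PyObject_CallObject is released
// automatically when `result` goes out of scope, even on early returns.
bool CallAndCheck(PyObject* callable, PyObject* args) {
  PythonObjectPtr result(PyObject_CallObject(callable, args));
  return result != nullptr;
}
```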
* Fix over-released Py_Boolean
Co-authored-by: Wei-Sheng Chin <wschin@outlook.com>
1. Fix training e2e pipeline. The failure was caused by my recent change #7632. The fix is adding "--cmake_extra_defines CMAKE_CUDA_ARCHITECTURES=70" to the build parameters because the machines have V100 GPUs.
2. Simplify Nuphar pipeline. It doesn't need to install a separate ONNX version (1.5.0).
3. Fix a problem where run_dockerbuild.sh ignored the OS version parameter. Now that it takes effect, I also set the Python version to the system default one (3.8 for Ubuntu 20.04).
1. Update manylinux build scripts. This will add [PEP600](https://www.python.org/dev/peps/pep-0600/) (manylinux2 tags) support. numpy has adopted this new feature, and we should do the same. The old build script files were copied from https://github.com/pypa/manylinux, but they have been deleted and replaced in the upstream repo. The manylinux repo doesn't have a manylinux2014 branch anymore, so I'm removing the obsolete code and syncing the files with the latest master.
2. Update GPU CUDA version from 11.0 to 11.1 (after a discussion with PMs).
3. Delete tools/ci_build/github/linux/docker/Dockerfile.manylinux2014_cuda10_2. (Merged the content to tools/ci_build/github/linux/docker/Dockerfile.manylinux2014_cuda11)
4. Modernize the cmake code of how to locate python devel files. It was suggested in https://github.com/onnx/onnx/pull/1631 .
5. Remove `onnxruntime_MSVC_STATIC_RUNTIME` and `onnxruntime_GCC_STATIC_CPP_RUNTIME` build options. Now cmake has built-in support for this. Starting from cmake 3.15, we can use the `CMAKE_MSVC_RUNTIME_LIBRARY` cmake variable to choose which MSVC runtime library we want to use.
6. Update Ubuntu docker images that used in our CI build from Ubuntu 18.04 to Ubuntu 20.04.
7. Update GCC version in CUDA 11.1 pipelines from 8.x to 9.3.1
8. Split the Linux GPU CI pipeline into two jobs: build the code on a CPU machine, then run the tests on another GPU machine. In the past we didn't test our Python packages; we only tested the pre-packed files, so we didn't catch the rpath issue in the CI build.
9. Add a CentOS machine pool and test our Linux GPU build on real CentOS machines.
10. Rework the ARM64 Linux GPU Python packaging pipeline. Previously it used cross-compiling, so we had to statically link to the C runtime. But now we have the pluggable EP API, and it doesn't support static linking, so I changed it to use QEMU emulation instead. The build is now 10x slower than before, but it is more extensible.