### Description
The ADO Cache task doesn't work directly when compilation happens inside a
container. The workaround is to mount the cache directory into the container
and let CCache inside the container read/write the cache data.
In short, we just leverage the ADO API to download/upload the cache data.
Post-jobs run in stack mode, so the PostBuildCleanUp task should be defined
first; that way, PostBuildCleanUp is executed last.
Otherwise, the Cache task would fail to upload the cache because the agent
directory has already been cleaned.
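For context, a minimal sketch (illustrative, not this pipeline's actual wiring) of how CCache is commonly hooked into a CMake build once the mounted cache directory is exposed through CCache's standard `CCACHE_DIR` environment variable:

```cmake
# Illustrative: route compiler invocations through ccache; ccache then
# reads/writes the mounted cache directory named by the CCACHE_DIR
# environment variable.
find_program(CCACHE_PROGRAM ccache)
if(CCACHE_PROGRAM)
  set(CMAKE_C_COMPILER_LAUNCHER   "${CCACHE_PROGRAM}")
  set(CMAKE_CXX_COMPILER_LAUNCHER "${CCACHE_PROGRAM}")
endif()
```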
Integrate TensorRT 8.5
- Update TensorRT EP to support TensorRT 8.5
- Update relevant CI pipelines
- Disable known non-supported ops for TensorRT
- Make the test timeout configurable.
We observed more than [20
hours](https://aiinfra.visualstudio.com/Lotus/_build/results?buildId=256729&view=logs&j=71ce39d8-054f-502a-dcd0-e89fa9931f40)
of unit-test runtime with TensorRT 8.5 in the package pipelines. Because we
can't use the placeholder builder to significantly reduce testing time in
the package pipelines (the c-api application test would deadlock), we only
run the subsets of model tests and unit tests that are related to TRT
(adding a new build flag, `--test_all_timeout`, and setting it to 72000
seconds in the package pipelines; a CMake sketch of the test-timeout
setting follows this list). Note that we still run all the tests in the
TensorRT CI pipelines to keep full test coverage.
- Include https://github.com/microsoft/onnxruntime/pull/13918 to fix an
onnx-tensorrt compile error.
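As referenced in the timeout bullet above, a sketch of what the longer test budget amounts to on the CTest side (the `--test_all_timeout` flag itself lives in build.py; the wiring below is illustrative, not the actual build script):

```cmake
# Illustrative: give the long-running unit-test suite a 72000-second
# budget instead of CTest's default timeout.
add_test(NAME onnxruntime_test_all COMMAND onnxruntime_test_all)
set_tests_properties(onnxruntime_test_all PROPERTIES TIMEOUT 72000)
```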
Co-authored-by: George Wu <jywu@microsoft.com>
### Description
Update protobuf version to 3.18.3 in
tools/ci_build/github/linux/docker/scripts/requirements.txt.
### Motivation and Context
Address component governance alert CVE-2022-1941
### Description
- Adds a dockerfile for Ubuntu with TensorRT 8.5.1.1.
- Adds option to run EP Perf pipeline with TensorRT 8.5
### Motivation and Context
Necessary to benchmark models with TensorRT 8.5
### Description
1. Remove the ROCm 5.3 pipeline: it has a rocblas bug, and we don't need it.
2. Remove the dependency on the CentOS docker image provided by
AMD (https://hub.docker.com/r/rocm/dev-centos-7) and build the ROCm CentOS
base image ourselves. The reference
dockerfile (https://github.com/RadeonOpenCompute/ROCm-docker/blob/master/dev/Dockerfile-centos-7)
contains far more than we need, so we simplified the ROCm manylinux
dockerfile.
3. All ROCm versions now use the same dockerfile,
`Dockerfile.manylinux2014_rocm`.
### Motivation and Context
Co-authored-by: peixuanzuo <peixuanzuo@linmif39a000004.zvflicr54joexhdgnhvmxrxygg.phxx.internal.cloudapp.net>
## Description
1. Convert some git submodules to cmake external projects
2. Update nsync from
[1.23.0](https://github.com/google/nsync/releases/tag/1.23.0) to
[1.25.0](https://github.com/google/nsync/releases/tag/1.25.0)
3. Update re2 from 2021-06-01 to 2022-06-01
4. Update wil from an old commit to 1.0.220914.1 tag
5. Update gtest to a newer commit so that it can optionally leverage
absl/re2 for parsing command line flags.
The following git submodules are deleted:
1. FP16
2. safeint
3. XNNPACK
4. cxxopts
5. dlpack
6. flatbuffers
7. googlebenchmark
8. json
9. mimalloc
10. mp11
11. pthreadpool
More will come.
## Motivation and Context
There are 3 ways of integrating 3rd party C/C++ libraries into ONNX
Runtime:
1. Install them to a system location, then use cmake's find_package
module to locate them.
2. Use git submodules.
3. Use cmake's external projects (`ExternalProject_Add`).
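A minimal sketch of the three options (the dependency, version, and target names are illustrative, not ORT's actual cmake code):

```cmake
# Option 1: use a copy installed somewhere on the system.
find_package(Protobuf REQUIRED)

# Option 2 (git submodule): the source tree is vendored into the repo,
# so it is simply added as a subdirectory of this project.
add_subdirectory(external/protobuf)

# Option 3: ExternalProject downloads, configures, builds, and installs
# the dependency at *build* time, in its own isolated sub-build.
include(ExternalProject)
ExternalProject_Add(
  protobuf_ep
  URL https://github.com/protocolbuffers/protobuf/archive/refs/tags/v3.18.3.zip
  CMAKE_ARGS -Dprotobuf_BUILD_TESTS=OFF -DCMAKE_INSTALL_PREFIX=<INSTALL_DIR>
)
```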
At first, when this project had just started, we considered both options 2
and 3. We preferred option 2 because:
1. It's easier to handle authentication. At first this project was not
open source, and it had some other non-public dependencies. If we use
git submodules, ADO handles authentication smoothly. Otherwise we would
need to manually pass tokens around and be very careful not to expose
them in build logs.
2. At that time, cmake fetched dependencies only after "cmake" had finished
generating vcprojects/makefiles (i.e., at build time), so it was very
difficult to keep cflags consistent. Since cmake 3.11 there is a new
command, FetchContent, which fetches dependencies while generating
vcprojects/makefiles, just before add_subdirectories, so the parent
project's variables/settings can easily be passed to the child projects.
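A minimal sketch of the FetchContent pattern described above (the nsync tag matches this PR; the flags are illustrative):

```cmake
include(FetchContent)

# Settings made here are visible to the fetched project, because the
# download and add_subdirectory() happen at configure time.
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_POSITION_INDEPENDENT_CODE ON)

FetchContent_Declare(
  nsync
  URL https://github.com/google/nsync/archive/refs/tags/1.25.0.zip
)
FetchContent_MakeAvailable(nsync)  # fetch + add_subdirectory()
```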
And as the project went on, we had some new concerns:
1. As we started to have more and more EPs and build configs, the number
of submodules grew quickly. For most developers, the majority of ORT
submodules are not relevant; they shouldn't need to download all of them.
2. It is impossible to let two different build configs use two different
versions of the same dependency. For example, right now we have protobuf
3.18.3 in the submodules, so every EP must use the same version.
Whenever we need to upgrade protobuf, we have to coordinate
across the whole team and many external developers. I can't manage it
anymore.
3. Some projects want to manage the dependencies in a different way,
either because of their preference or because of compliance
requirements. For example, some Microsoft teams want to use vcpkg, but
we don't want to force every onnxruntime user to use vcpkg.
4. Some users want to link protobuf dynamically, but our build script
only supports static linking.
5. It is hard to handle security vulnerabilities. For example, whenever
protobuf has a security patch, we have a lot of things to do. But if we
allowed people to build ORT with a different version of protobuf without
changing ORT's source code, customers who build ORT from source could
react on such things much more quickly; they would not need to
wait for an ORT patch release.
6. Every time we do a release, GitHub also publishes a source zip file
and a source tarball for us. But they are not usable,
because they are missing the submodules.
### New features
After this change, users will be able to:
1. Build the dependencies the way they want, then install them
somewhere (for example, /usr or a temp folder).
2. Or let cmake download the dependencies from their official websites.
3. Similar to the above, but using private mirrors to mitigate supply
chain risks (see the sketch after this list).
4. Use different versions of the dependencies, as long as our source
code is compatible with them. For example, you can't use
protobuf 3.20.x yet, as it would need code changes in ONNX Runtime.
5. Only download the things the current build needs.
6. Avoid building the external dependencies again and again in every build.
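For item 3, a sketch (the mirror path is hypothetical) of how FetchContent can be pointed at a pre-downloaded or mirrored copy of a dependency without touching ORT's cmake files:

```cmake
# Hypothetical local mirror: when the standard
# FETCHCONTENT_SOURCE_DIR_<NAME> cache variable is set, FetchContent
# uses that directory and skips its own download step.
set(FETCHCONTENT_SOURCE_DIR_GOOGLETEST
    /opt/mirrors/googletest CACHE PATH "" FORCE)
```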
### Breaking change
The onnxruntime_PREFER_SYSTEM_LIB build option is removed; you can think of
it as now defaulting to ON. If you don't like the new behavior, you can set
FETCHCONTENT_TRY_FIND_PACKAGE_MODE to NEVER.
Besides, for those who relied on the onnxruntime_PREFER_SYSTEM_LIB build
option, please be aware that this PR changes the find_package calls from
Module mode to Config mode. For example, in the past, if you had
installed protobuf via apt-get from Ubuntu 20.04's official repo,
find_package could find it and use it. After this PR, it won't. This
is because the protobuf version provided by Ubuntu 20.04 is too old to
support config mode. It can be resolved by getting a newer version
of protobuf from somewhere else.
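A sketch of the two knobs just mentioned (the find_package call is illustrative):

```cmake
# Build every dependency from source, as the removed
# onnxruntime_PREFER_SYSTEM_LIB=OFF behavior used to do:
set(FETCHCONTENT_TRY_FIND_PACKAGE_MODE NEVER CACHE STRING "" FORCE)

# Config mode requires the installed package to ship its own
# protobuf-config.cmake; the protobuf in Ubuntu 20.04's repo does not.
find_package(Protobuf CONFIG REQUIRED)
```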
### Description
Update protobuf-java to version 3.21.7. This change only impacts tests.
### Motivation and Context
The current version is affected by CVE-2022-3509.
### Description
Add ROCm 5.3.2 to the python package pipeline.
We build the rocm/dev-centos-7:x.x.x stage ourselves to avoid depending
on AMD's release.
### Motivation and Context
Co-authored-by: peixuanzuo <peixuanzuo@linmif39a000004.zvflicr54joexhdgnhvmxrxygg.phxx.internal.cloudapp.net>
PyTorch was added to the inference pipelines in PR #8027, but these
pipelines do not actually use PyTorch. PyTorch is huge, and here we would
need to install it for 4 different Python versions; removing it
significantly reduces the image size. Also, downloading a PyTorch package
now often takes more than 1 hour, so doing it 4 times may take 4 hours.
Valgrind was added by me a long time ago and was not used either. Now we
run Linux tests outside of docker containers, so when we have the need,
we can install it through apt-get on Ubuntu instead of doing it in the
CentOS container.
### Description
Upgrade the cmake version to 3.24 because I need to use a new feature that
is only provided in that version and later. Starting from cmake 3.24,
the
[FetchContent](https://cmake.org/cmake/help/latest/module/FetchContent.html#module:FetchContent)
module and the
[find_package()](https://cmake.org/cmake/help/latest/command/find_package.html#command:find_package)
command support integration, which means calls to
"FetchContent" can be implicitly redirected to "find_package", and vice
versa, with a cmake variable controlling the behavior. So we
don't need to provide such a build option ourselves: we can delete our
"onnxruntime_PREFER_SYSTEM_LIB" build option and let cmake handle it.
It also makes things easier for anyone who wants to use vcpkg.
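A sketch of the new integration (the re2 tag matches the dependency version used in this repo; the declaration is illustrative):

```cmake
include(FetchContent)
FetchContent_Declare(
  re2
  URL https://github.com/google/re2/archive/refs/tags/2022-06-01.zip
  # CMake 3.24+: try find_package(re2) first and only download on failure.
  FIND_PACKAGE_ARGS NAMES re2
)
FetchContent_MakeAvailable(re2)
```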
### Motivation and Context
Provide a unified package management method, and get aligned with the
community. This change is split from #13523 for easier review.
This PR enables ORT to execute graphs captured by TorchDynamo. Major compilation code is in `OrtBackend.compile` in ort_backend.py. `register_backend.py` is for plugging `OrtBackend` into TorchDynamo as a compiler.
Updates EP perf benchmarking scripts to upload new data with an improved table schema. In order to preserve compatibility with the current benchmarking pipeline, we still upload data that uses the old schema as well. These changes are required in order to improve data filtering capabilities and general UX in dashboards that visualize this data.
Details:
- EP names no longer hardcoded as columns for tables that store inference latency, session creation times, memory usage, and model/EP status.
- Add explicit branch, commit ID, and commit date columns to all tables
- Improvements to the docker image building scripts (simplify docker image build; support installing binary TensorRT packages)
- Remove use of deprecated DataFrame.append in favor of pandas.concat.
`python setup.py develop` doesn't install PyTorch as a normal package in
site-packages anymore, and the user must stay in PyTorch's root
directory to call `import torch`. This breaks the LORT tests, because
they contain `import torch` and are run outside PyTorch's root
directory. To make PyTorch a normal package again, this PR builds PyTorch
with `python setup.py install`.
### Description
1. Remove ROCm5.1.1 and ROCm5.2 from ROCm python package pipeline
2. Add ROCm5.3 to ROCm python package pipeline
pipeline:
https://aiinfra.visualstudio.com/Lotus/_build/results?buildId=237172&view=results
### Motivation and Context
Update the ROCm CI before relanding tunable GEMM #12853. This PR also
updates composable kernel to use CMake's HIP language support so that we can
mix a C/C++ compiler with the HIP compiler instead of being locked to
hip-clang.
### Description
We fix the iGPU unit tests and Python tests with this PR.
We also add the `packaging` pip package to the manylinux dockerfile build.
### Motivation and Context
This change is required to make sure the iGPU unit tests and Python tests
with OV are fixed.
Co-authored-by: shamaksx <shamax.kshirsagar@intel.com>
Co-authored-by: mayavijx <mayax.vijayan@intel.com>
Co-authored-by: pratiksha <pratikshax.bapusaheb.vanse@intel.com>
Co-authored-by: pratiksha <mohsinx.mohammad@intel.com>
Co-authored-by: Sahar Fatima <sfatima.3001@gmail.com>
Co-authored-by: Preetha Veeramalai <preetha.veeramalai@intel.com>
Co-authored-by: nmaajidk <n.maajid.khan@intel.com>
Co-authored-by: Mateusz Tabaka <mateusz.tabaka@intel.com>
1. Update CUDA version from 11.4 to 11.6.
2. Update Manylinux version
3. Upgrade GCC version from 10 to 11 for most x86_64 pipelines. CentOS 7 ARM64 doesn't have GCC 11 yet.
4. Refactor the python packaging pipeline:
a. Split the Linux GPU build job into two parts, build and test, so that
the build part doesn't need to use a GPU machine.
b. Make the Linux GPU build job and Linux CPU build job more similar: they now share the same bash script and yaml file.
5. Temporarily disable Attention_Mask1D_Fp16_B2_FusedNoPadding because it is causing one of our packaging pipelines to fail. I have created an ADO task for this.
These changes align the OV 2022.2 release with ORT. They cover
CPU FP16 support, dGPU support, the RHEL dockerfile, and the Ubuntu 20 dockerfile.
**Motivation and Context**
- This change is required to ensure the ORT-OpenVINO Execution Provider is
aligned with the latest changes.
Co-authored-by: mayavijx <mayax.vijayan@intel.com>
Co-authored-by: shamaksx <shamax.kshirsagar@intel.com>
Co-authored-by: pratiksha <pratikshax.bapusaheb.vanse@intel.com>
Co-authored-by: pratiksha <mohsinx.mohammad@intel.com>
Co-authored-by: Sahar Fatima <sfatima.3001@gmail.com>
Co-authored-by: Preetha Veeramalai <preetha.veeramalai@intel.com>
Co-authored-by: nmaajidk <n.maajid.khan@intel.com>
Co-authored-by: Mateusz Tabaka <mateusz.tabaka@intel.com>
Co-authored-by: intel <intel@iotgecsp-nuc04.iind.intel.com>
# Motivation
Currently, ORT minimal builds use kernel def hashes to map from nodes to
kernels to execute when loading the model. As the kernel def hashes must
be known ahead of time, this works for statically registered kernels.
This works well for the CPU EP.
For this approach to work, the kernel def hashes must also be known at
ORT format model conversion time, which means the EP with statically
registered kernels must also be enabled then. This is not an issue for
the always-available CPU EP. However, we do not want to require that any
EP which statically registers kernels is always available too.
Consequently, we explore another approach to match nodes to kernels that
does not rely on kernel def hashes. An added benefit of this is the
possibility of moving away from kernel def hashes completely, which
would eliminate the maintenance burden of keeping the hashes stable.
# Approach
In a full build, ORT uses some information from the ONNX op schema to
match a node to a kernel. We want to avoid including the ONNX op schema
in a minimal build to reduce binary size. Essentially, we take the
necessary information from the ONNX op schema and make it available in a
minimal build.
We decouple the ONNX op schema from the kernel matching logic. The
kernel matching logic instead relies on per-op information which can
either be obtained from the ONNX op schema or another source.
This per-op information must be available in a minimal build when there
are no ONNX op schemas. We put it in the ORT format model.
Existing uses of kernel def hashes to look up kernels are replaced
with the updated kernel matching logic. We no longer store
kernel def hashes in the ORT format model’s session state and runtime
optimization representations. We no longer keep the logic to
generate and ensure stability of kernel def hashes.
1. Move the Linux ARM64 part of python packaging pipeline to a real ARM64 machine pool
2. Refactor the Linux CPU build jobs of the python packaging pipeline into two parts: build and test. The test part will be exempted from Cyber EO compliance requirements as it won't affect the final bits we publish. This refactoring reduces dependencies in the build part; for example, this PR removes pytorch from the build dependencies.
3. Combine DML nuget packaging pipeline with "Zip-Nuget-Java-Nodejs Packaging Pipeline" as they all produce ORT nuget packages. Also, publish DML nuget packages and ORT GPU nuget packages to https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly feed.
* upgrade cuda version on ci pipelines
* keeping folder name same
* keeping folder name same
* setting manual seed for primitive test case
* resolving comments
* changing atol and rtol only for test case
Co-authored-by: Adam Louly <adamlouly@microsoft.com@orttrainingdev7.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
* moving training pipelines from cuda 11.5 to 11.6 and deprecating cuda 11.3
* change to cuda 11.6.2
* change pytorch's & torchvision's cuda version to 11.6
* specify deps version to 11.6.2
* update pytorch and torch text version
* torch 1.12.1
* change torchvision and torchtext version to be compatible with torch 1.12.1
* change cuda to 11.6 for cuda_home compatibility
Co-authored-by: Adam Louly <adamlouly@microsoft.com@orttrainingdev7.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
* Make ORT a PyTorch JIT backend
LORT likely doesn't work with aten fallback, so we only test LORT in its own CI.
* Revert changes to enable external CUDA allocator. Will add it later.
Revert "Revert changes to enable external CUDA allocator. Will add it later."
This reverts commit d5487f2e193014c805505afae8fb577c53667658.
Fix external allocator
* Relax tolerance and remove commented code
* Print more information in CI
* Fix pointer
* Address comments.
1. Reuse ORT-eager mode's environment.
2. Remove unused ctor.
* Use Pytorch master branch as all PRs are merged
Fix
* Refine based on cpplint feedbacks
* Revert changes to allow custom CUDA allocator in public APIs
* Use torch.testing.assert_close
* Use unittest framework
* Switch docker repo
* Rename *.cpp to *.cc
* Address comments
* Add comment
* Use same pipeline file for eager and lort pipelines
* Address comments
* Add yaml comment
* Fix cmake files
* Address comments
* Rename flags, remove printing code, remove dead comment
1. Delete the build scripts that were copied from the manylinux project. Use "git checkout" instead.
2. Update the manylinux version to get python 3.11. Related issue: Python 3.11 support #12343
3. Change the cuda version of the linux gpu build job of the nuget packaging pipeline from cuda 11.4 to cuda 11.6 to match the TRT job within the same pipeline. (A lot of other places need to be updated as well, but I'd prefer to put them in another PR.)
4. Make dockerfile names static. For example, replace tools/ci_build/github/linux/docker/$(DockerFile) with tools/ci_build/github/linux/docker/Dockerfile.manylinux2014_cpu. The former relies on a runtime variable, $(DockerFile), but template parameters are expanded early in pipeline processing, when most variables are not yet available. It's like C++ macros vs. variables.
* update to 2022
* Update the VS version
* Rolling back to gcc 10
* Rolling back
* Update cuda home
* remove "CMAKE_CUDA_ARCHITECTURES=52"
* update cuda Architecture to 70
* Delete cuda 10.2 training pipeline
* rolling back a mistake
* Update win-gpu-reduce-op-ci-pipeline.yml
* Update win-gpu-reduce-op-ci-pipeline.yml
* Update win-gpu-reduce-op-ci-pipeline.yml
* Delete tools/ci_build/github/linux/docker/scripts/training/ortmodule/stage1/requirements_torch1.10.0_cu10.2 directory
* Delete tools/ci_build/github/linux/docker/scripts/training/ortmodule/stage1/requirements_torch1.11.0_cu10.2 directory
* Add tests for all unary aten ops supported in eager mode
* fixing the PR draft
* fixing the merge
* changing eval to be at compile time
* adding requirements for eager
* 1.adding function to {ops}_out
2.cleaning the code
and adding comments
* editing the code according to code review
Co-authored-by: root <root@AHA-LIRONKESE-1>
* Try manually installing trt8.4 in multi-gpu pipeline
* Remove stmts that clean up cmake, ctest. Update tensorrt repository name passed to get_docker_image.py
* Update trt and cudnn home
* Don't install trtexec cli tool.
* Increase job timeout
* Revert timeout change and use trt placeholder builder build option
* update trt 8.4ga
* trt 8.4 linux ci pipeline
* fix cmake
* placeholder_builder
* trt 8.4 windows pipeline
* gpu package pipeline
* trt 8.4.1.5 , packaging pipeline updates
* python packaging
* ctest timeout
* python packaging test
* bump timeout
* python format
* format
* revert
* newline
* enable trt python tests
* typo
* python format
* disable on windows
* aten op for inference
* fix build error
* move some code to training only
* remove domain from operator name
* move aten_op_executor ext out from ortmodule
* add pipeline
* add exec mode
* fix script
* fix ut script
* fix test pipeline
* failure test
* rollback
* bugfix
* resolve comments
* enable aten for python build only
* fix win build
* use target_compile_definitions
* support io binding
* turn off aten by default
* fix ut
Co-authored-by: Vincent Wang <weicwang@microsoft.com>
Co-authored-by: zhijxu <zhijxu@microsoft.com>
* update TVM
* get alignment constant from TVM
* update TVM_VM_SetInputs to upstream with TVM API
* fix CI issue: update TVM EP dependencies
* add sudo
* revert changes needed to install missing package
* add package for TVM EP CI
Co-authored-by: Valery Chernov <valery.chernov@deelvin.com>
Co-authored-by: KJlaccHoeUM9l <wotpricol@mail.ru>
Description:
Add the extra parameter to match gelu in PyTorch in the contrib symbolic function.
Motivation and Context
Why is this change required? What problem does it solve?
The symbolic function in /onnxruntime/python/tools/pytorch_export_contrib_ops.py is missing a recently added parameter, `approximate`. We add this parameter and use the exporter-defined gelu if `approximate` is "tanh".
* move all logic for ubuntu dockerfiles
* pass in trt version
* update trt 8.0 file
* downgrade protobuf
* uncomment
* and
* change to 8.0
* update dockerfiles
* checkout protobuf based on version
* adding last dockerfile
* checkout 3.10 protobuf
* fix checkout version
* update to 8.2
* keep only one submodule sync
* cleanup
* Delete Dockerfile.custom-trt-perf
* create checkout submodules script
* properly compare decimals in bin/sh
* combine build ort paths
* deprecate TRT 7.2
* only checkout protobuf if we checkout older onnx-tensorrt
* only pull nvidia container if true, update image
* downgrade protobuf only if we checkout onnx-trt
* Update linux-gpu-tensorrt-daily-perf-pipeline.yml for Azure Pipelines
* Update linux-gpu-tensorrt-daily-perf-pipeline.yml for Azure Pipelines
* Add quotes to avoid path splitting
* address shellcheck
* use shellcheck suggestions
Description: Format all python files under onnxruntime with black and isort.
After checking in, we can use .git-blame-ignore-revs to ignore the formatting PR in git blame.
#11315, #11316
* remove rocm42 CI
* update torch to v1.11.0
Co-authored-by: Ethan Tao <ettao@microsoft.com@orttrainingdev7.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
* Enabling ov-ep for 2022.1 Release
->Added ov-ep 2022.1 flow
->Validated CPU Unit tests with OV
Master using onnxruntime_test_all unit
tests.
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* Fix for output mismatch b/w OpenVINO and ONNX
Refer:
https://jira.devtools.intel.com/browse/CVS-60310
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* Enabling Adobe ops
->Enable Resize op for iGPU
->Enable Add op for iGPU
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* Removing irrelevant conditions
->Removing some conditions from
GetCapability() which are no longer
required. (Removed conditions for
OV version support below 2021.2)
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* Enable upsample op
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* Enable Adobe proxy-e model
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* Removing any extra conditions for Opset13 ops
* Opset13 changes
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* Exception handling for devices
* Added comments
* Implement GPU Throttling feature
*Added GPU Throttling feature for iGPUs.
When the user enables it as a runtime option,
it helps reduce the overall CPU usage
of the application.
*Added changes to exercise this option
using the onnxruntime_perf_test application.
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* Renaming the runtime config option
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* Added the user to video and users group
* Handling_GPU.0_GPU.1
* Handling special conditions
->Handling corner cases for
device_type checks
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* Modification to include new api 2.0 changes in the code
* Added opset13 changes
->Enabled Few ops
->Added Debug info for case 3b in getcapability()
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* Log comments updated
* Changes to enable 2.0 api
* Fix build issue
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* Fixes issues
*Fixes compiler warnings c4458 on windows.
*Fixes the bug in device_type check logic
*Adds print info for enable_opencl_throttling
option in onnxruntime_perf_test
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* commit to make openvino_2021.4 compatible
* Fixed IO Buffer Optimization
* Fix output names issue
* Fix 2021.3 branch
* Bug Fix for Multiple inputs/outputs
- Assigns the right output_name and
input_name for the graph when
returned by CompiledModel::inputs()
OV function.
- Also takes care of the output mismatch
issue b/w openvino output and onnx
output
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* Add comments for the changes made
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* IO Buffer Changes
* Commit for Disabling GPU Throttling for 2021.4
* Updated branch
* Fix windows build
->Fixed windows build in debug mode
->Disabled scatternd3_tensor_int64
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* Fixed CPP Unit tests for CPU
-Fixed shrink, MVN, ReduceL2, Maxpool,
upsample, scatter, slice, reshape,
unsqueeze.
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* Fixed first set of GPU Tests
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* Fixed additional failing tests on GPU
->Added conditions to disable certain ops
under certain conditions
->Disabled certain tests
->Added no_dimension support for some
ops
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* Added Expand op support for CPU
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* Added condition for squeeze op
->Shape can't have empty axes attribute
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* Add support for LessOrEqual op function
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* OV Interface wait_for replaced by an indefinite wait call
* use names from ONNX model to access OV tensors
This change uses the input/output names
retrieved from the original onnx model to access
OV tensors and to check if there's any input
or output name mismatch b/w ONNX naming
and OV naming.
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* Fixes Myriad unit tests and other issues
->Fixes Myriad CPP unit tests
->Fixes output mismatch issue with models with
sub graph partitioning
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* Fix segfault issue
->Fixed case 3b condition in get_capability()
which was causing the segfault issue
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* Fixed build issue with ov 2021.4 with I/O buffer
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* Disables performance counters for I/O Buffer
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* Fixed inputs/outputs mismatch for HDDL with 2022.1
Signed-off-by: Mohammad Amir Aqeel <mohammadx.amir.aqeel@intel.com>
* Fix to enable GPU FP16
* Enabled mlperf_ssd_mobilenet_300 model fully on CPU
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* Added ov version specific dll packaging for nuget
* Fixed conditions for few ops
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* Dockerfile updates
* Updated License Info
-Updated the copyrights License Info
-modified FP16 transformations with OV 2022.1
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* Disabling mlperf_ssd_mobilenet_300 model
->Disabled this model for openvino. The
test is failing in Internal_CI pipelines.
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* Disabling failing python CPU Tests
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* Fixed flake8 python errors
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
Co-authored-by: hdgx <harinix.d.g@intel.com>
Co-authored-by: mayavijx <mayax.vijayan@intel.com>
Co-authored-by: sfatimar <sahar.fatima@intel.com>
Co-authored-by: mohsinmx <mohsinx.mohammad@intel.com>
Co-authored-by: Mohammad Amir Aqeel <mohammadx.amir.aqeel@intel.com>
* Update orttraining release pipelines to use torch 1.11.0
* Change requirements_torch...txt to requirements.txt
* Update cuda cmake architectures and clean up old files
* add support for bool type
* add TVM EP support for tests
* include TVM EP in python test pool
* fix pylint
* moved technical imports to a separate file
* clean up post build actions & move _ld_preload.py extension to CMake level
* add files for include TVM EP into CI
* implement custom logger for TVM
* replace TVM logging with ONNX RT logging
* update link for TVM EP tutorial
* clean up TVM EP cmake
* add pybind auto enabling for TVM EP
* fix blank spaces
* code review fixes
* replace print with comment
* add list of EP without TVM EP
* enable onnx tests
* disable contrib ops and ml ops
* reuse Dockerfile.ubuntu
* Move install_tvm_test_dependencies.sh out of Docker context dir, update build definition.
Co-authored-by: Edward Chen <18449977+edgchen1@users.noreply.github.com>
Move binary size check(s) to a separate pipeline. In the future, other binary size-related builds can go here.
Add publishing of build artifacts for easier analysis.
Add optional build with debug info.
* migrate to 1ES Hosted Pool
* migrate to Kusto database
* refactor and organize ep names with ORT prefix
* standardize TRT benchmarking with save/load engine, input binding, and workspace
* Add TRT 8.2 to ep perf pipeline
* update model_list.json with full onnx zoo
* add anubis credentials
* add anubis credentials
* clarify trt variables
* get system info from docker image
* remove unwanted commenting
In a reduced ops build, some source files get updated. This change moves the updated files into the build directory. This way, it is easier to simultaneously manage different build directories (with possibly different reduced ops configurations) based on a single source directory.
* update base image from 11.4.0 to 11.4.2
* update Linux TRT GPU pipeline to TRT 8.2
* update onnx-tensorrt to 8.2-GA
* disable failing TensorRT 8.2 tests.
* update pad test.
* fix
* update win trt ci pipeline to trt 8.2
* test run with cuda 11.4 and cudnn 8.2
* increase timeout
* revert
* revert
* update packaging pipelines to use trt 8.2
* fix typo
* update trt gpu perf pipeline to trt 8.2
* increase timeout
* delete deprecated ci-perf-pipeline.yml
* bump timeout
* adjust timeout packaging
* update to torch 1.10
* update torchvision version
* update torchtext version
* remove deprecated option enable_onnx_checker
* add unit test to test gradient of GatherElements
* add ORTMODULE_ONNX_OPSET_VERSION in a docker file
* add ortmodule and eager mode test
* add ortmodule dependency
* fix eager pipeline
* skip the ortmodule test for windows due to win ci issue
* remove useless win ci change
* add torch
Co-authored-by: Abhishek Jindal <abjindal@microsoft.com>
* Changes to ensure the openvino build goes through on Windows
* Modified Hetero plugin Logic
*Modified the Hetero feature logic. In Hetero,
for an operator to be marked true in getcapability(),
it should be supported by at least one of the devices
specified with HETERO in the device_type.
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* OV updated to 2021.4.2 version
* OV updated to 2021.4.2 version
* Updated OV to 2021.4.2 version, mono download link and dotnet version
* Copying Managed nugets in openvino c# docker file
*Copying Managed nuget to nugets artifacts
directory
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
Co-authored-by: saharfraza <sfatima.3001@gmail.com>
Co-authored-by: mayavijx <mayax.vijayan@intel.com>
Co-authored-by: Aravind Gunda <aravindx.gunda@intel.com>
Adding ARM64 depthwise convolution kernel for symmetric quantization
Motivation and Context
Two improvements over the current kernel code:
1. Signed int8 based instructions, no need to extend from 8b to 16b before multiplication.
2. Unrolled loop with manual software pipelining
Co-authored-by: Chen Fu <fuchen@microsoft.com>
ORT format model runtime optimization implementation is in progress.
This change adds a build.py option to disable the partial runtime optimization implementation, adds CI builds to test it, and disables runtime optimizations in mobile package builds.
Add Xamarin support to the ORT nuget packages.
- Update C# code to support Xamarin builds for iOS and Android
- refactor some things to split out common code
- include iOS and Android ORT native shared library in native nuget package
* make work for both rocm 4.2 and rocm 4.3.1
* fix rocm 4.3.1 docker image reference
* fix CUDA_VERSION to ROCM_VERSION
* fix ReduceConsts conflict def
* add ifdef to miopen_common.h as well
* trailing ws
* 2021.4.1 Docker and ci changes
* OV version change
* Removing Imagescaler op from the op list
Reverting this change, which was added in the last
PR. Imagescaler is now deprecated, so we remove
it from the supported list. This
op was also causing a regression in the performance
of the FP16 models.
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* Re-writing the help message for num_of_threads
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
Co-authored-by: Aravind Gunda <aravindx.gunda@intel.com>
* install protobuf from source
* fix rm command in Dockerfile
* fix options on rm command
* fix cd into protobuf source directory
* try again
* remove strip step
* debug list the files
* ls on /usr
* more debug
* more debug
* adjust LD_LIBRARY_PATH
* try remove protobuf before ORT build
* draft - enable model local function
* enable model local functions in ORT
* update to latest rel onnx commit
* plus tests
* plus more updates
* plus updates
* test updates
* Fix for nested functions + shape inference
* plus bug fix and updates per review
* plus fixes per review
* plus test updates
* plus updates per review
* plus fixes
* fix a test
* copy changes from trt_and_mem
* second edits
* Update linux-gpu-tensorrt-ci-perf-pipeline.yml for Azure Pipelines
* Update linux-gpu-tensorrt-ci-perf-pipeline.yml for Azure Pipelines
* Update linux-gpu-tensorrt-ci-perf-pipeline.yml for Azure Pipelines
* change to cuda 11.4
* build with cuda 11.4
* Update Dockerfile.ubuntu_cuda11_1_tensorrt7_2
* add cmake extra defines
* cmake architectures
* fix cmake arch
* Delete ubuntu-18.04.Dockerfile
* Rename Dockerfile.ubuntu_cuda11_1_tensorrt7_2 to Dockerfile.ubuntu_cuda11_4_tensorrt7_2
* Update linux-gpu-tensorrt-ci-perf-pipeline.yml
* Update linux-gpu-tensorrt-ci-perf-pipeline.yml for Azure Pipelines
* removing previous ort args
* rename to cuda 11.4
* remove cuda 10_2
* delete trt 7.1
* remove 7.1
* Passing in cuda architecture to reduce build time
* always add submodule sync due to recursive cloning
* fix run command
* add and
* take away unused arms and share python installation script
* Update linux-gpu-tensorrt-ci-perf-pipeline.yml
* Update Dockerfile.tensorrt
* cleanup file
* install python directly on dockerfile - move to scripts in future
* Update Dockerfile.custom-trt-perf
* adding cuda 11.1 for the missing libnvrtc.so.11.1
* Delete install_python.sh
* modify for test
* modify for test
* modify for test
* modify for test
* modify for test
* modify for test
* prepare for PR
* Rename cuda directory to gpu directory in tarball
* Fix gpu java package
* fix bug
* fix small bug
* Add onnxruntime_providers_shared.dll into gpu nuget package
* Modify for test
* Temporarily remove for test
* Modify for test
* Modify for test
* Test packaging Windows combined GPU
* Test packaging Windows combined GPU
* Test packaging Windows combined GPU
* Test packaging Windows combined GPU
* modify for test
* modify for test
* fix bug
* Modify for test
* Modify for test
* Modify for test
* Modify for test
* Modify for test
* Modify for test
* Modify for test
* Modify for test
* Prepare for PR
* Prepare for PR
* Code refactor
* Rename proper Artifact name
* Rename intermediate Artifact names
* Revert Artifact Names
* Rename Artifact Names
* Modify Artifact name
* Modify Artifact name
* Modify Artifact name
* Update Java package
* Update Java package
* fix bug to change artifact name
* Fix bug for the wrong file path
* Fix no fetching correct artifact and test
* temporarily modify for test
* undo the change for test
* fix build - python.h not found
* disable --build_shared_lib for ortmodule tests
* fix
* fix the build flag
* disable --build_shared_lib for training path (not only for ortmodule)
* fix missing test model files
* disable test CApiTest.test_custom_op_library when ENABLE_TRAINING_TORCH_INTEROP is ON
* enable custom_op_library build
* fix build
* fix
* merge master and fix build failure
* build onnx_test_runner when onnxruntime_ENABLE_TRAINING_TORCH_INTEROP is ON
* resolve comments
* use --enable_training_torch_interop to replace "onnxruntime_ENABLE_TRAINING_TORCH_INTEROP=ON"
* initial update from 11.1 to 11.4
* change 11.4.1 to 11.4.0
* adjusting to match nvidia/cuda image tags
* adjusting to match nvidia/cuda image tags centos7
* correction to 11.4.0
* correction to 11.4.0
* update to cuda 11.4
* change training back to 11.1
* change training back to 11.1
* point to correct nvcr.io/nvidia/cuda 11.4.1 image
* change centos8 to centos7
* correct cudnn path
* Update linux-gpu-ci-pipeline.yml for Azure Pipelines
* Update c-api-noopenmp-packaging-pipelines.yml
* need to resolve centos images but remove space and change to 11.4
* Update linux-gpu-ci-pipeline.yml
* add cudnn to docker image
* bump devtoolset to 10
* revert cuda 11.4 change to setup_env_trt
* orttraining back to 11.1
* use nvcr.io
* Fix previous change back to cuda 11.1
* update cudnn path
* use cudnn image (revert if failure)
Add IsSparseTensor
Add CreateSparseTensor
Add utilities and test fully sparse instantiation
Fully sparse blocksparse
Add test and docs for fully sparse tensor instantiation
Rework creation API
Use API
Non string API
Retrofit of existing String API
Add tests
Add documentation
Address build issues (Winml pending)
Add inference test
Bump binary size
Add ifdef DISABLE CONTRIB
* updates for picking pnnx commit
* add tests filter to c# tests
* plus test fixes
* fix versioning for contrib ops
* fix tests
* test filter for optional ops
* more versioning related updates
* fix test
* fix layernorm spec
* more updates
* update docs
* add more test filters
* more filters
* update binary size threshold
* update docs
* plus more fixes
* updates per review
* update to release commit
* add filters for optional type tests
* plus updates
* update onnx-tensorrt parser to master
* disable unsupported tests
* add cuda sm 75 for T4
* update tensorrt pipeline
* update trt pipelines
* update trt pipelines
* Update linux-gpu-tensorrt-ci-pipeline.yml
* update trt cid pipeline
* Update linux-gpu-tensorrt-ci-pipeline.yml
* Update Tensorrt Windows build pool and TensorRT/CUDA/CuDNN version
* update to cuda11.4 in trt ci pipeline
* update base image to cuda11.4
* update packaging pipeline to cuda11.4
* clean up
* remove cuda11.1 and cuda11.3 docker file
* disable unsupported tensorrt tests at runtime
* Update linux-multi-gpu-tensorrt-ci-pipeline.yml
1. Update SDLNativeRules from v2 to v3. The new one allows us to set excluded paths.
2. Update TSAUpload from v1 to v2, and add a config file ".gdn/.gdntsa" for it.
3. Fix some parentheses warnings.
4. Update cmake to the latest.
5. Remove the "--x86" build option from the pipeline yaml files. We can now auto-detect the cpu architecture from python, so we don't need to ask the user to specify it.
SparseTensor support
Implement Builder pattern
Fix support for 1-D and 2-D COO indices
Implement and test CSR support.
Handle shape inference for SparseTensors
Implement conversion for COO, CSR and tests.
Address the case where constant sparse initializer is the output.
Implement test infra for SparseTensors
Implement SparseDenseMatMul for Csr and COO and tested it.
Add hash for SparseToDenseMatMul
Finish shared provider refactor
Refactor GetOrCreate to Create
Working on py interface
Expose OrtDevice and use it in allocate_numpy
Adjust Sparse interfaces, add support for string SparseTensor. Add tests.
Add and test to_cuda()
Add accessors to format specific indices
Test values and indices views, read-only flag, after GC access
Add sparse related methods to OrtValue
Re-work SparseTensor wrapper, add OrtValue methods
Rework numpy_array_to_cuda/to_cpu
Add run_with_ort_values
Add models and test sparse_mat_mul with run_with_ort_values
Refactor sparse tensor to use a single buffer
Ifdef x86 Eigen CSR sparse matmul implementation
Exclude broken test, check for string type when copying cross device
Split pybind schema, regenerate docs, add exclusion
Conditionally exclude schema module
Update docs fix cuda build
Add test to a filter and regenerate JS docs
Add conversion and test string support for sparse tensors
Exclude conversion utils from minimal build
Add CUDA Memcpy and adjust provider interfaces
* Changes to ensure the openvino-ep-2021.4 branch is created
* Fix failing cpp and python unit tests
* Fixed Myriad Tests for Ov_2021.4
* Disabled failing python tests for myriad
* Fixes models which were breaking w.r.t 2021.4
* Added fixes to Fix tinyyolov3 working on Myriad
and MaskRcnn, FasterRcnn using GPU_FP32
* Added FP16 output data type support for ngraph
* Implemented ReadNetwork() method
->Using the Core::ReadNetwork() method for reading and creating a CNNNetwork
->Since the OpenVINO™ 2020.4 version, the Inference Engine enables reading ONNX models
via the Inference Engine Core API, and there is no need to directly use the low-level
ONNX* Importer API anymore. To read ONNX* models, it's recommended to use the
Core::ReadNetwork() method, which provides a uniform way to read models from the ONNX format.
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* Fixed ngraph f16 supported output type
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* Added comments in data_ops.cc
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* Fixed broken windows build
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* Disable failing CPP tests on CPU
Some of the convtranspose tests are failing on
OpenVINO-EP CPU due to an accuracy mismatch w.r.t. the
default CPU, so currently we are disabling
these tests.
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* Updated for ov version 2021.4
* Changes to include qdq ops in code
* Disabled failing python tests on GPU
Disabled two maxpool python tests on
GPU as they were passing but throwing
segfault
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* Fix the backward compatibility issue
ReadNetwork() API has a bug and will only work
starting from OpenVINO 2021.4 version.
The previous versions will still have to use
onnx importer route
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* Fix CMakeLists.txt for OpenVINO EP
If a directory with OpenVINO is sourced,
the latest OpenVINO settings have to
be imported.
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
Co-authored-by: sfatimar <sahar.fatima@intel/com>
Co-authored-by: sfatimar <64512376+sfatimar@users.noreply.github.com>
Co-authored-by: Aravind Gunda <aravindx.gunda@intel.com>
* Add ability to generate ios static framework
* Fix typos
* Add pod cache clean, update some comments of previous commit
* Fix CI failure with newly added cpuinfo library
* Update test model (CoreML requires node has a name)
* Addressed CR comments
The PyTorch cpuinfo library allows us to query the current cpu features, micro-architecture, cache size, etc. This information is needed for targeted performance optimizations.
Unfortunately it does not work under Windows/ARM. We need to develop our own later.
* Add metadata_props to ORT model
* Minor update
* Update python binding, and increase the minimal pipeline size threshold
* Fixed a small bug in serializing ir_version
* Remove temp ort.py.fbs and add it to .gitignore
* first attempt to share docker image across python and torch versions
* set dependency between jobs
* fix yaml grammar
* remove python version from first stage
* clean deepspeed directory
* split into two images according torch version
* fix yaml syntax
* invalidate cache
* remove DS to prevent torch 1.9.0 upgrade
ORTModule requires two PyTorch CPP extensions that are currently JIT compiled. The runtime compilation can cause issues in environments without all build requirements or in environments with multiple instances of ORTModule running in parallel.
This PR creates a custom command to compile those extensions, which must be run manually before ORTModule is executed for the first time. When users try to use ORTModule before the extensions are compiled, an error with instructions is raised.
The PyTorch CPP extensions for ORTModule can be compiled by running:
python -m onnxruntime.training.ortmodule.torch_cpp_extensions.install
A full build environment is needed for this.
1. Remove some unused code and simplify tools/ci_build/github/linux/run_dockerbuild.sh.
2. Enable Nuget CUDA tests. The original design was that we could leverage Directory.Build.props and let cmake generate the required properties (USE_CUDA/...) there. However, in the nuget packaging pipeline we test the package on a different host that doesn't run the cmake command and thus doesn't have the auto-generated Directory.Build.props file.
* Revert for testing TensorRT 7.1
* change to original googletest version
* change machine
* remove build arg
* change back machine
* revert back googletest version
* Make it ready to merge to master
* revert onnx-tensorrt to v7.1
* rename yml
* use [[ ]] in bash command
* add sudo
* add chmod
* add correct path
* change another way to revert onnx-tensorrt
* change docker image to manylinux build
* Register Torch Custom autograd.Function
* Add flag to supress pybind11 warning
* Avoid unnecessary include in cmake
* Add missing reference
* Add getter for registerred functions
* Format for making subsquent changes cleaner
* Fix interop feature build failure
* Forward pass, run PyOP on CPU EP
* clean up the code
* Fix build
* Define new ops
* refactor pyop - extract PyOpLibProxy class
* Hacks to run example
* implement the kernel compute func
* add back PyOP for comparision experiments
* debug info - thread id
* refine the kernels
* Polish code
(cherry picked from commit 4ed606f9a0)
* Fix the Tensor address mismatch on the C++ side
* PythonOpGrad compute
* add distributed test case
* refine test cases
* get dist.get_rank() in Autograd forward pass
* Add CUDA kernels
* Store float, int, and tuple of them as PythonOp's attributes
* Populate local changes
* Fix bugs
* PythonOp/PythonOpGrad CUDA kernels
* Support non-tensor inputs
* Single GPU FP16 Run Pass
(cherry picked from commit e539989e91e18ee997900292d3493b97d3eafa8a)
* Fix segfault
* add basic test cases
* Save progress
* fix gradient builder for an Add op whose inputs are the same
* add test cases for auto grad fallback feature
* fix ref cnt issue. add thread id for debugging
* POC: remove interface class
* Remove interface classes
* Clean a bit
* Coarse-grained clean up after rebase master
* reset pyop and language_interop_ops to latest master
* Fix missing part during merge
* re-structure torch related language interop files
* Fix build
* Fix tests and build
* Fix build and basic unit tests
* Fix most of uts
* remove unnecessary import
* clean up and fix build when enabling language_interop_ops
* Fix single-GPU UTs
* Move runner register into ORT package
* Update dist UTs to new style
* Also fix distributed UTs and leaf gradient problem
* Static generation for constant args
* Move arg_positions_ to static field
* Rename some functions
* Move arg creation into a function
* Clean output logic in PythonOp
* Move PythonOp's ctor
* Revise PythonOpGrad
* Fix "ORT only supports contiguous tensor for now" for inputs
* Fix evaluation mode error, add test & clean up
* clean up codes
* Fix issues introduced by recent master change (enabled symbolic shape infer)
* automatically register forward/backward function pointers && clean up
* Fix multi-output case
* Add a test back
* fix build and clean up
* RAII for function params PyObject
* Use new exporter
* Clean full name in new exporter
* Fix UTs
* Format a file
* Add "inplace" back
Remove a legacy comment
* Refine TorchProxy
1. Make TorchProxy a formal singleton class.
2. Remove unused Scope class.
3. Simplify the call to Forward and Backward. The two functions now
automatically acquire and release GIL state, so user doesn't need
any GIL-related calls.
* Format
* Add lock to avoid racing condition when registering Python objs
* Fix Python call param ref issues && Add RefcountTracker for debug build && Clean up
* clean up print
* Resolve part of comments && clean up
* Fix a potential bug
* track pyobject consistently
* move kernels to cpu provider as base class
* Refactor - 1. Extract PythonOpBase/PythonOpGradBase 2. Implement CPU kernels 3. Test coverage for CPU kernels
* Refine register code
* Add a missing macro
* Release python call result objects with PythonObjectPtr && Add UnRegisterContext && Track PyObject for Debugging && Clean up
* Fix random segfault issue - releasing a wrong ctx pointer for inplace cases
* put ref count in debug macro
* Move GIL out
* Refine tests
* Fix memory leak issue && forward output lifecycle issue:
1. Unregister the OrtValue PythonObject. Currently, the OrtValue shares the same buffer with PythonOp/PythonOpGrad's outputs, so after those kernels' outputs are released, the "leaked" OrtValue kept the shared buffer from being released.
2. In PyTorch's forward+backward execution, the forward outputs (e.g. torch tensors) maintain the context/saved variables/dirty inputs, etc., which are used for backward execution, so their lifetime should extend until after the backward runs. This change adds such a dependency of PythonOpGrad on PythonOp.
* Move dlpack->ortvalue into C++ to avoid temp object registration
* Fix the over released Py_False/Py_True && refine tests
* Clean up unused functions
* Always assume the first forward output is context so we don't need to test unused cases.
* Fix a memory leak
* move-copy unique_ptr & avoid C-style casting
* Use inplace attribute to determine if input tensors are copied
* Move DlpackCapsuleDestructor's to a common place
* Thread-safe TorchProxy
* Use OrtValue instead of OrtValue*
* Only keep checks for Debug build
* Wrap some long line per comment
* onnx_export_type --> kwargs
* Use requires_grads to create PythonOpGrad's inputs
* add missing files during master merge
* Fix build issue after merge
* Address two comments.
1. Internalize DlpackCapsuleDestructor
2. Change "(" to "]" for describing closed interval.
* Address some comments.
1. "override" -> "overwrite" to avoid using reserved keyword.
2. Call DLPack's helper to create OrtValue for avoiding repeated code.
* Address comments.
1. Pass std::mutex to registration helpers so their callers don't
have to lock the mutex explicitly.
2. Rename "func_context_pool_mutex_" to "mutex_". This mutex is the global mutex for OrtTorchFunctionPool.
* Add bridging code to make cuda kernels work with merged master
* put debug macro check within RefCountTracker && use default logger for debug info && remove useless ortvalue_ptr interface && typos && revert unnecessary blank line changes
* fix some comments
* Resolve more comments
* Capitalize a word
* use unique_ptr instead of ObjectPointer for PyObject management && add convention
* Support symbolic shape
* Remove unused variable
* fix build
* Enable function registration for training only && rectify ToDlpack/FromDlpack merge with master.
* Don't add context for non-PythonOp operators (for example AtenOp)
* Fix build error
* Polish frontend part.
1. Avoid adding kwargs to ORTModule's ctor
2. Use onnx_export_type rather than kwargs for type safety
3. Fix some build bugs.
* Resolve simpler comments
* Resolve export related comments
* sync master && fix tests && fix non-training build error
* Fix build errors
* add target link lib
* windows build error
* Fix orttraining-linux-ci build
* disable autograd test && clean up
* fix linux orttraining ci build
* try fixing win build error
* Revise append calls in runner
* Enable custom function using a function
* Rename to avoid using reserved keyword
* Use list comprehension
* Set ORT random seed in tests
* Remove print code and fix ctx shape
* [] -> list()
* Move autograd.Function and nn.Module into corresponding functions
* Move test helpers
* Polish dist test a bit. Tried move helpers to helper file but it causes a deadlock.
* trying fix undefined reference
* Context is not managed by global pool
* Polish dist test
* Polish dist test
* Add enable_custom_autograd_function
* Remove enable_custom_autograd_function from ctors
* Add doc strings
* Shorter code
* Address comments
* Add one empty line
* revert a minor and not needed change
* Address comments
* Back to reference
* Fix windows builds
* Fix Windows debug build failing to find 'python39_d.lib'
* fix mac build error
* revert _to_contiguous change
* add debugging tag for orttraining-cpu-ci
* Fix the wrong PYTHON_LIBRARIES value, which is affected by the PYTHON_LIBRARY given in the build command
* add debugging info
* Fix the build in this case: PYTHON_LIBDIR: /opt/_internal/cpython-3.7.10/lib, PYTHON_EXECUTABLE: /opt/python/cp37-cp37m/bin/python3, PYTHON_MULTIARCH: x86_64-linux-gnu, PYTHON_LIBRARY_PATH: python3.7m
* fix build error due to python lib not found
* Fixes
1. Release PyObjects.
2. Do not use deepcopy, because we assume autograd.Function's non-tensor inputs are static (constants), so there should be no side effect from calling any autograd.Function multiple times (see the sketch after this list).
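A hedged sketch (hypothetical helper) of what the no-deepcopy assumption permits: a static non-tensor input can be shared by taking a new reference instead of being copied, which is only safe if the autograd.Function never mutates it.

```cpp
#include <Python.h>

PyObject* ShareConstantInput(PyObject* constant_input) {
  Py_INCREF(constant_input);  // share the constant, don't deep-copy it
  return constant_input;      // caller owns this reference and must Py_DECREF it
}
```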
* Revert dtoc for decreasing refcnt
* add debugging log
* add debugging tag
* Fix a small leak
* Remove ONNX_FALLTHROUGH flag
* debug tag
* debug tag
* fix builds
* remove debug tag
* fix build
* fix builds
* fix build
* install python3 in centos, in case there is no libpython3.xm.so
* build python so for redhat
* add training cpu specific docker, build python so inside
* revert build-cpython change
* try fixing numpy include issue
* install_deps after re-installing cpython
* fix build && remove debug tag
* install openssl before cpython
* let's say: builds pass!
* add build flag for torch interop, only enable it when training+Python is enabled
* skip ComputeBroadcastBackwardAxesDynamic for the shared inputs
* fix build
* add debug info for padgrad test
* Fix builds
* Split dlpack_converter into C++ and Python interfaces respectively, so different builds use them as needed.
* clean up the changes
* fix addsubgradient builder
* Fix builds
* clean up
* clean up
* Address some comments.
1. Use a pointer wrapper to avoid calling Py_DECREF (see the sketch after this list)
2. Remove unregister_* functions
3. Allow repeated registration by skipping those with existing keys
4. Unregister context in PythonOpGrad
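A hedged sketch of the pointer wrapper from item 1: a `std::unique_ptr` whose deleter calls `Py_XDECREF`, so the reference is dropped automatically on every exit path instead of through manual `Py_DECREF` calls.

```cpp
#include <Python.h>
#include <memory>

struct PyObjectDecRef {
  void operator()(PyObject* obj) const noexcept {
    Py_XDECREF(obj);  // safe on nullptr
  }
};
using UniquePyObjectPtr = std::unique_ptr<PyObject, PyObjectDecRef>;

// Usage: the object is DECREF'ed automatically when ptr goes out of scope.
// UniquePyObjectPtr ptr(PyLong_FromLong(42));
```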
* Fix over-released Py_Boolean
Co-authored-by: Wei-Sheng Chin <wschin@outlook.com>
1. Fix training e2e pipeline. The failure was caused by my recent change #7632. The fix is adding "--cmake_extra_defines CMAKE_CUDA_ARCHITECTURES=70" to the build parameters, because the machines have V100 GPUs.
2. Simplify Nuphar pipeline. It doesn't need to install a separate ONNX version (1.5.0).
3. Fix a problem where run_dockerbuild.sh ignored the OS version parameter. Now that the parameter takes effect, I also set the python version to the system default (3.8 for Ubuntu 20.04).
1. Update manylinux build scripts. This adds [PEP600](https://www.python.org/dev/peps/pep-0600/) (perennial `manylinux_x_y` tags) support; numpy has adopted this new feature, and we should do the same. The old build script files were copied from https://github.com/pypa/manylinux, but they have been deleted and replaced in the upstream repo, and the manylinux repo doesn't have a manylinux2014 branch anymore. So I'm removing the obsolete code and syncing the files with the latest master.
2. Update GPU CUDA version from 11.0 to 11.1 (after a discussion with PMs).
3. Delete tools/ci_build/github/linux/docker/Dockerfile.manylinux2014_cuda10_2. (Merged the content to tools/ci_build/github/linux/docker/Dockerfile.manylinux2014_cuda11)
4. Modernize the cmake code that locates python devel files, as suggested in https://github.com/onnx/onnx/pull/1631.
5. Remove the `onnxruntime_MSVC_STATIC_RUNTIME` and `onnxruntime_GCC_STATIC_CPP_RUNTIME` build options. cmake now has built-in support for this: starting from cmake 3.15, we can use the `CMAKE_MSVC_RUNTIME_LIBRARY` variable to choose which MSVC runtime library to use.
6. Update Ubuntu docker images that used in our CI build from Ubuntu 18.04 to Ubuntu 20.04.
7. Update GCC version in CUDA 11.1 pipelines from 8.x to 9.3.1
8. Split the Linux GPU CI pipeline into two jobs: build the code on a CPU machine, then run the tests on separate GPU machines. In the past we didn't test our python packages; we only tested the pre-packed files, so we didn't catch the rpath issue in CI builds.
9. Add a CentOS machine pool and test our Linux GPU build on real CentOS machines.
10. Rework the ARM64 Linux GPU python packaging pipeline. Previously it used cross-compiling, so we had to statically link the C runtime. But now we have the pluggable EP API, which doesn't support static linking, so I changed it to use qemu emulation instead. The build is now 10x slower than before, but it is more extensible.
* First iteration of making cuda a shared provider.
The shared OpKernel change was separated out, so doing this to merge with that change.
* More cuda shared library refactoring
* More cuda shared library refactoring
* More build options tested, converted the training ops over.
* Fix merge breaks
* Fix submodules
* Fix submodules
* Fix submodules
* Fix python
* Fix compile errors
* Duplicate symbol fix
* Test fix for ROCM provider
* Another ROCM test workaround
* ROCM Build Test
* ROCM build fix
* ROCM
* ROCM
* ROCM
* ROCM
* ROCM
* ROCM test
* Reduce header dependencies
* Remove redundant namespace
* Test fix for linux
* Fix linux build
* Fix Eigen build error
* Fix unused parameter warning
* Test link error
* Another linker test
* Linker test
* Linker test
* Another test
* Another build test
* Fix linux link error
* Build test
* Fix control flow ops to use common base class with core code
* Remove extra qualifiers
* Fix template syntax for linux
* Fix cuda memory leak
* Fix pybind
* Test disabling cast
* Cleanup
* Restore cuda in test
* Remove more header dependencies
* Test not adding cuda provider to session
* Make GetProviderInfo_CUDA throw
* No-op cuda provider creation
* Fix some setup issues
* Fix memory cleanup on unload
* Diagnostics
* Don't unload library
* Add diagnostics
* Fix deleting registry at the right time.
* Test disabling profiler
* Fix merge break
* Revert profiler change
* Move unloading of shared providers into Environment
* Free more global allocations before library unloads
* Add more diagnostics
* Move unloading back to the OrtEnv as there are multiple Environments created during a session.
Remove some library dependencies for tests.
* Fix more cmake files
* ERROR -> WARNING
* Fix python shutdown
* Test not using dml in pipeline
* Change python version and disable dml
* Update python version
* Test adding unload method for shared providers
* Disable DLL test
* Python test
* Revert "Python test"
This reverts commit c7ec2cfe98.
* Revert "Disable DLL test"
This reverts commit e901cb93aa.
* Revert "Test adding unload method for shared providers"
This reverts commit c427b78799.
* Point to RyanWinGPU
* Revert python version
* Fix id_to_allocator_map
* Another python exit test
* Remove extra debug messages
Try a cleaner python shutdown through DllMain
* Revert DllMain idea, it didn't work
* Merge conflicts
* Fix merge with master issues.
* Comments
* Undo edit to file
* Cleanup + new training ops
* Revert yml changes
* Fix another merge error
* ROCM fix
* ROCM fix v2
* Put back Linux hack, it is necessary
* Stupid fixes
* Fix submodule out of sync
* ROCM fix 3
* ROCM 4
* Test java fix
* Fix typos
* Java test on my VM
* Fix build error
* Spotless fix
* Leave temp file around to load properly
* Fix cleanup on exit
* Fix break
* Java comments
* Remove LongformerAttentionBase workaround
* Spotless fix
* Switch yml back to regular build pool
* Revert "Switch yml back to regular build pool"
This reverts commit be35fc2a5a.
* Code review feedback
* Fix errors due to merge
* Spotless fix
* Fix minimal build
* Java fix for non cuda case
* Java fix for CPU build
* Fix Nuphar?
* Fix nuphar 2
* Fix formatting
* Revert "Remove LongformerAttentionBase workaround"
This reverts commit 648679b370.
* Training fix
* Another java fix
* Formatting
* Formatting
* For orttraining
* Last orttraining build fix...
* training fixes
* Fix test provider error
* Missing pass command
* Removed in wrong spot
* Python typo
* Python typos
* Python crash on exit, possibly due to unloading of libraries.
* Remove test_execution_provider from training build
Only enable python atexit on windows
Remove assert on provider library exit
* Still can't unload providers in python, alas.
* Disable Nvtx temporarily
* MPI Kernels for Training
* MPI Kernels part 2
* Patch through INcclService
* Oops, wrong CMakeLists
* Missing namespace
* Fix missing ()
* Move INcclService::GetInstance around to link nicer
* Missing }
* Missing MPI libraries for Cuda
* Add extra GetType functions used by MPI
* Missing Nccl library
* Remove LOGS statements as a test
* Add in a couple more missing GetType methods
* Update comments
* Missed a logging reference in mpi_context.h
* Convert aten_op to shared (due to merge with master)
* Test moving DistributedRunContext instance into shared provider layer
(with an intentional error to verify it's being built properly)
* Test passed, now with fix
* Missing static
* Oops, scope DistributedRunContext to just NCCL
* Merge related issues and code review feedback.
* Merge error
* Bump to rel-1.9.1 (#7684)
* Formatting
* Code review feedback for Java build on non Windows
* Remove cupti library dependency from core library
* Test Java pipeline fix
* Linux build fix
* Revert "Linux build fix"
This reverts commit a73a811516.
* Revert "Remove cupti library dependency from core library"
This reverts commit 6a889ee8bf.
* Packaging pipeline fixes to copy cuda shared provider for tensorrt & standard packages
* Add cuda to Tensorrt nuget package
* onnxruntime_common still has a cuda header dependency
Co-authored-by: ashbhandare <ash.bhandare@gmail.com>