### Description
Added the cuDNN Frontend and used it for NHWC convolutions, optionally
fusing the activation.
#### Backward compatible
- Models that already contain FusedConv can still run.
- If ORT is built with cuDNN 8, the cuDNN frontend is not built into
the binary; the old kernels (using cuDNN backend APIs) are used.
#### Major Changes
- For cuDNN 9, we enable the cuDNN frontend to fuse convolution and
bias when the provider option `fuse_conv_bias=1` is set.
- Removed the FusedConv fusion from the graph transformer for the CUDA
provider, so FusedConv will no longer be added to the graph for the
CUDA EP.
- Updated the cmake files for the cuDNN settings. The build searches
for the cuDNN installation in the following order:
* environment variable `CUDNN_PATH`
* the `onnxruntime_CUDNN_HOME` cmake extra define. If a build starts
from build.py/build.sh, users can pass it through the `--cudnn_home`
parameter, or via the environment variable `CUDNN_HOME` if
`--cudnn_home` is not used.
* the cudnn Python package installation directory, like
python3.xx/site-packages/nvidia/cudnn
* CUDA installation path
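From Python, the options involved could be wired up roughly as follows. This is a sketch that only builds the provider configuration; the model path and the commented-out session creation assume a CUDA-enabled onnxruntime build:

```python
# Sketch: CUDA EP provider options enabling NHWC convolutions and the
# new conv+bias fusion. The option names come from this PR; the model
# path is a placeholder.
cuda_options = {
    "prefer_nhwc": "1",     # use NHWC convolution kernels
    "fuse_conv_bias": "1",  # let the cuDNN frontend fuse conv + bias
}
providers = [("CUDAExecutionProvider", cuda_options), "CPUExecutionProvider"]

# With onnxruntime installed, a session would be created like:
# import onnxruntime as ort
# sess = ort.InferenceSession("model.onnx", providers=providers)
```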
#### Potential Issues
- If ORT is built with cuDNN 8, the FusedConv fusion is no longer done
automatically, so some models might see a performance regression. Users
who still want the FusedConv operator for performance reasons have
several workarounds: use an older version of onnxruntime, or use an
older version of ORT to save the optimized ONNX model and then run it
with the latest version of ORT. We believe that the majority of users
will have moved to cuDNN 9 by the 1.20 release (cuDNN 9 will have been
the default in ORT and PyTorch for three months by then), so the impact
is small.
- The cuDNN graph uses TF32 by default, and users cannot disable TF32
through the `use_tf32` CUDA provider option. If a user encounters
accuracy issues (for example in testing), they have to set the
environment variable `NVIDIA_TF32_OVERRIDE=0` to disable TF32. The
documentation of `use_tf32` needs to be updated later.
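In Python that workaround looks like the sketch below; since the variable is typically read when the NVIDIA libraries initialize, it should be set before the first CUDA call of the process (or in the shell that launches it):

```python
import os

# NVIDIA_TF32_OVERRIDE=0 disables TF32 globally for NVIDIA libraries.
# Set it before any CUDA work happens in the process.
os.environ["NVIDIA_TF32_OVERRIDE"] = "0"
```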
#### Follow ups
This is one of the PRs that aim to enable NHWC convolution in the CUDA
EP by default when the device supports it. Other changes will follow to
make that possible:
(1) Enable `prefer_nhwc` by default for device with sm >= 70.
(2) Change `fuse_conv_bias=1` by default after more testing.
(3) Add other NHWC operators (like Resize or UpSample).
### Motivation and Context
The new CUDNN Frontend library provides the functionality to fuse
operations and provides new heuristics for kernel selection. Here it
fuses the convolution with the pointwise bias operation. On the [NVIDIA
ResNet50](https://pytorch.org/hub/nvidia_deeplearningexamples_resnet50/)
we get a performance boost from 49.1144 ms to 42.4643 ms per inference
on a 2560x1440 input (`onnxruntime_perf_test -e cuda -I -q -r 100 -d 1
-i 'prefer_nhwc|1' resnet50.onnx`).
---------
Co-authored-by: Tianlei Wu <tlwu@microsoft.com>
Co-authored-by: Maximilian Mueller <maximilianm@nvidia.com>
### Description
Before this change, copy_strip_binary.sh manually copied each file from
ONNX Runtime's build folder to an artifact folder, which was hard when
dealing with symbolic links for shared libraries.
This PR changes the packaging pipelines to run `make install` first,
before packaging the shared libraries.
### Motivation and Context
Recently, because of feature request #21281, we changed
libonnxruntime.so's SONAME. Now every package that contains this shared
library must also contain libonnxruntime.so.1, so we need to change the
packaging scripts to include this file. Instead of manually
constructing the symlink layout, using `make install` is much easier
and makes things more consistent because it is a standard way of making
packages.
**Breaking change:**
After this change, our **inference** tarballs published to our GitHub
release pages will no longer contain ORT **training** headers.
### Description
Enablement of onnxruntime for AIX and fixing issues related to
big-endian platform.
### Motivation and Context
This PR contains:
1. Enablement code for building onnxruntime on the AIX operating
system.
2. Fixes for big-endian issues found while testing the build on AIX.
More details about some of those issues can be found in [Big endian
issue: Graph Transformation Attention Fusion tests are failing
#12921](https://github.com/microsoft/onnxruntime/issues/12921).
Below is the list of changed files and a description of each change.
1. cmake/CMakeLists.txt
[BUILDING on AIX issue] check for "IBMClang" is added for handling
-Wno-unused-parameter
2. cmake/external/onnxruntime_external_deps.cmake
[BUILDING on AIX issue]Enabling gtest_disable_pthreads for AIX
3. cmake/onnxruntime.cmake
[BUILDING on AIX issue]
o Blocking code for AIX that generates generated_source.c and further
requires some symbol files.
o Guarding unsupported linker flags like --Xlinker behind a NOT-AIX
check.
o iconv linking.
4. cmake/onnxruntime_framework.cmake
[BUILDING on AIX issue] Guarding -Wl,-rpath='$ORIGIN' behind a NOT-AIX check
5. cmake/onnxruntime_mlas.cmake
[BUILDING on AIX issue] POWER10 related macro/function definitions.
6. cmake/onnxruntime_providers_cpu.cmake
[BUILDING on AIX issue] Guarding unsupported linker flags like
--Xlinker behind a NOT-AIX check
7. cmake/onnxruntime_unittests.cmake
[BUILDING on AIX issue]
o Guarding unsupported linker flags like --Xlinker behind a NOT-AIX
check.
o Adding required libraries for the AIX linker to applications like
onnxruntime_shared_lib_test, onnxruntime_logging_apis, etc.
8. cmake/patches/flatbuffers/flatbuffers.patch
[BUILDING on AIX issue] Handling of TypeCode in
include/flatbuffers/flatbuffers.h under AIX + clang
9. onnxruntime/contrib_ops/cpu/murmur_hash3.cc
[Big endian issue] Byte-conversion handling in the compute() and
getblock() routines
10. onnxruntime/contrib_ops/cpu/quantization/matmul_nbits_impl.cc
[Big endian issue] Handling of test failures; byte swapping for
quant_value.
11. onnxruntime/core/framework/tensorprotoutils.cc
[Big endian issue]
Implementation of SetRawDataInTensorProto and
ConvertRawDataInTensorProto.
o SetRawDataInTensorProto: wrapper for set_raw_data() that calls
ConvertRawDataInTensorProto() on big-endian systems
o ConvertRawDataInTensorProto: function used mainly on big-endian
systems for byte-swapping tensor raw_data
12. onnxruntime/core/framework/tensorprotoutils.h
[Big endian issue]
Declaration of SetRawDataInTensorProto, ConvertRawDataInTensorProto
13. onnxruntime/core/graph/graph.cc
[Big endian issue]
o Call ConvertRawDataInTensorProto for SPARSE_TENSOR type
o Call ConvertRawDataInTensorProto for SaveToOrtFormat
14. onnxruntime/core/mlas/lib/platform.cpp
[BUILDING on AIX issue] POWER10 related enablement for AIX
15. onnxruntime/core/mlas/lib/power/qgemm_kernel_power10.cpp
[BUILDING on AIX issue]Handling of __vector under AIX+clang
16. onnxruntime/core/mlas/lib/qgemm.h
[BUILDING on AIX issue] Adding _AIX flag
17. onnxruntime/core/mlas/lib/qlmul.cpp
[BUILDING on AIX issue] Handling of __vector under AIX+clang
18. onnxruntime/core/optimizer/attention_fusion.cc
[Big endian issue] Use util function SetRawDataInTensorProto, instead of
set_raw_data
19. onnxruntime/core/optimizer/compute_optimizer/shared_utils.cc
[Big endian issue] Use util function SetRawDataInTensorProto, instead of
set_raw_data
20. onnxruntime/core/optimizer/constant_folding.cc
[Big endian issue] Use util function SetRawDataInTensorProto, instead of
set_raw_data
21. onnxruntime/core/optimizer/embed_layer_norm_fusion.cc
[Big endian issue] Use util function SetRawDataInTensorProto, instead of
set_raw_data
22. onnxruntime/core/optimizer/nchwc_transformer.cc
[Big endian issue] Use util function SetRawDataInTensorProto, instead of
set_raw_data
23. onnxruntime/core/optimizer/qdq_transformer/avx2_weight_s8_to_u8.cc
[Big endian issue] Use util function SetRawDataInTensorProto, instead of
set_raw_data
24. onnxruntime/core/optimizer/qdq_transformer/qdq_s8_to_u8.cc
[Big endian issue] Use util function SetRawDataInTensorProto, instead of
set_raw_data
25. onnxruntime/core/optimizer/qdq_transformer/s8_to_u8.h
[Big endian issue] Use util function SetRawDataInTensorProto, instead of
set_raw_data
26.
onnxruntime/core/optimizer/qdq_transformer/selectors_actions/qdq_actions.cc
[Big endian issue] Use util function SetRawDataInTensorProto, instead of
set_raw_data
27. onnxruntime/core/optimizer/reshape_fusion.cc
[Big endian issue] Use util function SetRawDataInTensorProto, instead of
set_raw_data
28. onnxruntime/core/optimizer/stft_decomposition.cc
[Big endian issue] Use util function SetRawDataInTensorProto, instead of
set_raw_data
29.
onnxruntime/core/optimizer/transpose_optimization/ort_optimizer_api_impl.cc
[Big endian issue] Use util function SetRawDataInTensorProto, instead of
set_raw_data
30. onnxruntime/core/platform/path_lib.h
[BUILDING on AIX issue] Moving to normal function call, instead of
template
31. onnxruntime/core/platform/posix/env.cc
[BUILDING on AIX issue]Blocking syscall.h in AIX
32. onnxruntime/core/session/inference_session.cc
[Big endian issue] Removing ORT_RETURN_IF_NOT, FLATBUFFERS_LITTLEENDIAN
33. onnxruntime/test/flatbuffers/flatbuffer_utils_test.cc
[Big endian issue] Call ConvertRawDataInTensorProto in CreateInitializer
and ExternalWriteReadWithLoadInitializers
34. onnxruntime/test/framework/sparse_kernels_test.cc
[Big endian issue] Use util function SetRawDataInTensorProto, instead of
set_raw_data
35. onnxruntime/test/framework/tensorutils_test.cc
[Big endian issue] Added helper method ConvertEndianessForVector and
called it from the required places.
36. onnxruntime/test/framework/test_tensor_loader.cc
o. [BUILDING on AIX issue] Handling of getcwd for AIX
o. [Big endian issue] Byte swapping in run_external_data_test
37. onnxruntime/test/onnx/main.cc
[Big endian issue] including <thread> for AIX
38. onnxruntime/test/onnx/tensorprotoutils.cc
[Big endian issue] Byte swapping in UnpackTensorWithRawData
39. onnxruntime/test/optimizer/graph_transform_test.cc
[Big endian issue] Use util function SetRawDataInTensorProto, instead of
set_raw_data
40. onnxruntime/test/optimizer/graph_transform_test_builder.cc
[Big endian issue] Use util function SetRawDataInTensorProto, instead of
set_raw_data
41. onnxruntime/test/optimizer/graph_transform_test_builder.h
[Big endian issue] Use util function SetRawDataInTensorProto, instead of
set_raw_data
42. onnxruntime/test/optimizer/initializer_test.cc
[Big endian issue] Use util function SetRawDataInTensorProto, instead of
set_raw_data
43. onnxruntime/test/optimizer/nchwc_optimizer_test.cc
[Big endian issue] Use util function SetRawDataInTensorProto, instead of
set_raw_data
44. onnxruntime/test/providers/base_tester.cc
[Big endian issue] Use util function SetRawDataInTensorProto, instead of
set_raw_data
45. onnxruntime/test/providers/cpu/generator/random_test.cc
[BUILDING on AIX issue] Adding AIX check in MultinomialGoodCase
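Most of the big-endian fixes above reduce to the same operation: reversing the byte order of each element in a tensor's little-endian raw_data buffer. A minimal Python sketch of the idea behind ConvertRawDataInTensorProto (the Python function name here is illustrative):

```python
def convert_raw_data_endianness(raw: bytes, element_size: int) -> bytes:
    """Reverse the byte order of each element in a packed buffer.

    ONNX stores raw_data in little-endian order, so a big-endian host
    must swap every element before using it (and swap again before
    writing it back).
    """
    if len(raw) % element_size != 0:
        raise ValueError("buffer length is not a multiple of the element size")
    out = bytearray(len(raw))
    for i in range(0, len(raw), element_size):
        out[i:i + element_size] = raw[i:i + element_size][::-1]
    return bytes(out)

# Two little-endian float32 values: 1.0 and 2.0.
little = b"\x00\x00\x80\x3f\x00\x00\x00\x40"
big = convert_raw_data_endianness(little, 4)
```

Swapping twice restores the original buffer, which is why the same helper works for both reading and writing.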
---------
Co-authored-by: Vamshikrishna Thatikonda <vamshikrishna@in.ibm.com>
### Description
Introduce `Float16/BFloat16` support for the C# and C++ APIs.
Users should be able to convert `float` to/from `Float16/BFloat16`,
compare values, and test for `NaN`, `Infinity`, and whether the number
is denormalized.
### Motivation and Context
Users have filed issues such as:
https://github.com/microsoft/onnxruntime/issues/14303
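For illustration, the kinds of checks being added (conversion, NaN/Infinity/denormal tests) can be sketched in Python against the IEEE 754 binary16 bit layout (1 sign, 5 exponent, 10 mantissa bits); the function names below are hypothetical, not the actual C#/C++ API:

```python
import struct

def float_to_half_bits(x: float) -> int:
    """Round a Python float to IEEE 754 binary16 and return its 16 raw bits."""
    return struct.unpack("<H", struct.pack("<e", x))[0]

def half_is_nan(bits: int) -> bool:
    # Exponent all ones, mantissa non-zero.
    return (bits & 0x7C00) == 0x7C00 and (bits & 0x03FF) != 0

def half_is_infinity(bits: int) -> bool:
    # Exponent all ones, mantissa zero (sign bit ignored).
    return (bits & 0x7FFF) == 0x7C00

def half_is_denormal(bits: int) -> bool:
    # Exponent all zeros but mantissa non-zero: a subnormal value.
    return (bits & 0x7C00) == 0 and (bits & 0x03FF) != 0
```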
**Description**:
Adds support for cmake find_package.
**Motivation and Context**
As mentioned in issue #7150, onnxruntime doesn't have support for CMake
find_package. This PR adds that, and also adds the CMake package
version file. Now anyone can link onnxruntime like this:
```cmake
find_package(onnxruntime)
add_executable(test Source.cpp)
target_link_libraries(test PRIVATE onnxruntime::onnxruntime)
```
This also simplifies #3124.
### Description
Remove the "onnxruntime_BUILD_WEBASSEMBLY" cmake option. Use `if
(CMAKE_SYSTEM_NAME STREQUAL "Emscripten")` instead. This makes some
code look more natural.
For example,
```cmake
if (CMAKE_SYSTEM_NAME STREQUAL "iOS" OR CMAKE_SYSTEM_NAME STREQUAL "Android" OR onnxruntime_BUILD_WEBASSEMBLY)
```
becomes
```cmake
if (CMAKE_SYSTEM_NAME STREQUAL "iOS" OR CMAKE_SYSTEM_NAME STREQUAL "Android" OR CMAKE_SYSTEM_NAME STREQUAL "Emscripten")
```
### Description
Introduce collective ops into the onnxruntime inference build,
including:
1) AllReduce and AllGather schemas in contrib ops, controlled by the
USE_MPI flag
2) AllReduce and AllGather kernels in the CUDA EP, controlled by the
ORT_USE_NCCL flag
### Motivation and Context
Enable the collective ops in the onnxruntime inference build so we have
the ability to run distributed inference with multiple GPUs.
The original ncclAllReduce ops in the training build require quite
complex configuration, which is not suitable for the inference case,
and they are already broken, so we introduce a new implementation.
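For reference, the semantics of the two ops can be sketched as a single-process simulation over a list of per-rank tensors (the real kernels communicate via NCCL across GPUs):

```python
def all_reduce(per_rank):
    """Every rank ends up with the elementwise sum over all ranks."""
    summed = [sum(vals) for vals in zip(*per_rank)]
    return [list(summed) for _ in per_rank]

def all_gather(per_rank):
    """Every rank ends up with the concatenation of all ranks' tensors."""
    gathered = [x for rank in per_rank for x in rank]
    return [list(gathered) for _ in per_rank]

ranks = [[1.0, 2.0], [3.0, 4.0]]  # two ranks, one 2-element tensor each
```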
---------
Co-authored-by: Cheng Tang <chenta@microsoft.com@orttrainingdev9.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
Use JSON format to save and load the partition config. Previously it
was CSV, which caused issues between Windows and POSIX due to different
line breaks.
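The benefit is that JSON serialization has no newline-sensitive framing, so the same file parses identically on both platforms. A quick sketch with a hypothetical partition config (field names are illustrative):

```python
import json

# Hypothetical partition config; the field names are illustrative.
config = {"partitions": [{"name": "p0", "nodes": ["Conv_0", "Relu_1"]},
                         {"name": "p1", "nodes": ["MatMul_2"]}]}

text = json.dumps(config)    # no newline-sensitive framing
restored = json.loads(text)  # identical result on Windows and POSIX
assert restored == config
```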
Co-authored-by: Randy Shuai <rashuai@microsoft.com>
Implement CloudEP for hybrid inferencing.
The PR introduces zero new APIs; customers can configure session and
run options to do inferencing with an Azure [Triton
endpoint](https://learn.microsoft.com/en-us/azure/machine-learning/how-to-deploy-with-triton?tabs=azure-cli%2Cendpoint).
A sample configuration in Python looks like:
```
sess_opt.add_session_config_entry('cloud.endpoint_type', 'triton')
sess_opt.add_session_config_entry('cloud.uri', 'https://cloud.com')
sess_opt.add_session_config_entry('cloud.model_name', 'detection2')
sess_opt.add_session_config_entry('cloud.model_version', '7')  # optional, default 1
sess_opt.add_session_config_entry('cloud.verbose', '1')  # optional, default '0', meaning no verbose
...
run_opt.add_run_config_entry('use_cloud', '1')  # 0 for local inferencing, 1 for cloud endpoint
run_opt.add_run_config_entry('cloud.auth_key', '...')
...
sess.run(None, {'input':input_}, run_opt)
```
Co-authored-by: Randy Shuai <rashuai@microsoft.com>
### Description
Use the target name for flatbuffers.
Add a version range for flatbuffers, similar to #13870.
### Motivation and Context
To fix a build error:
```
CMake Error at onnxruntime_graph.cmake:88 (add_dependencies):
The dependency target "flatbuffers" of target "onnxruntime_graph" does not
exist.
Call Stack (most recent call first):
CMakeLists.txt:1490 (include)
```
It happens when the flatbuffers library is already installed; for
example, on Ubuntu people may get it from apt-get. But the one provided
by Ubuntu 20.04 is not compatible with our code; the one in Ubuntu
22.04 works fine.
### Description
Fix usage of enable_training_ops and reduce ifdef complexity for
training builds.
### Motivation and Context
This is the second refactoring PR towards creating a dedicated build
for on-device training. This PR aims to reduce some complexity: we can
set ENABLE_TRAINING_OPS in cmake when either ENABLE_TRAINING or
ENABLE_TRAINING_ON_DEVICE is selected, so we don't have to use
`if defined(ENABLE_TRAINING) || defined(ENABLE_TRAINING_ON_DEVICE)`
everywhere in the code.
## Description
1. Convert some git submodules to cmake external projects
2. Update nsync from
[1.23.0](https://github.com/google/nsync/releases/tag/1.23.0) to
[1.25.0](https://github.com/google/nsync/releases/tag/1.25.0)
3. Update re2 from 2021-06-01 to 2022-06-01
4. Update wil from an old commit to 1.0.220914.1 tag
5. Update gtest to a newer commit so that it can optionally leverage
absl/re2 for parsing command line flags.
The following git submodules are deleted:
1. FP16
2. safeint
3. XNNPACK
4. cxxopts
5. dlpack
6. flatbuffers
7. googlebenchmark
8. json
9. mimalloc
10. mp11
11. pthreadpool
More will come.
## Motivation and Context
There are three ways of integrating 3rd-party C/C++ libraries into ONNX
Runtime:
1. Install them to a system location, then use cmake's find_package
module to locate them.
2. Use git submodules.
3. Use cmake's external projects (externalproject_add).
At first when this project was just started, we considered both option 2
and option 3. We preferred option 2 because:
1. It's easier to handle authentication. At first this project was not
open source, and it had some other non-public dependencies. If we use
git submodule, ADO will handle authentication smoothly. Otherwise we
need to manually pass tokens around and be very careful on not exposing
them in build logs.
2. At that time, cmake fetched dependencies after "cmake" finished
generating vcprojects/makefiles. So it was very difficult to make cflags
consistent. Since cmake 3.11, it has a new command: FetchContent, which
fetches dependencies when it generates vcprojects/makefiles just before
add_subdirectories, so the parent project's variables/settings can be
easily passed to the child projects.
And when the project went on, we had some new concerns:
1. As we started to have more and more EPs and build configs, the
number of submodules grew quickly. For most developers, most ORT
submodules are not relevant; they shouldn't need to download all of
them.
2. It is impossible to let two different build configs use two different
versions of the same dependency. For example, right now we have protobuf
3.18.3 in the submodules. Then every EP must use the same version.
Whenever we have a need to upgrade protobuf, we need to coordinate
across the whole team and many external developers. I can't manage it
anymore.
3. Some projects want to manage the dependencies in a different way,
either because of their preference or because of compliance
requirements. For example, some Microsoft teams want to use vcpkg, but
we don't want to force every user of onnxruntime to use vcpkg.
4. Someone may want to dynamically link to protobuf, but our build
script only does static linking.
5. It is hard to handle security vulnerabilities. For example, whenever
protobuf has a security patch, we have a lot of things to do. But if we
allowed people to build ORT with a different version of protobuf
without changing ORT's source code, customers who build ORT from source
would be able to act on such things more quickly; they would not need
to wait for ORT to publish a patch release.
6. Every time we do a release, GitHub also publishes a source zip file
and a source tarball for us. But they are not usable, because they are
missing the submodules.
### New features
After this change, users will be able to:
1. Build the dependencies the way they want, then install them
somewhere (for example, /usr or a temp folder).
2. Or download the dependencies using cmake commands from the
dependencies' official websites.
3. Similar to the above, but using private mirrors to mitigate supply
chain risks.
4. Use different versions of the dependencies, as long as our source
code is compatible with them. (For example, you can't use protobuf
3.20.x yet, as it needs code changes in ONNX Runtime.)
5. Only download the things the current build needs.
6. Avoid building external dependencies again and again in every build.
### Breaking change
The onnxruntime_PREFER_SYSTEM_LIB build option is removed; you can
think of it as now defaulting to ON. If you don't like the new
behavior, you can set FETCHCONTENT_TRY_FIND_PACKAGE_MODE to NEVER.
Besides, for those who relied on the onnxruntime_PREFER_SYSTEM_LIB
build option, please be aware that this PR changes the find_package
calls from Module mode to Config mode. For example, in the past, if you
had installed protobuf via apt-get from Ubuntu 20.04's official repo,
find_package could find and use it. After this PR it won't, because the
protobuf version provided by Ubuntu 20.04 is too old to support Config
mode. This can be resolved by getting a newer version of protobuf from
elsewhere.
* aten op for inference
* fix build error
* move some code to training only
* remove domain from operator name
* move aten_op_executor ext out from ortmodule
* add pipeline
* add exec mode
* fix script
* fix ut script
* fix test pipeline
* failure test
* rollback
* bugfix
* resolve comments
* enable aten for python build only
* fix win build
* use target_compile_definitions
* support io binding
* turn off aten by default
* fix ut
Co-authored-by: Vincent Wang <weicwang@microsoft.com>
Co-authored-by: zhijxu <zhijxu@microsoft.com>
Add abseil and inlined containers typedefs
Introduce TensorShapeVector for shape building.
Use gsl::span<const T> to make interfaces accept different types of vector like args.
Introduce InlinedShapeVectorT for shape capacity typed instantiations
Refactor cuda slice along with provider shared interfaces
Refactor Concat, Conv, Pad
Build with Conv Einsum and ConvTranspose refactored.
Remove TensorShape::GetDimsAsVector()
Refactor SliceIterator and SliceIteratorBase
Refactor broadcast
Refactor Pads for twice as long
Remove memory planner intermediate shapes vector
Refactor orttraining
Fix passing TensorShapeVector to tests
Remove abseil copy and submodule, use FetchContent_Declare/Fetch
Path with separate command
Make RocmAsyncBuffer accept anything convertible to span. Adjust Linux GPU pipeline.
* adding support for tracing to sqldb instead of files
* use compiled statements
* script to pull tensors from db
* link sqlite3
* remove node info redundant with onnx graph
* addressing PR comments
* address PR comments and include program counter
* third party notice
* use find_package
* add to cgmanifests.json
* address thread safety and add pid suffix
* build fix
* python script to select on devicetype
* remove unpopulated and redundant Shape and Type fields
* comment
* comment
* PR comments
* add graph execution counter to session state
* move increment to inference session
* std::endl to \n
* ifdef on graph execution counter
* add ifdef to inference session
* move DEBUG_NODE_INPUTS_OUTPUTS to CMakeLists.txt
* atenop for inference
* assert if dtype mismatch
* atenop config in frontend
* fix orttrainer test
* gradient def not only for ATenOp
* bugfix
* fix gradient input shape and type issue
* fix after merge master
Switched the code to C++17. To build ONNX Runtime on old distros like CentOS 7, you need to install a newer GCC from additional repos. If you build onnxruntime with the newer GCC, typically the resulting binary can't be distributed to other places because it depends on the new GCC's runtime libraries, which the stock OS doesn't have. But on RHEL/CentOS it can be better: we use Red Hat devtoolset 8/9/10 with CentOS 7 to build our code. The new library features (like std::filesystem) that do not exist in the old C++ runtime are statically linked into the applications, with some restrictions:
1. GCC has a dual ABI, but we can only use the old one. This means std::string is still copy-on-write and std::list::size() is still O(n). Also, if you build onnxruntime on CentOS 7, link it with binaries that were built on CentOS 8 or Ubuntu with the new ABI, and export C++ symbols directly (instead of using a C API), then it won't work.
2. We still can't use std::optional. This is a limitation coming from macOS; we will solve it when we get macOS 11 build machines. It won't be too long.
3. Please avoid using C++17 in CUDA files (*.cu), and also in the *.h files they include (like core/framework/float16.h), because CUDA 10.2 doesn't support C++17. You are welcome to use the new features in any *.cc files.
* clean up builds for interop_torch
* add python dependency for executables
* disable onnxruntime_ENABLE_TRAINING_TORCH_INTEROP by default; enable it in ortmodule GPU training pipeline only
* disable training unrelated tests when torch interop is enabled
* simplify the python dependency.
* clean up and fix
* Register Torch Custom autograd.Function
* Add flag to suppress pybind11 warning
* Avoid unnecessary include in cmake
* Add missing reference
* Add getter for registered functions
* Format for making subsequent changes cleaner
* Fix interop feature build failure
* Forward pass, run PyOP on CPU EP
* clean up the code
* Fix build
* Define new ops
* refactor pyop - extract PyOpLibProxy class
* Hacks to run example
* implement the kernel compute func
* add back PyOP for comparison experiments
* debug info - thread id
* refine the kernels
* Polish code
(cherry picked from commit 4ed606f9a0)
* Fix the Tensor address mismatch on the C++ side
* PythonOpGrad compute
* add distributed test case
* refine test cases
* get dist.get_rank() in Autograd forward pass
* Add CUDA kernels
* Store float, int, and tuple of them as PythonOp's attributes
* Populate local changes
* Fix bugs
* PythonOp/PythonOpGrad CUDA kernels
* Support non-tensor inputs
* Single GPU FP16 Run Pass
(cherry picked from commit e539989e91e18ee997900292d3493b97d3eafa8a)
* Fix segfault
* add basic test cases
* Save progress
* fix gradient builder for an Add op that has identical inputs
* add test cases for auto grad fallback feature
* fix ref cnt issue. add thread id for debugging
* POC: remove interface class
* Remove interface classes
* Clean a bit
* Coarse-grained clean up after rebase master
* reset pyop and language_interop_ops to latest master
* Fix missing part during merge
* re-structure torch related language interop files
* Fix build
* Fix tests and build
* Fix build and basic unit tests
* Fix most of uts
* remove unnecessary import
* clean up and fix build when enabling language_interop_ops
* Fix single-GPU UTs
* Move runner register into ORT package
* Update dist UTs to new style
* Also fix distributed UTs and leaf gradient problem
* Static generation for constant args
* Move arg_positions_ to static field
* Rename some functions
* Move arg creation into a function
* Clean output logic in PythonOp
* Move PythonOp's ctor
* Revise PythonOpGrad
* Fix "ORT only supports contiguous tensor for now" for inputs
* Fix evaluation mode error, add test & clean up
* clean up codes
* Fix issues introduced by recent master change (enabled symbolic shape infer)
* automatically register forward/backward function pointers && clean up
* Fix multi-output case
* Add a test back
* fix build and clean up
* RAII for function params PyObject
* Use new exporter
* Clean full name in new exporter
* Fix UTs
* Format a file
* Add "inplace" back
Remove a legacy comment
* Refine TorchProxy
1. Make TorchProxy a formal singleton class.
2. Remove unused Scope class.
3. Simplify the call to Forward and Backward. The two functions now
automatically acquire and release GIL state, so user doesn't need
any GIL-related calls.
* Format
* Add lock to avoid race condition when registering Python objs
* Fix Python call param ref issues && Add RefcountTracker for debug build && Clean up
* clean up print
* Resolve part of comments && clean up
* Fix a potential bug
* track pyobject consistently
* move kernels to cpu provider as base class
* Refactor - 1. Extract PythonOpBase/PythonOpGradBase 2. Implement CPU kernels 3. Test coverage for CPU kernels
* Refine register code
* Add a missing macro
* Release python call result objects with PythonObjectPtr && Add UnRegisterContext && Track PyObject for Debugging && Clean up
* Fix random segfault issue - releasing a wrong ctx pointer for inplace cases
* put ref count in debug macro
* Move GIL out
* Refine tests
* Fix memory leak issue && forward output lifecycle issue:
1. Unregister the OrtValue PythonObject. Currently the OrtValue shares the same buffer with PythonOp/PythonOpGrad's outputs, so after those kernels' outputs are released, the "leaked" OrtValue prevented the shared buffer from being released.
2. Following PyTorch's forward+backward execution, the forward outputs (e.g. torch tensors) maintain the context/saved variables/dirty inputs, etc., which are used for backward execution, so their lifetime should extend past the backward run. This change adds such a dependency of PythonOpGrad on PythonOp.
* Move dlpack->ortvalue into C++ to avoid temp object registration
* Fix the over released Py_False/Py_True && refine tests
* Clean up unused functions
* Always assume the first forward output is context so we don't need to test unused cases.
* Fix a memory leak
* move-copy unique_ptr & avoid C-style casting
* Use inplace attribute to determine if input tensors are copied
* Move DlpackCapsuleDestructor's to a common place
* Thread-safe TorchProxy
* Use OrtValue instead of OrtValue*
* Only keep checks for Debug build
* Wrap some long line per comment
* onnx_export_type --> kwargs
* Use requires_grads to create PythonOpGrad's inputs
* add missing files during master merge
* Fix build issue after merge
* Address two comments.
1. Internalize DlpackCapsuleDestructor
2. Change "(" to "]" for describing closed interval.
* Address some comments.
1. "override" -> "overwrite" to avoid using reserved keyword.
2. Call DLPack's helper to create OrtValue for avoiding repeated code.
* Address comments.
1. Pass std::mutex to registration helpers so their callers don't
have to lock the mutex explicitly.
2. Rename "func_context_pool_mutex_" to "mutex_". This mutex is the global mutex for OrtTorchFunctionPool.
* Add bridging code to make cuda kernels work with merged master
* put debug macro check within RefCountTracker && use default logger for debug info && remove useless ortvalue_ptr interface && fix typos && revert unnecessary blank line changes
* fix some comments
* Resolve more comments
* Capitalize a word
* use unique_ptr instead of ObjectPointer for PyObject management && add convention
* Support symbolic shape
* Remove unused variable
* fix build
* Enable function registration for training only && rectify ToDlpack/FromDlpack merge with master.
* Don't add context for non-PythonOp operators (for example AtenOp)
* Fix build error
* Polish frontend part.
1. Avoid adding kwargs to ORTModule's ctor
2. Use onnx_export_type rather than kwargs for type safety
3. Fix some build bugs.
* Resolve simpler comments
* Resolve export related comments
* sync master && fix tests && fix non-training build error
* Fix build errors
* add target link lib
* windows build error
* Fix orttraining-linux-ci build
* disable autograd test && clean up
* fix linux orttraining ci build
* try fixing win build error
* Revise append calls in runner
* Enable custom function using a function
* Rename to avoid using reserved keyword
* Use list comprehension
* Set ORT random seed in tests
* Remove print code and fix ctx shape
* [] -> list()
* Move autograd.Function and nn.Module into corresponding functions
* Move test helpers
* Polish dist test a bit. Tried move helpers to helper file but it causes a deadlock.
* trying fix undefined reference
* Context is not managed by global pool
* Polish dist test
* Polish dist test
* Add enable_custom_autograd_function
* Remove enable_custom_autograd_function from ctors
* Add doc strings
* Shorter code
* Address comments
* Add one empty line
* revert a minor and not needed change
* Address comments
* Back to reference
* Fix windows builds
* Fix windows debug build fail to find "'python39_d.lib'"
* fix mac build error
* revert _to_contiguous change
* add debugging tag for orttraining-cpu-ci
* Fix the wrong PYTHON_LIBRARIES which is affected by PYTHON_LIBRARY given in build command
* add debugging info
* Fix the build in this case: PYTHON_LIBDIR: /opt/_internal/cpython-3.7.10/lib, PYTHON_EXECUTABLE: /opt/python/cp37-cp37m/bin/python3, PYTHON_MULTIARCH: x86_64-linux-gnu
PYTHON_LIBRARY_PATH python3.7m
* fix build error due to python lib not found
* Fixes
1. Release PyObject's
2. Not using deepcopy because we assume autograd.Function's
non-tensor inputs are static (constants), so there should
be no side effect from calling any autograd.Function
multiple times.
* Revert dtoc for decreasing refcnt
* add debugging log
* add debugging tag
* Fix a small leak
* Remove ONNX_FALLTHROUGH flag
* debug tag
* debug tag
* fix builds
* remove debug tag
* fix build
* fix builds
* fix build
* install python3 in centos, in case there is no libpython3.xm.so
* build python so for redhat
* add training cpu specific docker, build python so inside
* revert build-cpython change
* try fixing numpy include issue
* install_deps after re-installing cpython
* fix build && remove debug tag
* install openssl before cpython
* let's say: builds pass!
* add build flag for torch interop; only enable it when training+Python is enabled
* skip ComputeBroadcastBackwardAxesDynamic for the shared inputs
* fix build
* add debug info for padgrad test
* Fix builds
* Split dlpack_converter into C++ and Python interfaces respectively. Different builds then use them as needed.
* clean up the changes
* fix addsubgradient builder
* Fix builds
* clean up
* clean up
* Address some comments.
1. Use a pointer wrapper to avoid calling Py_DECREF
2. Remove unregister_* functions
3. Allow repeated registration by skipping those with existing keys
4. Unregister context in PythonOpGrad
* Fix over-released Py_Boolean
Co-authored-by: Wei-Sheng Chin <wschin@outlook.com>
1. Update manylinux build scripts. This adds [PEP600](https://www.python.org/dev/peps/pep-0600/) (manylinux2 tags) support. numpy has adopted this new feature; we should do the same. The old build script files were copied from https://github.com/pypa/manylinux, but they have been deleted and replaced in the upstream repo. The manylinux repo doesn't have a manylinux2014 branch anymore, so I'm removing the obsolete code and syncing the files with the latest master.
2. Update GPU CUDA version from 11.0 to 11.1 (after a discussion with PMs).
3. Delete tools/ci_build/github/linux/docker/Dockerfile.manylinux2014_cuda10_2. (Merged the content to tools/ci_build/github/linux/docker/Dockerfile.manylinux2014_cuda11)
4. Modernize the cmake code of how to locate python devel files. It was suggested in https://github.com/onnx/onnx/pull/1631 .
5. Remove `onnxruntime_MSVC_STATIC_RUNTIME` and `onnxruntime_GCC_STATIC_CPP_RUNTIME` build options. Now cmake has builtin support for it. Starting from cmake 3.15, we can use `CMAKE_MSVC_RUNTIME_LIBRARY` cmake variable to choose which MSVC runtime library we want to use.
6. Update Ubuntu docker images that used in our CI build from Ubuntu 18.04 to Ubuntu 20.04.
7. Update GCC version in CUDA 11.1 pipelines from 8.x to 9.3.1
8. Split Linux GPU CI pipeline to two jobs: build the code on a CPU machine then run the tests on another GPU machines. In the past we didn't test our python packages. We only tested the pre-packed files. So we didn't catch the rpath issue in CI build.
9. Add a CentOS machine pool and test our Linux GPU build on real CentOS machines.
10. Rework the ARM64 Linux GPU python packaging pipeline. Previously it used cross-compiling, so we had to statically link to the C Runtime. But now we have a pluggable EP API, and it doesn't support static linking. So I changed it to use qemu emulation instead. Now the build is 10x slower than before, but it is more extensible.
* First iteration of making cuda a shared provider.
Separated out the shared OpKernel change, so doing this to merge with that change.
* More cuda shared library refactoring
* More cuda shared library refactoring
* More build options tested, converted the training ops over.
* Fix merge breaks
* Fix submodules
* Fix submodules
* Fix submodules
* Fix python
* Fix compile errors
* Duplicate symbol fix
* Test fix for ROCM provider
* Another ROCM test workaround
* ROCM Build Test
* ROCM build fix
* ROCM
* ROCM
* ROCM
* ROCM
* ROCM
* ROCM test
* Reduce header dependencies
* Remove redundant namespace
* Test fix for linux
* Fix linux build
* Fix Eigen build error
* Fix unused parameter warning
* Test link error
* Another linker test
* Linker test
* Linker test
* Another test
* Another build test
* Fix linux link error
* Build test
* Fix control flow ops to use common base class with core code
* Remove extra qualifiers
* Fix template syntax for linux
* Fix cuda memory leak
* Fix pybind
* Test disabling cast
* Cleanup
* Restore cuda in test
* Remove more header dependencies
* Test not adding cuda provider to session
* Make GetProviderInfo_CUDA throw
* No-op cuda provider creation
* Fix some setup issues
* Fix memory cleanup on unload
* Diagnostics
* Don't unload library
* Add diagnostics
* Fix deleting registry at right time.
* Test disabling profiler
* Fix merge break
* Revert profiler change
* Move unloading of shared providers into Environment
* Free more global allocations before library unloads
* Add more diagnostics
* Move unloading back to the OrtEnv as there are multiple Environments created during a session.
Remove some library dependencies for tests.
* Fix more cmake files
* ERROR -> WARNING
* Fix python shutdown
* Test not using dml in pipeline
* Change python version and disable dml
* Update python version
* Test adding unload method for shared providers
* Disable DLL test
* Python test
* Revert "Python test"
This reverts commit c7ec2cfe98.
* Revert "Disable DLL test"
This reverts commit e901cb93aa.
* Revert "Test adding unload method for shared providers"
This reverts commit c427b78799.
* Point to RyanWinGPU
* Revert python version
* Fix id_to_allocator_map
* Another python exit test
* Remove extra debug messages
Try a cleaner python shutdown through DllMain
* Revert DllMain idea, it didn't work
* Merge conflicts
* Fix merge with master issues.
* Comments
* Undo edit to file
* Cleanup + new training ops
* Revert yml changes
* Fix another merge error
* ROCM fix
* ROCM fix v2
* Put back Linux hack, it is necessary
* Stupid fixes
* Fix submodule out of sync
* ROCM fix 3
* ROCM 4
* Test java fix
* Fix typos
* Java test on my VM
* Fix build error
* Spotless fix
* Leave temp file around to load properly
* Fix cleanup on exit
* Fix break
* Java comments
* Remove LongformerAttentionBase workaround
* Spotless fix
* Switch yml back to regular build pool
* Revert "Switch yml back to regular build pool"
This reverts commit be35fc2a5a.
* Code review feedback
* Fix errors due to merge
* Spotless fix
* Fix minimal build
* Java fix for non cuda case
* Java fix for CPU build
* Fix Nuphar?
* Fix nuphar 2
* Fix formatting
* Revert "Remove LongformerAttentionBase workaround"
This reverts commit 648679b370.
* Training fix
* Another java fix
* Formatting
* Formatting
* For orttraining
* Last orttraining build fix...
* training fixes
* Fix test provider error
* Missing pass command
* Removed in wrong spot
* Python typo
* Python typos
* Python crash on exit, possibly due to unloading of libraries.
* Remove test_execution_provider from training build
Only enable python atexit on windows
Remove assert on provider library exit
* Still can't unload providers in python, alas.
* Disable Nvtx temporarily
* MPI Kernels for Training
* MPI Kernels part 2
* Patch through INcclService
* Oops, wrong CMakeLists
* Missing namespace
* Fix missing ()
* Move INcclService::GetInstance around to link nicer
* Missing }
* Missing MPI libraries for Cuda
* Add extra GetType functions used by MPI
* Missing Nccl library
* Remove LOGS statements as a test
* Add in a couple more missing GetType methods
* Update comments
* Missed a logging reference in mpi_context.h
* Convert aten_op to shared (due to merge with master)
* Test moving DistributedRunContext instance into shared provider layer
(with purpose error to verify it's being built properly)
* Test passed, now with fix
* Missing static
* Oops, scope DistributedRunContext to just NCCL
* Merge related issues and code review feedback.
* Merge error
* Bump to rel-1.9.1 (#7684)
* Formatting
* Code review feedback for Java build on non Windows
* Remove cupti library dependency from core library
* Test Java pipeline fix
* Linux build fix
* Revert "Linux build fix"
This reverts commit a73a811516.
* Revert "Remove cupti library dependency from core library"
This reverts commit 6a889ee8bf.
* Packaging pipeline fixes to copy cuda shared provider for tensorrt & standard packages
* Add cuda to Tensorrt nuget package
* onnxruntime_common still has a cuda header dependency
Co-authored-by: ashbhandare <ash.bhandare@gmail.com>
* Simplified version of WebAssembly support to keep most of existing data structures and add cmake using Ninja and emcmake
* Clean up CMakeLists.txt and add an example to create and compute a kernel
* Load a model from bytes and remove graph building steps
* Add all cpu and contrib ops with mlas library
* WebAssembly build with Onnxruntime C/CXX API
* Use protobuf cmakefile directory instead of adding every necessary source file
* Fix invalid output at example
* add missing files
* Change an example to use Teams model and support ort mobile format
* add API for javascript
* fix input releasing in _ort_run()
* update API
* Let onnxruntime cmake build WebAssembly with option '--wasm'
* allow one-step building for wasm
* Make build script working on Linux and MacOS
* Fix broken build from Windows command
* Enable unit test on building WebAssembly
* Resolve comments
* update build flags
* wasm conv improvement from: 1) GemmV; 2) Depthwise direct convolution 3x3; 3) Direct convolution 3x3
* Cleaned mlas unittest.
* use glob
* update comments
* Update baseline due to loss scale fix (#6948)
* fix stream sync issue (#6954)
* Enable type reduction in EyeLike, Mod, random.cc CPU kernels. (#6960)
* Update EyeLike CPU kernel.
* Update Mod CPU kernel.
* Update Multinomial CPU kernel.
* Slight improvement to Pad CPU kernel binary size.
* Update RandomNormal[Like], RandomUniform[Like] CPU kernels.
* Fix warning from setting multiple MSVC warning level options. (#6917)
Fix warning from setting multiple MSVC warning level options. Replace an existing /Wn flag instead of always appending a new one.
* MLAS: quantized GEMM update (#6916)
Various updates to the int8_t GEMMs:
1) Add ARM64 udot kernel to take advantage of dot product instructions available in newer cores. Some models run 4x faster than the stock implementation we used before.
2) Refactor the x64 kernels to share common code for AVX2(u8u8/u8s8/avxvnni) vs AVX512(u8u8/u8s8/avx512vnni) to reduce binary size.
3) Extend kernels to support per-column zero points for matrix B. This is not currently wired to an operator.
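Point 3 above (per-column zero points for matrix B) can be illustrated with a plain-Python sketch. This is only a conceptual reference implementation of the quantized GEMM math, not the vectorized MLAS kernel; the function name and signature are hypothetical:

```python
def qgemm_per_column_zp(A, B, a_zp, b_zps):
    """Reference int8 GEMM with a scalar zero point for A and a
    per-column zero point for B:
        C[m][n] = sum_k (A[m][k] - a_zp) * (B[k][n] - b_zps[n])
    """
    M, K = len(A), len(A[0])
    N = len(B[0])
    C = [[0] * N for _ in range(M)]
    for m in range(M):
        for n in range(N):
            acc = 0
            for k in range(K):
                # Subtract the per-column zero point for B before accumulating.
                acc += (A[m][k] - a_zp) * (B[k][n] - b_zps[n])
            C[m][n] = acc
    return C
```

The real kernels fold the zero-point corrections into row/column sums rather than subtracting per element, but the result is the same.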
* Implement QLinearAveragePool with unit tests. (#6896)
Implement QLinearAveragePool with unit tests.
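Conceptually, a quantized average pool dequantizes the window, averages, then requantizes to the output scale/zero point. A minimal 1-D sketch (hypothetical helper for illustration, not the actual ORT kernel, which also handles rounding modes and multi-dimensional pooling):

```python
def qlinear_avgpool_1d(x_q, x_scale, x_zp, y_scale, y_zp, kernel):
    """1-D quantized average pool: dequantize, average, requantize,
    then clamp to the uint8 range."""
    out = []
    for i in range(len(x_q) - kernel + 1):
        # Dequantize the window and take the mean.
        avg = sum((q - x_zp) * x_scale for q in x_q[i:i + kernel]) / kernel
        # Requantize with the output scale and zero point.
        q = int(round(avg / y_scale)) + y_zp
        out.append(max(0, min(255, q)))
    return out
```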
* Attention fusion detect num_heads and hidden_size automatically (#6920)
* fixed type to experimental session constructor (#6950)
* fixed type to experimental session constructor
Co-authored-by: David Medine <david.medine@brainproducts.com>
* Update onnxruntime_perf_test.exe to accept free dimension overrides (#6962)
Co-authored-by: Ori Levari <orlevari@microsoft.com>
* Fix possible fd leak in NNAPI (#6966)
* Release buffers for prepacked tensors (#6820)
Unsolved problems:
1. One test failure was caused by a bug in Cudnn rnn kernels: they can allocate a buffer and partially initialize it, and the garbage data near the tail of the buffer caused problems on some hardware. To attack this problem in a broader sense, should we add code to our allocators so that, during a memory fuzzing test, an allocated buffer is filled with garbage before being returned to the caller?
2. Prepacking is used more widely than we know. For instance, Cudnn rnn kernels also cache their weights. They mix several weight tensors together into a single buffer and never touch the original weight tensors anymore. This is the same idea as pre-pack, but they didn't override the virtual function, and they never tried to release those weight tensors, leading to memory waste. It also seems to me that some other kernels have similar behavior. I wonder how much memory we could save if we tried to clean those up too.
3. Turning off memory pattern planning does increase memory fragmentation, leading to out-of-memory errors in some training test cases. Perhaps we can revisit the idea of pushing the kernel-creation stage earlier, and then during initializer deserialization, only avoid tracing those that will be prepacked.
* Enable type reduction for Range, ReverseSequence, ScatterND, Split, and Unique CPU kernels. (#6963)
* add CI
* fix test in ci
* fix flags for nsync in wasm build
* add copyright banner
* fix wasm source glob
* add missing exports
* resolve comments
* Perf gain by making packb width 4 instead of 16 in GEMM for WASM.
Remove the no-longer-needed direct conv from the previous perf tuning.
* fix buildbreak introduced from latest master merge
* fix buildbreak in mlasi.h
* resolve all comments except MLAS
* rewrite the 3 packb-related functions for WASM_SCALAR separately rather than using #ifdef in each,
and other changes according to PR feedback in mlas.
* More complete scalar path in sgemm from Tracy.
* Fix edge case handling in the depthwise conv2d 3x3 kernel, where:
*) support input W==1 and H==1
*) recalculate accurate pad_right and pad_bottom
*) support hidden pad_right == 2 or pad_bottom == 2 when W == 1 or H == 1 and there is no left/top padding
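For reference, the padded 3x3 depthwise convolution being fixed can be sketched per channel as follows. This is an illustrative scalar version (hypothetical function, not the optimized WASM kernel): with zero padding of 1 it also covers the W==1 and H==1 edge cases by bounds-checking instead of relying on precomputed pad_right/pad_bottom:

```python
def depthwise_conv3x3(x, w, pad=1):
    """Single-channel 3x3 'same' convolution with zero padding.
    x: H x W input, w: 3x3 filter. Out-of-range taps contribute zero."""
    H, W = len(x), len(x[0])
    out = []
    for i in range(H):
        row = []
        for j in range(W):
            acc = 0
            for di in range(3):
                for dj in range(3):
                    ii, jj = i + di - pad, j + dj - pad
                    # Bounds check stands in for explicit zero padding,
                    # so degenerate H==1 or W==1 inputs work too.
                    if 0 <= ii < H and 0 <= jj < W:
                        acc += x[ii][jj] * w[di][dj]
            row.append(acc)
        out.append(row)
    return out
```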
* Add more test coverage for conv depthwise from Tracy.
Fix one typo according to PR.
* resolve comments
* replace typedef by using
* do not use throw in OrtRun()
* output error message
Co-authored-by: Sunghoon <35605090+hanbitmyths@users.noreply.github.com>
Co-authored-by: Lei Zhang <zhang.huanning@hotmail.com>
Co-authored-by: Wei-Sheng Chin <wschin@outlook.com>
Co-authored-by: Tianlei Wu <tlwu@microsoft.com>
Co-authored-by: Edward Chen <18449977+edgchen1@users.noreply.github.com>
Co-authored-by: Tracy Sharpe <42477615+tracysh@users.noreply.github.com>
Co-authored-by: David Medine <david.eric.medine@gmail.com>
Co-authored-by: David Medine <david.medine@brainproducts.com>
Co-authored-by: Ori Levari <ori.levari@microsoft.com>
Co-authored-by: Ori Levari <orlevari@microsoft.com>
Co-authored-by: Guoyu Wang <62914304+gwang-msft@users.noreply.github.com>
Co-authored-by: Chen Fu <chenfucs@gmail.com>
* Remove support for custom ops from the base minimal build, as they contribute too much binary growth to an Android build.
Add ability to explicitly enable custom op support in a minimal build.
Change one minimal build CI to test adding custom op support (unit tests are run in that build to validate)
1. Merge Nuget CPU pipeline, Java CPU pipeline, C-API pipeline into a single one.
2. Enable compile warnings for cuda files(*.cu) on Windows.
3. Enable static code analysis for the Windows builds in these jobs. For example, this is our first time scanning the JNI code.
4. Fix some warnings in the training code.
5. Enable code sign for Java. Previously we forgot it.
6. Update TPN.txt to remove Jemalloc.
Move CudaKernel from cuda_common.h to a new separate header, cuda_kernel.h. Update include sites to use cuda_kernel.h instead if they need CudaKernel. Inclusions of cuda_common.h are now more lightweight.
Make corresponding changes for ROCM execution provider code.
Other minor cleanup.
* Next round of changes.
Remove inclusion of ONNX schema header
Exclude custom registry related things
Move IsConstantInitializer from graph_utils to Graph as it's needed in a minimal build and graph_utils is excluded.
* Initial set of changes to start disabling code in the minimal build. Breaking changes into multiple PRs so they're more easily reviewed. Focus on InferenceSession, Model and Graph here. SessionState will be next.
Needs to be integrated with de/serialization code before being testable so changes are all off by default.
Changes are limited to
- #ifdef'ing out code
- moving some things around so there are fewer #ifdef statements
- moving definition of some one-line methods into the header so we don't need to #ifdef out in a .cc as well
- exclude some things in the cmake setup
* Update session state and a few other places.
The core code builds if ORT_MINIMAL_BUILD is specified.
* test
* test
* add missing CUDA header include
* debug
* fix
* fix python package for dnnl and tensorrt.
* fix
* fix windows build.
* revert
* target_link_directories for tensorrt shared lib.