Commit graph

260 commits

Author SHA1 Message Date
Yulong Wang
f4b2d3af2b
Upgrade emsdk to 3.1.3 (#10577) 2022-02-28 23:52:41 -08:00
Dmitri Smirnov
2679711bee
Refactor transformers and other code to reduce memory allocation calls (#10523)
Work on minimizing memory management calls by
  reducing the number of allocations and copies.
  Replace std::unordered_set with InlinedHashSet
  and add usage of InlinedVector.
  Employ std::move() to minimize copying and memory allocations.
  Remove copying of the const shared data into each of the
  PropagateCast transformer instances.
  Move inlined_containers.h header to include/common.
  Adjust AsSpan implementation for C++ < 17.
2022-02-24 16:17:14 -08:00
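
A minimal sketch of the allocation-reducing pattern this commit describes; InlinedHashSet and InlinedVector are assumed here to be thin aliases over Abseil's flat_hash_set and InlinedVector (the actual typedefs live in inlined_containers.h), and CollectNames is a made-up example function:

```cpp
// Illustrative only -- not onnxruntime code.
#include <string>
#include <utility>
#include <vector>
#include "absl/container/flat_hash_set.h"
#include "absl/container/inlined_vector.h"

template <typename T>
using InlinedHashSet = absl::flat_hash_set<T>;    // stand-in for the std::unordered_set replacement
template <typename T>
using InlinedVector = absl::InlinedVector<T, 8>;  // small-size-optimized, avoids heap for short lists

// De-duplicate names without per-element heap allocations in the common small case.
InlinedVector<std::string> CollectNames(std::vector<std::string> inputs) {
  InlinedHashSet<std::string> seen;
  InlinedVector<std::string> out;
  for (auto& name : inputs) {
    if (seen.insert(name).second) {
      out.push_back(std::move(name));  // std::move avoids copying the string
    }
  }
  return out;
}
```
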
Alexey Gladyshev
7dc7529ec8
[TVM EP] Integrate tests for TVM EP into public onnxruntime CI (#10505)
* add support for bool type

* add TVM EP support for tests

* include TVM EP in python test pool

* fix pylint

* moved technical imports to a separate file

* clean up post build actions & move _ld_preload.py extension to CMake level

* add files to include TVM EP in CI

* implement custom logger for TVM

* replace TVM logging with ONNX RT logging

* update link for TVM EP tutorial

* clean up TVM EP cmake

* add pybind auto enabling for TVM EP

* fix blank spaces

* code review fixes

* replace print with comment

* add list of EPs without TVM EP

* enable onnx tests

* disable contrib ops and ml ops

* reuse Dockerfile.ubuntu

* Move install_tvm_test_dependencies.sh out of Docker context dir, update build definition.

Co-authored-by: Edward Chen <18449977+edgchen1@users.noreply.github.com>
2022-02-24 16:24:23 +01:00
Valery Chernov
1cdc23aba4
[TVM EP] Rename Standalone TVM (STVM) Execution Provider to TVM EP (#10260)
* update java API for STVM EP. Issue is from PR#10019

* use_stvm -> use_tvm

* rename stvm worktree

* STVMAllocator -> TVMAllocator

* StvmExecutionProviderInfo -> TvmExecutionProviderInfo

* stvm -> tvm for cpu_targets. resolve conflict between the onnxruntime::tvm and original tvm namespaces

* STVMRunner -> TVMRunner

* StvmExecutionProvider -> TvmExecutionProvider

* tvm::env_vars

* StvmProviderFactory -> TvmProviderFactory

* rename factory funcs

* StvmCPUDataTransfer -> TvmCPUDataTransfer

* small clean

* STVMFuncState -> TVMFuncState

* USE_TVM -> NUPHAR_USE_TVM

* USE_STVM -> USE_TVM

* python API: providers.stvm -> providers.tvm. clean TVM_EP.md

* clean build scripts #1

* clean build scripts, java frontend and others #2

* once more clean #3

* fix build of nuphar tvm test

* final transfer stvm namespace to onnxruntime::tvm

* rename stvm->tvm

* NUPHAR_USE_TVM -> USE_NUPHAR_TVM

* small fixes for correct CI tests

* clean after rebase. Last renaming stvm to tvm, separate TVM and Nuphar in cmake and build files

* update CUDA support for TVM EP

* roll back cuDNN home check

* ERROR for non-positive input shape dimension instead of WARNING

* update documentation for CUDA

* small corrections after review

* update GPU description

* update GPU description

* misprints were fixed

* cleaned up error msgs

Co-authored-by: Valery Chernov <valery.chernov@deelvin.com>
Co-authored-by: KJlaccHoeUM9l <wotpricol@mail.ru>
Co-authored-by: Thierry Moreau <tmoreau@octoml.ai>
2022-02-15 10:21:02 +01:00
Edward Chen
c43c1691ad
Enable transpose optimizer in minimal extended build (#10349)
Enable transpose optimizer and infrastructure it depends on in a minimal extended build.
2022-01-31 09:41:04 -08:00
Guoyu Wang
5f0ba31890
Remove coremltools submodule *security vulnerability* and copy the coreml model schema (#10424)
* remove coremltools submodule

* update cgmanifest

* Copy proto files directly from coremltools
2022-01-28 12:48:48 -08:00
Xavier Dupré
481b96d32a
STVM, NUPHAR: remove tvm from submodules list, check pointers are not null. (#10211)
* STVM: check pointers are not null.
* remove tvm submodule
* add missing include(FetchContent)
* add target tvm
* fix stvm test
* extend cgmanifest with dependencies of tvm
2022-01-27 20:31:13 +01:00
Yulong Wang
847801f5be
[wasm] update emscripten v2.0.34 (#10391) 2022-01-26 14:46:02 -08:00
Dmitri Smirnov
7e092a7e3f
Reduce number of memory allocations based on a customer profiling case (#10193)
Add abseil and inlined containers typedefs
Introduce TensorShapeVector for shape building.
Use gsl::span<const T> to make interfaces accept different types of vector-like args.
Introduce InlinedShapeVectorT for shape capacity typed instantiations
Refactor cuda slice along with provider shared interfaces
Refactor Concat, Conv, Pad
Build with Conv, Einsum and ConvTranspose refactored.
Remove TensorShape::GetDimsAsVector()
Refactor SliceIterator and SliceIteratorBase
Refactor broadcast
Refactor Pads for twice as long
Remove memory planner intermediate shapes vector
Refactor orttraining
Fix passing TensorShapeVector to tests
Remove abseil copy and submodule, use FetchContent_Declare/Fetch; patch with a separate command
Make RocmAsyncBuffer accept anything convertible to span. Adjust Linux GPU pipeline.
2022-01-24 10:40:46 -08:00
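
The span-taking interface style described above can be sketched as follows (illustrative only; it uses Microsoft GSL and standard containers rather than the actual TensorShapeVector type, and NumElements is a hypothetical helper):

```cpp
#include <array>
#include <cstdint>
#include <functional>
#include <numeric>
#include <vector>
#include <gsl/gsl>

// Accepting gsl::span<const int64_t> lets callers pass any contiguous
// vector-like argument without first converting to std::vector.
int64_t NumElements(gsl::span<const int64_t> dims) {
  return std::accumulate(dims.begin(), dims.end(), int64_t{1},
                         std::multiplies<int64_t>{});
}

int main() {
  std::vector<int64_t> v{2, 3, 4};
  std::array<int64_t, 2> a{5, 6};
  return static_cast<int>(NumElements(v) + NumElements(a));  // 24 + 30
}
```
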
Tongliang Liao
1d3b34cc92 Add .git suffix to github URL.
Although GitHub works with both, this is more precise.
Having an extension also makes it easier to match with a regex when we want to inject code to reroute traffic to our own git mirror.
2022-01-03 14:38:35 -08:00
Valery Chernov
b327e89efa
Standalone TVM Executor Provider (#10019)
* squashed commit for standalone tvm execution provider

* critical fix for correct python build with stvm ep

* get tuning log file from ep options. It has priority over AUTOTVM_TUNING_LOG

* updates and fixes

* update parsing of stvm provider options

* add support of external data for onnx model

* add conditional dump of subgraphs

* remove unused code

* get input tensor shapes through provider options. get output shapes for fixed input ones by TVM API

* support AUTO_TVM tuning log file inside ORT. Selector for Ansor and Auto_TVM is provider option (tuning_type)

* add fp16

* add functionality to convert model layout to NHWC if needed. The necessary parameter was added to STVM provider options

* fix license text in header. fix log format

* small fixes

* fix issues from flake8

* remove model proto construction from GetCapability

* reserve memory for vector of DLTensors

* add simple tutorial for STVM EP

* STVM docs

* jroesch/tvm -> apache/tvm

* remove dead code, unnecessary logs and comments

* fix in readme

* improve tutorial notebook

* tvm update

* update STVM_EP.md

* fix default value

* update STVM_EP.md

* some TODOs for the future development

* shorten long lines

* add hyperlink to STVM_EP.md

* fix Linux CI error

* fix error in csharp test

Co-authored-by: Jared Roesch <jroesch@octoml.ai>
Co-authored-by: Valery Chernov <valery.chernov@deelvin.com>
Co-authored-by: KJlaccHoeUM9l <wotpricol@mail.ru>
2021-12-15 16:59:20 -08:00
George Wu
16274beb6f
update TensorRT EP to use TensorRT 8.2 (#9981)
* update base image from 11.4.0 to 11.4.2

* update Linux TRT GPU pipeline to TRT 8.2

* update onnx-tensorrt to 8.2-GA

* disable failing TensorRT 8.2 tests.

* update pad test.

* fix

* update win trt ci pipeline to trt 8.2

* test run with cuda 11.4 and cudnn 8.2

* increase timeout

* revert

* revert

* update packaging pipelines to use trt 8.2

* fix typo

* update trt gpu perf pipeline to trt 8.2

* increase timeout

* delete deprecated ci-perf-pipeline.yml

* bump timeout

* adjust timeout packaging
2021-12-15 15:59:31 -08:00
George Nash
d0b08af37a
Implementation of QAttention for the DNNL execution provider (#10004)
* Add QAttention to DNNL EP

Add QAttention to DNNL EP (limited support and disabled for GPU)

update ONEDNN version to 2.4.4

bug fix in GetCapability

add memory debug print

Signed-off-by: Wang <zhaoyang.wang@intel.com>

* Address Code Review + MatMulInteger Fix

clean up code and add comments

fix MatMulInteger and add a fusion rule to enable initialized vector weight zero points of 0s

update DNNL_TAG to v2.5

Signed-off-by: Wang <zhaoyang.wang@intel.com>

* Linux Compile Fix + rollback ONEDNN to 2.4.4

Signed-off-by: Zhaoyang Wang <zhaoyang.wang@intel.com>

* Fix QAttention Debug build

Signed-off-by: Wang <zhaoyang.wang@intel.com>

* Fix QAttention build if USE_DNNL not specified

Signed-off-by: George Nash <george.nash@intel.com>

Co-authored-by: Wang <zhaoyang.wang@intel.com>
Co-authored-by: MTC <63478620+jeyblu@users.noreply.github.com>
2021-12-10 21:50:13 -08:00
Dmitri Smirnov
a7f649db7c
Enable proper override using MIMalloc (#9944)
Redirect memory allocations to MiMalloc and advance its version to v2.0.3
Refactor for a universal ifdef
2021-12-07 17:56:58 -08:00
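
A hedged sketch of what the redirection looks like at the call level; USE_MIMALLOC is used here as a stand-in for whatever define the actual "universal ifdef" checks, and AllocRaw/FreeRaw are hypothetical helpers:

```cpp
#include <cstddef>
#include <cstdlib>
#if defined(USE_MIMALLOC)
#include <mimalloc.h>  // mi_malloc/mi_free from mimalloc v2.x
#endif

void* AllocRaw(std::size_t size) {
#if defined(USE_MIMALLOC)
  return mi_malloc(size);    // served by mimalloc
#else
  return std::malloc(size);  // default system allocator
#endif
}

void FreeRaw(void* p) {
#if defined(USE_MIMALLOC)
  mi_free(p);
#else
  std::free(p);
#endif
}
```
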
Dwayne Robinson
e0ffc30a0b Update to 1.8.0 2021-11-19 04:44:32 -08:00
Dwayne Robinson
99afb87a02 Update DirectML 1.5.1 to 1.8.0 for ORT1.10 2021-11-15 21:17:25 -08:00
Hariharan Seshadri
b5f7bb7d10
Update ONNX (#9462) 2021-10-29 10:33:40 -07:00
Changming Sun
d83adaaf9f
Remove optional-lite (#9424) 2021-10-22 16:45:45 -07:00
Changming Sun
406f1629c1
Remove Featurizers code (#9300) 2021-10-20 10:20:35 -07:00
Vincent Wang
39dc6ea8a3
Fix to_dlpack Failure on PyTorch-1.10 (#9151)
* workaround to_dlpack failure in new pt version

* add torch code link
2021-09-24 09:48:07 +08:00
ke1337
6e83392ff1
Bump up TVM version to avoid conflict with existing one (#9159)
* Bump up tvm version

* Bump up onnxruntime-tvm version

There are some C++17-related fixes in TVM

Co-authored-by: KeDengMS <kedeng@microsoft.com>
2021-09-22 17:39:19 -07:00
Zuwei Zhao
ff66cfdfa6
Enable linking in exception throwing support library when building onnxruntime wasm. (#8973)
* Enable linking in exception throwing support library when building onnxruntime WebAssembly containing onnxruntime-extensions.

* Add flag in build.py to enable linking the exception throwing support library.

* Update onnxruntime-extensions document and bind custom_ops build flag with use_extensions.

* Update doc.

* Update cgmanifest.json.

Co-authored-by: Zuwei Zhao <zuzhao@microsoft.com>
2021-09-10 22:09:16 +08:00
stevenlix
a9776d1c70
Add QDQ model support in TensorRT EP (#8969)
* disable setting dynamic range for QDQ model

* update cgmanifest

* Update cgmanifest.json
2021-09-03 19:33:34 -07:00
Zuwei Zhao
89e8bff121
Enable selecting custom ops in onnxruntime-extensions. (#8826)
* Enable selecting custom ops in onnxruntime-extensions.

* Move cmake_helper.py.

* Remove over-indented spaces.

* Add doc.

* Remove onnxruntime-extensions from git submodules; the user should pass the path of onnxruntime-extensions for the build.

* Modify doc.

* Remove argument --enable_onnxruntime_extensions and use --onnxruntime_extensions_path.

* Fix build error.

* Fix build error.

* Use onnxruntime_extensions_path.

* support both submodule and external source folders

* refinement

* Update cgmanifest.json

* Support building onnxruntime-extensions from either git submodule or pre-pulled path.

* Update doc.

* more standard name

* update docs

* add the copyright header

Co-authored-by: Zuwei Zhao <zuzhao@microsoft.com>
Co-authored-by: Wenbing Li <wenbingl@outlook.com>
Co-authored-by: Wenbing Li <10278425+wenbingl@users.noreply.github.com>
2021-08-27 21:45:52 -07:00
Yulong Wang
e8564d6597
[js/web] update emsdk to v2.0.26 (#8653)
* update emsdk to v2.0.26

* fix pooling build warning

* fix build break

* use pragma diagnostic semantic only when __GNUC__ is defined

* fix build break

* disable AttentionPastState_dynamic
2021-08-26 15:31:34 -07:00
Jorn Tuyls
9053e1522d
Check for Python_EXECUTABLE in pyxir.cmake to fix Vitis AI EP build (#8631)
Co-authored-by: Jorn Tuyls <jornt.tuyls@gmail.com>
2021-08-24 08:39:50 -07:00
Changming Sun
4bfff45859
Downgrade Eigen (#8817) 2021-08-23 18:06:23 -07:00
KeDengMS
d0ff2621ee
[Nuphar] Fix Windows build in VS 2019 (#8728)
Update TVM to fix c++17 build break in VS 2019
Remove tvm::nnvm from build
2021-08-13 16:13:34 -07:00
George Nash
e695cd304a
Dnnl refactor (#8627)
* dnnl ep rework

    rework DnnlTensor, DnnlNode, DnnlSubgraph to support arbitrary graph topology and tensor data types

    rework GetCapability to claim nodes in graph greedily from node topological ordering and delay creation of DnnlSubgraph until Compile

    rework compile to have DnnlSubgraphPrimitive as the object to handle primitive creation and execution
        instead of thread local primitive pool which duplicates intermediate memory allocated by the EP across threads

    DnnlSubgraphPrimitive provides helpers to handle many common functions for each dnnl primitive builder and becomes the centralized place to store input, output, intermediate and initializer memories, etc.
        it provides functions to obtain input memories with automatic reordering/reshaping and moving between engines
        it provides interfaces to add a primitive, set output memory for a single node, etc.

    add CONCURRENT_EXEC compile flag for the dnnl library, as without it a convolution primitive cannot be created and executed on different threads

    enable unit tests to run on dnnl ep as well if built with dnnl ep

    add dnnl ep support for MatMulInteger

* Add Relu to the DNNL refactor

Signed-off-by: George Nash <george.nash@intel.com>

* Add Convolution op to the DNNL rework

Signed-off-by: George Nash <george.nash@intel.com>

* Add Pooling ops to the DNNL rework

This adds the following ops:
    - AveragePool
    - GlobalAveragePool
    - GlobalMaxPool
    - MaxPool

Note: Pooling with dilation is not yet supported.
Note: GlobalLpPool, LpPool, MaxRoiPool, and MaxUnpool are not supported yet.

Signed-off-by: George Nash <george.nash@intel.com>

* Add Sum op to the DNNL rework

Signed-off-by: George Nash <george.nash@intel.com>

* Add ConvGrad op to the DNNL rework

Signed-off-by: George Nash <george.nash@intel.com>

* Add MaxPoolGrad and AveragePoolGrad ops to DNNL rework

Signed-off-by: George Nash <george.nash@intel.com>

* Added lrn operator to the refactored code

Signed-off-by: chethan.palangoutu.keshava@intel.com

* Added ReduceMean DNNL op to the refactor code

Signed-off-by: Chethan Palangotu Keshava <chethan.palangotu.keshava@intel.com>

* Added Softmax DNNL op for the refactored code

Signed-off-by: Chethan Palangotu Keshava <chethan.palangotu.keshava@intel.com>

* Added BatchNorm DNNL op inference-only for refactored code

Signed-off-by: Chethan Palangotu Keshava <chethan.palangotu.keshava@intel.com>

* Added Binary Ops to DNNL rework

Signed-off-by: Wang <zhaoyang.wang@intel.com>

* Added ReluGrad to DNNL Rework

Signed-off-by: Wang <zhaoyang.wang@intel.com>

* Update OneDNN tag to v2.3

Signed-off-by: Wang <zhaoyang.wang@intel.com>

* Added support for memory up to dim size 12

this is to fix the CI test cases that contain binary ops of input dim
size > 5

Signed-off-by: Wang <zhaoyang.wang@intel.com>

* Prevent claiming support for float16 and bfloat16 when only float is supported

The string.find call used previously was causing the code to claim support
for float16 and bfloat16 when we only supported float. We now explicitly
check for the data type, or for the data type with a 7-letter prefix,
i.e. prefixed with "tensor(".

Signed-off-by: George Nash <george.nash@intel.com>

* Disable uint8 mul and div, improve type conversion

Disable mul_uint8 and div_uint8 test cases as they use modulo for
overflow handling while onednn uses saturation

improve type conversion by using an enum instead of string comparison, as well
as adding more types

Signed-off-by: Wang <zhaoyang.wang@intel.com>

Co-authored-by: Wang <zhaoyang.wang@intel.com>
Co-authored-by: Chethan Palangotu Keshava <chethan.palangotu.keshava@intel.com>
2021-08-13 14:15:43 -07:00
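
For context, a bare oneDNN v2.x primitive lifecycle (a standalone ReLU), i.e. the engine/stream/memory/primitive plumbing that DnnlSubgraphPrimitive centralizes per subgraph; this is an illustrative sketch, not EP code:

```cpp
#include <vector>
#include "dnnl.hpp"

int main() {
  dnnl::engine eng(dnnl::engine::kind::cpu, 0);
  dnnl::stream strm(eng);

  std::vector<float> data(8, -1.0f);
  dnnl::memory::desc md({1, 8}, dnnl::memory::data_type::f32,
                        dnnl::memory::format_tag::nc);
  dnnl::memory src(md, eng, data.data());
  dnnl::memory dst(md, eng);

  // op desc -> primitive desc -> primitive, then execute on the stream.
  dnnl::eltwise_forward::desc d(dnnl::prop_kind::forward_inference,
                                dnnl::algorithm::eltwise_relu, md, 0.0f);
  dnnl::eltwise_forward relu(dnnl::eltwise_forward::primitive_desc(d, eng));
  relu.execute(strm, {{DNNL_ARG_SRC, src}, {DNNL_ARG_DST, dst}});
  strm.wait();
  return 0;
}
```
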
stevenlix
f00933c41a
Update TensorRT parser to the latest (#8712)
* update trt parser to the latest

* update cgmanifest

* update cgmanifest

* update setup_env_trt to cuda11.4

* Update setup_env_trt.bat
2021-08-12 18:10:51 -07:00
Ashwini Khade
96eb9810ba
Update onnx (#8458)
* updates for picking pnnx commit

* add tests filter to c# tests

* plus test fixes

* fix versioning for contrib ops

* fix tests

* test filter for optional ops

* more versioning related updates

* fix test

* fix layernorm spec

* more updates

* update docs

* add more test filters

* more filters

* update binary size threshold

* update docs

* plus more fixes

* updates per review

* update to release commit

* add filters for optional type tests

* plus updates
2021-08-05 09:21:44 -07:00
stevenlix
d14b08d09c
Update onnx-tensorrt parser and cgmanifest (#8585)
* update onnx-tensorrt parser and cgmanifest.json

* update cgmanifest
2021-08-02 18:55:33 -07:00
stevenlix
ee99fb400c
Upgrade TensorRT to v8.0.1 (#8512)
* update onnx-tensorrt parser to master

* disable unsupported tests

* add cuda sm 75 for T4

* update tensorrt pipeline

* update trt pipelines

* update trt pipelines

* Update linux-gpu-tensorrt-ci-pipeline.yml

* update trt cid pipeline

* Update linux-gpu-tensorrt-ci-pipeline.yml

* Update Tensorrt Windows build pool and TensorRT/CUDA/CuDNN version

* update to cuda11.4 in trt ci pipeline

* update base image to cuda11.4

* update packaging pipeline to cuda11.4

* clean up

* remove cuda11.1 and cuda11.3 docker file

* disable unsupported tensorrt tests at runtime

* Update linux-multi-gpu-tensorrt-ci-pipeline.yml
2021-08-02 11:20:31 -07:00
Zuwei Zhao
0a5b75f5cd
Update submodule onnxruntime-extensions. (#8282)
* Update submodule onnxruntime-extensions to latest.

* Add document for onnxruntime-extensions.

* Update cgmanifest.json for onnxruntime-extensions.

* Add example in JavaScript.

Co-authored-by: Zuwei Zhao <zuzhao@microsoft.com>
2021-07-13 10:21:11 +08:00
Chen Fu
df4cb6f301
Adding pytorch cpuinfo as dependency (#8178)
The PyTorch cpuinfo library allows us to query current CPU features, micro-architecture, cache size, etc. This information is needed for targeted performance optimizations.

Unfortunately it does not work under Windows/ARM. We need to develop our own solution later
2021-07-12 14:21:12 -07:00
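
A small sketch of the kind of queries cpuinfo makes possible (illustrative only, not the actual onnxruntime usage; the x86 feature query compiles to a false stub on other architectures):

```cpp
#include <cstdio>
#include <cpuinfo.h>

int main() {
  if (!cpuinfo_initialize()) return 1;
  std::printf("cores: %u, AVX2: %d\n",
              cpuinfo_get_cores_count(),
              static_cast<int>(cpuinfo_has_x86_avx2()));
  if (const cpuinfo_cache* l2 = cpuinfo_get_l2_cache(0)) {
    std::printf("L2 cache: %u bytes\n", l2->size);
  }
  cpuinfo_deinitialize();
  return 0;
}
```
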
Zuwei Zhao
b46310b349
Integrate onnxruntime-extensions into onnxruntime. (#8143)
Co-authored-by: Zuwei Zhao <zuzhao@microsoft.com>
2021-07-01 09:34:03 -07:00
Changming Sun
c716b56f26
Update C++ Standard from 14 to 17 (#8041)
Switched the code to C++17. To build ONNX Runtime on old distros like CentOS 7, you need to install a newer GCC from additional repos. If you build onnxruntime with the newer GCC, typically the resulting binary can't be distributed to other places because it depends on the new GCC's runtime libraries, something the stock OS doesn't have. But on RHEL/CentOS it can be better. We use Red Hat devtoolset 8/9/10 with CentOS 7 to build our code. The new library features (like std::filesystem) that don't exist in the old C++ runtime will be statically linked into the applications, with some restrictions:

1. GCC has a dual ABI, but we can only use the old one. It means std::string is still copy-on-write and std::list::size() is still O(n). Also, if you build onnxruntime on CentOS 7 and link it with binaries that were built on CentOS 8 or Ubuntu with the new ABI and export C++ symbols directly (instead of using a C API), it won't work.

2. We still can't use std::optional. It is a limitation coming from macOS. We will solve it when we get macOS 11 build machines. It won't be too long.

3. Please avoid using C++17 in CUDA files (*.cu), and also in the *.h files they include (like core/framework/float16.h). This is because CUDA 10.2 doesn't support C++17. You are welcome to use the new features in any *.cc files.
2021-06-25 14:08:01 -07:00
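
A hedged illustration of restriction 3 above, keeping C++17 out of headers that *.cu translation units include; the NameRef alias and IsFloat16 helper are hypothetical, not actual onnxruntime code:

```cpp
#include <string>

#if !defined(__CUDACC__) && __cplusplus >= 201703L
#include <string_view>
using NameRef = std::string_view;    // C++17 path, only seen by regular *.cc files
#else
using NameRef = const std::string&;  // pre-C++17 fallback visible to nvcc (CUDA 10.2)
#endif

inline bool IsFloat16(NameRef type) {
  return type == "tensor(float16)";
}
```
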
Negin Raoof
80b7b134bf
Adding optional ops in contrib ops (#7946)
* Added optional const spec
2021-06-24 13:16:31 -07:00
Changming Sun
275796a165
Update googletest to latest commit to fix build issues with GCC11 (#7984) 2021-06-08 16:06:53 -07:00
Changming Sun
4ecbae43b2
Use GCC 10 in Linux CPU CI pipeline (#7985) 2021-06-08 11:53:29 -07:00
Yulong Wang
faae347d9f
[wasm] upgrade emsdk version to 2.0.23 (#7893)
* upgrade emsdk version to 2.0.23

* fix build

* override gmock build options
2021-06-02 12:26:24 -07:00
Chen Fu
e26c668a9b
add google benchmark as direct dependency (#7762)
Co-authored-by: Chen Fu <fuchen@microsoft.com>

Description:
This change adds the google benchmark git repo as a submodule in the onnxruntime repo.

Motivation and Context
Currently we have benchmarking code that depends on google benchmark. The version we are using has cross compilation issues for ARM CPUs. Recent changes in Google benchmark fixed these issues.

Another problem is that we now rely on ONNX to pull in Google Benchmark, an indirect dependency. Updating ONNX involves complex steps, and rightly so. However, updating the Google Benchmark dependency should not be hindered by these processes.
2021-05-19 20:12:17 -07:00
ashbhandare
56e993a434
Bump to rel-1.9.1 (#7684) 2021-05-13 18:41:28 -07:00
Changming Sun
41e370c2b3
Update protobuf to 3.16 (#7616) 2021-05-07 14:09:23 -07:00
Adrian Tsai
70e67ddd2b
Update DirectML version to 1.5.1 and enable ARM/ARM64 builds with DML (#7511)
* Update DirectML to version 1.5.1
* Enable --use_dml with ARM and ARM64
* Add ARM/ARM64 binaries to nuget packages
2021-04-30 00:49:30 -07:00
Changming Sun
7b003967b1
Add static code analyzer to Windows CPU/GPU CI builds and fix the warnings (#7489) 2021-04-29 11:54:57 -07:00
Ashwini Khade
75e054cd33
pick onnx release candidate (#7177)
* pick onnx release candidate

* fix typo

* filter batchnorm tests

* add implementation for reshape 14

* add identity op kernel for opset 14

* fix typo

* update onnx commit

* update commit to latest master

* add hashes for new kernel registrations and update 1

* TEST commit

* update onnx back to right commit

* Update onnx to latest in rel-1.9.0

* temp fix

* remove nonzeroshapesetter transformer

* pick rel branch latest commit

* fix build failures

* fix build failures

* fix build failures

* update the commit to latest in release branch

* add test filters for not implemented op14 ops in c# tests

* plus review comments
2021-04-22 23:57:09 -07:00
Changming Sun
b4cfa88bf7
Update protobuf to the latest version (#7396) 2021-04-21 10:30:06 -07:00
jeyblu
61ba9ac1bb
matmul in dnnl (#7311)
* update dnnl to v2.2

* dnnl matmul
2021-04-12 08:03:03 -07:00
Yulong Wang
405ca49012
build ONNXRuntime into WebAssembly (#6478)
* Simplified version of WebAssembly support to keep most of existing data structures and add cmake using Ninja and emcmake

* Clean up CMakeLists.txt and add an example to create and compute a kernel

* Load a model from bytes and remove graph building steps

* Add all cpu and contrib ops with mlas library

* WebAssembly build with Onnxruntime C/CXX API

* Use protobuf cmakefile directory instead of adding every necessary source file

* Fix invalid output at example

* add missing files

* Change an example to use Teams model and support ort mobile format

* add API for javascript

* fix input releasing in _ort_run()

* update API

* Let onnxruntime cmake build WebAssembly with option '--wasm'

* allow one-step building for wasm

* Make build script working on Linux and MacOS

* Fix broken build from Windows command

* Enable unit test on building WebAssembly

* Resolve comments

* update build flags

* wasm conv improvement from: 1) GemmV; 2) Depthwise direct convolution 3x3; 3) Direct convolution 3x3

* Cleaned mlas unittest.

* use glob

* update comments

* Update baseline due to loss scale fix (#6948)

* fix stream sync issue (#6954)

* Enable type reduction in EyeLike, Mod, random.cc CPU kernels. (#6960)

* Update EyeLike CPU kernel.

* Update Mod CPU kernel.

* Update Multinomial CPU kernel.

* Slight improvement to Pad CPU kernel binary size.

* Update RandomNormal[Like], RandomUniform[Like] CPU kernels.

* Fix warning from setting multiple MSVC warning level options. (#6917)

Fix warning from setting multiple MSVC warning level options. Replace an existing /Wn flag instead of always appending a new one.

* MLAS: quantized GEMM update (#6916)

Various updates to the int8_t GEMMs:

1) Add ARM64 udot kernel to take advantage of dot product instructions available in newer cores. Some models run 4x faster than the stock implementation we used before.
2) Refactor the x64 kernels to share common code for AVX2(u8u8/u8s8/avxvnni) vs AVX512(u8u8/u8s8/avx512vnni) to reduce binary size.
3) Extend kernels to support per-column zero points for matrix B. This is not currently wired to an operator.

* Implement QLinearAveragePool with unit tests. (#6896)

Implement QLinearAveragePool with unit tests.

* Attention fusion detect num_heads and hidden_size automatically (#6920)

* fixed type to experimental session constructor (#6950)

* fixed type to experimental session constructor

Co-authored-by: David Medine <david.medine@brainproducts.com>

* Update onnxruntime_perf_test.exe to accept free dimension overrides (#6962)

Co-authored-by: Ori Levari <orlevari@microsoft.com>

* Fix possible fd leak in NNAPI (#6966)

* Release buffers for prepacked tensors (#6820)

Unsolved problems:

1. One test failure was caused by a bug in Cudnn rnn kernels: they can allocate a buffer and partially initialize it, and the garbage data near the tail of the buffer caused problems on some hardware. To attack this problem in a broader sense, should we add code to our allocators so that, during a memory fuzzing test, we fill an allocated buffer with garbage before returning it to the caller?


2. Prepacking is used more widely than we know. For instance, Cudnn rnn kernels also cache their weights. They mix several weight tensors together into a single buffer, and never touch the original weight tensors anymore. This is the same idea as pre-pack, but they didn't override the virtual function, and they never tried to release those weight tensors, leading to memory waste. It also seems to me that some other kernels have similar behavior. I wonder how much memory we can save if we try to clean those up too.

3. Turning off memory pattern planning does increase memory fragmentation, leading to out-of-memory errors in some training test cases. Perhaps we can revisit the idea of pushing the kernel-creation stage earlier, and then during initializer deserialization only avoid tracing those that will be prepacked.

* Enable type reduction for Range, ReverseSequence, ScatterND, Split, and Unique CPU kernels. (#6963)

* add CI

* fix test in ci

* fix flags for nsync in wasm build

* add copyright banner

* fix wasm source glob

* add missing exports

* resolve comments

* Perf gain by making packb width 4 instead of 16 in GEMM for WASM.
Remove the no-longer-needed direct conv from previous perf tuning.

* fix buildbreak introduced from latest master merge

* fix buildbreak in mlasi.h

* resolve all comments except MLAS

* rewrite the 3 packb-related functions for WASM_SCALAR separately rather than using #ifdef in each,
and other changes according to PR feedback in mlas.

* More complete scalar path in sgemm from Tracy.

* Fix edge case handling in depthwise conv2d kernel 3x3, where:
  *) support input W==1 and H==1
  *) recalculate accurate pad_right and pad_bottom
  *) support hidden pad_right == 2 or pad_bottom == 2 when W==1 or H==1 and no pad left/top

* Add more test coverage for conv depthwise from Tracy.
Fix one typo according to PR.

* resolve comments

* replace typedef by using

* do not use throw in OrtRun()

* output error message

Co-authored-by: Sunghoon <35605090+hanbitmyths@users.noreply.github.com>
Co-authored-by: Lei Zhang <zhang.huanning@hotmail.com>
Co-authored-by: Wei-Sheng Chin <wschin@outlook.com>
Co-authored-by: Tianlei Wu <tlwu@microsoft.com>
Co-authored-by: Edward Chen <18449977+edgchen1@users.noreply.github.com>
Co-authored-by: Tracy Sharpe <42477615+tracysh@users.noreply.github.com>
Co-authored-by: David Medine <david.eric.medine@gmail.com>
Co-authored-by: David Medine <david.medine@brainproducts.com>
Co-authored-by: Ori Levari <ori.levari@microsoft.com>
Co-authored-by: Ori Levari <orlevari@microsoft.com>
Co-authored-by: Guoyu Wang <62914304+gwang-msft@users.noreply.github.com>
Co-authored-by: Chen Fu <chenfucs@gmail.com>
2021-04-06 16:18:10 -07:00
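
For reference, the per-column zero-point support mentioned in the "MLAS: quantized GEMM update" item above corresponds to the following scalar computation (an illustrative reference only, not the vectorized MLAS kernels):

```cpp
#include <cstdint>

// C[m][n] = sum_k (A[m][k] - a_zp) * (B[k][n] - b_zp[n]), accumulated in 32-bit.
void QGemmU8U8Ref(int M, int N, int K,
                  const std::uint8_t* A, std::uint8_t a_zp,
                  const std::uint8_t* B, const std::uint8_t* b_zp_per_col,
                  std::int32_t* C) {
  for (int m = 0; m < M; ++m) {
    for (int n = 0; n < N; ++n) {
      std::int32_t acc = 0;
      for (int k = 0; k < K; ++k) {
        acc += (static_cast<std::int32_t>(A[m * K + k]) - a_zp) *
               (static_cast<std::int32_t>(B[k * N + n]) - b_zp_per_col[n]);
      }
      C[m * N + n] = acc;
    }
  }
}
```
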