onnxruntime/cmake/external
Yulong Wang 405ca49012
build ONNXRuntime into WebAssembly (#6478)
* Simplified version of WebAssembly support that keeps most of the existing data structures and adds a cmake build using Ninja and emcmake

* Clean up CMakeLists.txt and add an example to create and compute a kernel

* Load a model from bytes and remove graph building steps

* Add all cpu and contrib ops with mlas library

* WebAssembly build with Onnxruntime C/CXX API

* Use protobuf cmakefile directory instead of adding every necessary source file

* Fix invalid output at example

* add missing files

* Change an example to use Teams model and support ort mobile format

* add API for javascript

* fix input releasing in _ort_run()

* update API

* Let onnxruntime cmake build WebAssembly with option '--wasm'

* allow one-step building for wasm

* Make build script work on Linux and macOS

* Fix broken build from Windows command

* Enable unit test on building WebAssembly

* Resolve comments

* update build flags

* wasm conv improvements: 1) GemmV; 2) depthwise direct convolution 3x3; 3) direct convolution 3x3

* Cleaned mlas unittest.

* use glob

* update comments

* Update baseline due to loss scale fix (#6948)

* fix stream sync issue (#6954)

* Enable type reduction in EyeLike, Mod, random.cc CPU kernels. (#6960)

* Update EyeLike CPU kernel.

* Update Mod CPU kernel.

* Update Multinomial CPU kernel.

* Slight improvement to Pad CPU kernel binary size.

* Update RandomNormal[Like], RandomUniform[Like] CPU kernels.

* Fix warning from setting multiple MSVC warning level options. (#6917)

Fix warning from setting multiple MSVC warning level options. Replace an existing /Wn flag instead of always appending a new one.

* MLAS: quantized GEMM update (#6916)

Various updates to the int8_t GEMMs:

1) Add ARM64 udot kernel to take advantage of dot product instructions available in newer cores. Some models run 4x faster than the stock implementation we used before.
2) Refactor the x64 kernels to share common code for AVX2(u8u8/u8s8/avxvnni) vs AVX512(u8u8/u8s8/avx512vnni) to reduce binary size.
3) Extend kernels to support per-column zero points for matrix B. This is not currently wired to an operator.
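The per-column zero point semantics for matrix B described in point 3 can be sketched with a naive reference implementation (an illustration of the math only; the name `QGemmRef` is hypothetical and this is not the MLAS kernel or its API):

```cpp
#include <cstdint>
#include <vector>

// Naive reference for a u8 GEMM where matrix B has a per-column zero point:
//   C[m][n] = sum_k (A[m][k] - zp_a) * (B[k][n] - zp_b[n])
// A is M x K and B is K x N, both row-major.
std::vector<int32_t> QGemmRef(const std::vector<uint8_t>& A,
                              const std::vector<uint8_t>& B,
                              int M, int K, int N,
                              uint8_t zp_a,
                              const std::vector<uint8_t>& zp_b) {
  std::vector<int32_t> C(M * N, 0);
  for (int m = 0; m < M; ++m) {
    for (int n = 0; n < N; ++n) {
      int32_t acc = 0;
      for (int k = 0; k < K; ++k) {
        acc += (int32_t(A[m * K + k]) - int32_t(zp_a)) *
               (int32_t(B[k * N + n]) - int32_t(zp_b[n]));
      }
      C[m * N + n] = acc;
    }
  }
  return C;
}
```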

* Implement QLinearAveragePool with unit tests. (#6896)

Implement QLinearAveragePool with unit tests.

* Attention fusion detect num_heads and hidden_size automatically (#6920)

* fixed type to experimental session constructor (#6950)

* fixed type to experimental session constructor

Co-authored-by: David Medine <david.medine@brainproducts.com>

* Update onnxruntime_perf_test.exe to accept free dimension overrides (#6962)

Co-authored-by: Ori Levari <orlevari@microsoft.com>

* Fix possible fd leak in NNAPI (#6966)

* Release buffers for prepacked tensors (#6820)

Unsolved problems:

1. One test failure was caused by a bug in the cuDNN RNN kernels: they allocate a buffer and only partially initialize it, and the garbage data near the tail of the buffer caused problems on some hardware. To attack this problem in a broader sense, should we add code to our allocators so that, during a memory fuzzing test, an allocated buffer is filled with garbage before being returned to the caller?


2. Prepacking is used more widely than we know. For instance, the cuDNN RNN kernels also cache their weights: they combine several weight tensors into a single buffer and never touch the original weight tensors again. This is the same idea as pre-packing, but they didn't override the virtual function, and they never release those weight tensors, leading to wasted memory. Some other kernels appear to have similar behavior; it would be worth measuring how much memory we could save by cleaning those up too.

3. Turning off memory pattern planning does increase memory fragmentation, leading to out-of-memory errors in some training test cases. Perhaps we can revisit the idea of moving the kernel-creation stage earlier, so that during initializer deserialization we only skip tracing the initializers that will be prepacked.
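The "fill with garbage" idea from point 1 amounts to a poisoning allocator; a minimal sketch follows (the name `DebugAlloc` is illustrative and this is not ORT's allocator interface):

```cpp
#include <cstdlib>
#include <cstring>

// Debug-only allocation that poisons the fresh buffer with a recognizable
// byte pattern, so kernels that read uninitialized memory fail loudly and
// deterministically instead of only on some hardware.
void* DebugAlloc(std::size_t size) {
  void* p = std::malloc(size);
  if (p != nullptr) {
    std::memset(p, 0xCD, size);  // 0xCD: the MSVC debug-heap "clean" pattern
  }
  return p;
}
```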

* Enable type reduction for Range, ReverseSequence, ScatterND, Split, and Unique CPU kernels. (#6963)

* add CI

* fix test in ci

* fix flags for nsync in wasm build

* add copyright banner

* fix wasm source glob

* add missing exports

* resolve comments

* Perf gain on WASM GEMM by narrowing PackB from 16 columns to 4.
Remove the direct conv from the previous perf tuning that is no longer needed.
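The PackB layout change can be sketched as follows (a simplified illustration of 4-column panel packing, not the actual MLAS routine; `PackB4` is a hypothetical name and N is assumed to be a multiple of 4 to keep the sketch short):

```cpp
#include <vector>

// Pack a K x N matrix B (row-major) into panels of 4 columns: within each
// panel, the 4 values of a row are stored contiguously, so the scalar WASM
// GEMM inner loop reads packed data sequentially.
std::vector<float> PackB4(const std::vector<float>& B, int K, int N) {
  std::vector<float> packed;
  packed.reserve(static_cast<std::size_t>(K) * N);
  for (int n0 = 0; n0 < N; n0 += 4) {   // one 4-column panel at a time
    for (int k = 0; k < K; ++k) {
      for (int j = 0; j < 4; ++j) {
        packed.push_back(B[k * N + n0 + j]);
      }
    }
  }
  return packed;
}
```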

* fix build break introduced by latest master merge

* fix build break in mlasi.h

* resolve all comments except MLAS

* Rewrite the 3 PackB-related functions separately for WASM_SCALAR rather than using #ifdef in each,
plus other changes in MLAS according to PR feedback.

* More complete scalar path in sgemm from Tracy.

* Fix edge case handling in the 3x3 depthwise conv2d kernel:
  *) support inputs with W == 1 or H == 1
  *) recalculate accurate pad_right and pad_bottom
  *) support a hidden pad_right == 2 or pad_bottom == 2 when W == 1 or H == 1 and there is no left/top padding
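The pad recalculation above can be illustrated with a small helper (the name `EffectivePadRight` is hypothetical, not the kernel's actual code):

```cpp
#include <algorithm>

// Effective right (or, symmetrically, bottom) padding actually consumed by
// the last kernel window, derived from the output width, stride, and left
// padding. For a 3x3 kernel with W == 1, out_w == 1 and no left padding this
// yields 2 -- the "hidden pad_right == 2" case mentioned above.
int EffectivePadRight(int in_w, int kernel_w, int stride_w,
                      int pad_left, int out_w) {
  int last_start = (out_w - 1) * stride_w - pad_left;  // x of the last window
  return std::max(0, last_start + kernel_w - in_w);
}
```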

* Add more test coverage for depthwise conv from Tracy.
Fix one typo according to PR feedback.

* resolve comments

* replace typedef by using

* do not use throw in OrtRun()

* output error message

Co-authored-by: Sunghoon <35605090+hanbitmyths@users.noreply.github.com>
Co-authored-by: Lei Zhang <zhang.huanning@hotmail.com>
Co-authored-by: Wei-Sheng Chin <wschin@outlook.com>
Co-authored-by: Tianlei Wu <tlwu@microsoft.com>
Co-authored-by: Edward Chen <18449977+edgchen1@users.noreply.github.com>
Co-authored-by: Tracy Sharpe <42477615+tracysh@users.noreply.github.com>
Co-authored-by: David Medine <david.eric.medine@gmail.com>
Co-authored-by: David Medine <david.medine@brainproducts.com>
Co-authored-by: Ori Levari <ori.levari@microsoft.com>
Co-authored-by: Ori Levari <orlevari@microsoft.com>
Co-authored-by: Guoyu Wang <62914304+gwang-msft@users.noreply.github.com>
Co-authored-by: Chen Fu <chenfucs@gmail.com>
2021-04-06 16:18:10 -07:00
..
coremltools@523d5e03d8 Initial version of CoreML EP (#6392) 2021-01-27 10:43:17 -08:00
cub@c3cceac115
cxxopts@3c73d91c0b Introduce training changes. 2020-03-11 14:39:03 -07:00
date@e7e1482087
dlpack@e1e11e0d55 Post merge update for ORTModule 2021-03-16 20:11:59 -07:00
eigen@d10b27fe37 Update eigen to the latest to support C++20 (#4817) 2020-08-17 10:19:48 -07:00
emsdk@8b32b7def8 build ONNXRuntime into WebAssembly (#6478) 2021-04-06 16:18:10 -07:00
FeaturizersLibrary@fd5fe3de50 FeaturizersLibrary update and add variadic Input/Output to TimeSeriesImputer (#3674) 2020-04-24 08:53:00 -07:00
flatbuffers@6df40a2471 Move flatbuffers to 1.12 release (#5392) 2020-10-07 09:23:03 -07:00
googletest@703bd9caab Upgrade gtest to the latest version (#2827) 2020-01-13 20:16:48 -08:00
json@d98bf0278d Add provision in ORT for session options to be parsed when available via model file (#2449) 2019-12-03 16:56:07 -08:00
libprotobuf-mutator@7a2ed51a6b Onnxruntime fuzzing (#4341) 2020-07-06 16:34:34 -07:00
mimalloc@2d54553b7a Use a custom allocator for temporary buffers in reduction_ops.cc (#2775) 2020-02-23 16:04:30 +10:00
mp11@21cace4e57 Op kernel type reduction infrastructure. (#6466) 2021-01-28 07:27:19 -08:00
nsync@436617053d Update nsync 2020-02-20 11:25:34 -08:00
onnx@fe2433d3dd pull onnx latest commit (#7102) 2021-03-29 11:00:38 -07:00
onnx-tensorrt@99296a4462 pull onnx latest commit (#7102) 2021-03-29 11:00:38 -07:00
optional-lite@4acf4553ba Upgrade optional implementation to https://github.com/martinmoene/optional-lite. (#5563) 2020-11-03 15:27:47 -08:00
protobuf@498de9f761 Upgrade protobuf to 3.11.3 2020-02-12 14:47:00 -08:00
re2@30cad26715
SafeInt Revert to using release SafeInt repo now that it supports a build with exceptions disabled. (#5233) 2020-09-22 06:29:28 +10:00
tensorboard@373eb09e4c Introduce training changes. 2020-03-11 14:39:03 -07:00
tvm@eab844a872 update tvm submodule (#4284) 2020-06-19 14:51:18 -07:00
wil@e8c599bca6 Add DirectML Execution Provider (#2057) 2019-10-15 06:13:07 -07:00
dml.cmake Update DirectML 1.4.1 to 1.4.2 for ORT 1.7 (#6780) 2021-02-23 10:52:10 -08:00
dnnl.cmake Add GPU support for DNNL endpoint (#6741) 2021-03-09 09:40:42 -08:00
eigen.cmake apply eigen patch only for ACL. 2019-11-05 13:53:53 -08:00
featurizers.cmake Fix WCOS/Win32 linking bugs (#3126) 2020-03-19 08:52:40 -07:00
FindNumPy.cmake
jemalloc.cmake
mimalloc.cmake Use a custom allocator for temporary buffers in reduction_ops.cc (#2775) 2020-02-23 16:04:30 +10:00
onnx_minimal.cmake pull onnx latest commit (#7102) 2021-03-29 11:00:38 -07:00
pybind11.cmake Add python 3.9 support (#5874) 2020-11-30 12:02:48 -08:00
pyxir.cmake Initial release of Vitis-AI Execution Provider (#3771) 2020-05-19 05:32:32 -07:00
zlib.cmake