Commit graph

57 commits

Author SHA1 Message Date
Yulong Wang
e8564d6597
[js/web] update emsdk to v2.0.26 (#8653)
* update emsdk to v2.0.26

* fix pooling build warning

* fix build break

* use pragma diagnostic semantic only when __GNUC__ is defined

* fix build break

* disable AttentionPastState_dynamic
2021-08-26 15:31:34 -07:00
stevenlix
ee99fb400c
Upgrade TensorRT to v8.0.1 (#8512)
* update onnx-tensorrt parser to master

* disable unsupported tests

* add cuda sm 75 for T4

* update tensorrt pipeline

* update trt pipelines

* update trt pipelines

* Update linux-gpu-tensorrt-ci-pipeline.yml

* update trt cid pipeline

* Update linux-gpu-tensorrt-ci-pipeline.yml

* Update Tensorrt Windows build pool and TensorRT/CUDA/CuDNN version

* update to cuda11.4 in trt ci pipeline

* update base image to cuda11.4

* update packaging pipeline to cuda11.4

* clean up

* remove cuda11.1 and cuda11.3 docker file

* disable unsupported tensorrt tests at runtime

* Update linux-multi-gpu-tensorrt-ci-pipeline.yml
2021-08-02 11:20:31 -07:00
Chen Fu
df4cb6f301
Adding pytorch cpuinfo as dependency (#8178)
The PyTorch cpuinfo library allows us to query the current CPU's features, micro-architecture, cache sizes, etc. This information is needed for targeted performance optimizations.

Unfortunately, it does not work under Windows/ARM; we will need to develop our own solution for that platform later.
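A minimal sketch of the kind of query this dependency enables, assuming the upstream pytorch/cpuinfo C API (illustrative only, not code from this commit):

```cpp
#include <cpuinfo.h>
#include <cstdio>

int main() {
  // cpuinfo must be initialized before any query.
  if (!cpuinfo_initialize()) {
    std::printf("cpuinfo initialization failed\n");
    return 1;
  }
  // Core count and ISA features can drive kernel/thread-pool selection.
  std::printf("cores: %u\n", cpuinfo_get_cores_count());
#if CPUINFO_ARCH_X86 || CPUINFO_ARCH_X86_64
  std::printf("AVX2: %d\n", cpuinfo_has_x86_avx2());
#endif
  // L1 data cache size can drive GEMM blocking parameters.
  if (const cpuinfo_cache* l1d = cpuinfo_get_l1d_cache(0)) {
    std::printf("L1d size: %u bytes\n", l1d->size);
  }
  return 0;
}
```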
2021-07-12 14:21:12 -07:00
Zuwei Zhao
b46310b349
Integrate onnxruntime-extensions into onnxruntime. (#8143)
Co-authored-by: Zuwei Zhao <zuzhao@microsoft.com>
2021-07-01 09:34:03 -07:00
Yulong Wang
9b5f749176
[wasm] emsdk: allow to install emscripten only (#7961) 2021-06-07 09:45:02 -07:00
Yulong Wang
faae347d9f
[wasm] upgrade emsdk version to 2.0.23 (#7893)
* upgrade emsdk version to 2.0.23

* fix build

* override gmock build options
2021-06-02 12:26:24 -07:00
Chen Fu
e26c668a9b
add google benchmark as direct dependency (#7762)
Co-authored-by: Chen Fu <fuchen@microsoft.com>

Description:
This change adds the Google Benchmark git repo as a submodule of the onnxruntime repo.

Motivation and Context
Currently we have benchmarking code that depends on Google Benchmark. The version we are using has cross-compilation issues for ARM CPUs; recent changes in Google Benchmark fixed these issues.

Another problem is that we currently rely on ONNX to pull in Google Benchmark as an indirect dependency. Updating ONNX involves complex steps, and rightly so; however, updating the Google Benchmark dependency should not be hindered by that process.
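For context, a minimal Google Benchmark micro-benchmark looks roughly like this (an illustrative sketch, not code from this commit):

```cpp
#include <benchmark/benchmark.h>
#include <cstring>
#include <vector>

// Measure memcpy throughput for a few buffer sizes.
static void BM_Memcpy(benchmark::State& state) {
  std::vector<char> src(static_cast<size_t>(state.range(0)), 'x');
  std::vector<char> dst(src.size());
  for (auto _ : state) {
    std::memcpy(dst.data(), src.data(), src.size());
    benchmark::DoNotOptimize(dst.data());
  }
  state.SetBytesProcessed(static_cast<int64_t>(state.iterations()) * state.range(0));
}
BENCHMARK(BM_Memcpy)->Arg(64)->Arg(4096)->Arg(1 << 20);

BENCHMARK_MAIN();
```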
2021-05-19 20:12:17 -07:00
Yulong Wang
405ca49012
build ONNXRuntime into WebAssembly (#6478)
* Simplified version of WebAssembly support to keep most of the existing data structures and add cmake using Ninja and emcmake

* Clean up CMakeLists.txt and add an example to create and compute a kernel

* Load a model from bytes and remove graph building steps

* Add all cpu and contrib ops with mlas library

* WebAssembly build with Onnxruntime C/CXX API

* Use protobuf cmakefile directory instead of adding every necessary source file

* Fix invalid output at example

* add missing files

* Change an example to use Teams model and support ort mobile format

* add API for javascript

* fix input releasing in _ort_run()

* update API

* Let onnxruntime cmake build WebAssembly with option '--wasm'

* allow one-step building for wasm

* Make build script working on Linux and MacOS

* Fix broken build from Windows command

* Enable unit test on building WebAssembly

* Resolve comments

* update build flags

* wasm conv improvement from: 1) GemmV; 2) Depthwise direct convolution 3x3; 3) Direct convolution 3x3

* Cleaned mlas unittest.

* use glob

* update comments

* Update baseline due to loss scale fix (#6948)

* fix stream sync issue (#6954)

* Enable type reduction in EyeLike, Mod, random.cc CPU kernels. (#6960)

* Update EyeLike CPU kernel.

* Update Mod CPU kernel.

* Update Multinomial CPU kernel.

* Slight improvement to Pad CPU kernel binary size.

* Update RandomNormal[Like], RandomUniform[Like] CPU kernels.

* Fix warning from setting multiple MSVC warning level options. (#6917)

Fix warning from setting multiple MSVC warning level options. Replace an existing /Wn flag instead of always appending a new one.

* MLAS: quantized GEMM update (#6916)

Various updates to the int8_t GEMMs:

1) Add ARM64 udot kernel to take advantage of dot product instructions available in newer cores. Some models run 4x faster than the stock implementation we used before.
2) Refactor the x64 kernels to share common code for AVX2(u8u8/u8s8/avxvnni) vs AVX512(u8u8/u8s8/avx512vnni) to reduce binary size.
3) Extend kernels to support per-column zero points for matrix B. This is not currently wired to an operator.

* Implement QLinearAveragePool with unit tests. (#6896)

Implement QLinearAveragePool with unit tests.

* Attention fusion detect num_heads and hidden_size automatically (#6920)

* fixed type to experimental session constructor (#6950)

* fixed type to experimental session constructor

Co-authored-by: David Medine <david.medine@brainproducts.com>

* Update onnxruntime_perf_test.exe to accept free dimension overrides (#6962)

Co-authored-by: Ori Levari <orlevari@microsoft.com>

* Fix possible fd leak in NNAPI (#6966)

* Release buffers for prepacked tensors (#6820)

Unsolved problems:

1. One test failure was caused by a bug in the cuDNN RNN kernels: they can allocate a buffer and only partially initialize it, and the garbage data near the tail of the buffer caused problems on some hardware. To attack this problem in a broader sense, should we add code to our allocators so that, during a memory fuzzing test, an allocated buffer is filled with garbage before being returned to the caller?


2. Prepacking is used more widely than we know. For instance, the cuDNN RNN kernels also cache their weights: they mix several weight tensors together into a single buffer and never touch the original weight tensors again. This is the same idea as pre-packing, but they didn't override the virtual function and never try to release those weight tensors, leading to memory waste. It also seems that some other kernels have similar behavior. I wonder how much memory we could save if we cleaned those up too.

3. Turning off memory pattern planning does increase memory fragmentation, leading to out-of-memory errors in some training test cases. Perhaps we can revisit the idea of pushing the kernel-creation stage earlier, so that during initializer deserialization we only avoid tracing the initializers that will be prepacked.

* Enable type reduction for Range, ReverseSequence, ScatterND, Split, and Unique CPU kernels. (#6963)

* add CI

* fix test in ci

* fix flags for nsync in wasm build

* add copyright banner

* fix wasm source glob

* add missing exports

* resolve comments

* Perf gain by making PackB 4 wide instead of 16 in GEMM for WASM.
Remove the direct conv that is no longer needed from the previous perf tuning.

* fix buildbreak introduced from latest master merge

* fix buildbreak in mlasi.h

* resolve all comments except MLAS

* rewrite the 3 PackB-related functions for WASM_SCALAR separately rather than using #ifdef in each,
and other changes according to PR feedback in mlas.

* More complete scalar path in sgemm from Tracy.

* Fix edge case handling in depthwise conv2d kernel 3x3, where:
  *) support input W==1 and H==1
  *) recalculate accurate pad_right and pad_bottom
  *) support hidden pad_right == 2 or pad_bottom == 2 when W == 1 or H == 1 and there is no left/top padding

* Add more test coverage for conv depthwise from Tracy.
Fix one typo according to PR.

* resolve comments

* replace typedef by using

* do not use throw in OrtRun()

* output error message

Co-authored-by: Sunghoon <35605090+hanbitmyths@users.noreply.github.com>
Co-authored-by: Lei Zhang <zhang.huanning@hotmail.com>
Co-authored-by: Wei-Sheng Chin <wschin@outlook.com>
Co-authored-by: Tianlei Wu <tlwu@microsoft.com>
Co-authored-by: Edward Chen <18449977+edgchen1@users.noreply.github.com>
Co-authored-by: Tracy Sharpe <42477615+tracysh@users.noreply.github.com>
Co-authored-by: David Medine <david.eric.medine@gmail.com>
Co-authored-by: David Medine <david.medine@brainproducts.com>
Co-authored-by: Ori Levari <ori.levari@microsoft.com>
Co-authored-by: Ori Levari <orlevari@microsoft.com>
Co-authored-by: Guoyu Wang <62914304+gwang-msft@users.noreply.github.com>
Co-authored-by: Chen Fu <chenfucs@gmail.com>
2021-04-06 16:18:10 -07:00
Thiago Crepaldi
3348b8485f Post merge update for ORTModule
Changes include:
* Revert Event Pool changes
* Add copyright and revert unrelated changes
* Add DLPack as submodule and remove to_dlpack and from_dlpack from public API
* Update golden numbers for DHP Parallel tests
* Update ORTTrainer unit test numbers
* Rollback to DLPack v0.3
* Disable flaky test
* Update third party notices and CG manifest file
* Minor refactoring of ORTValue API
2021-03-16 20:11:59 -07:00
stevenlix
53eb948f4c
Upgrade TensorRT to v7.2.2 (#6452)
* upgrade to TensorRT 7.2.2

* extend GPU tensorrt CI timeout to 150 minutes

* update docker image name

* disable user interaction to avoid the TensorRT container getting stuck when installing tzdata

* upgrade to libssl1.1 for ubuntu20.04

* remove libicu60 from ubuntu20.04

* add libicu66 for ubuntu20.04

* debug

* llvm

* llvm

* disable ReverseSequenceTest.InvalidInput

* disable ReverseSequenceTest.InvalidInput

* fix issues

* fix issues

* Update linux-gpu-tensorrt-ci-pipeline.yml

* disable warning 4458 for TensorRT parser

* update onnx-tensorrt submodule

* disable warnings for TensorRT parser

* update onnx-tensorrt submodule to include latest bug fixes

* update setup_env_trt

* update pool for win trt ci pipeline

Co-authored-by: George Wu <jywu@microsoft.com>
2021-02-18 04:30:47 -08:00
Edward Chen
d850fa63bf
Op kernel type reduction infrastructure. (#6466)
Add infrastructure to support type reduction in Op kernel implementations.
Update Cast and IsInf CPU kernels to use it.
2021-01-28 07:27:19 -08:00
Guoyu Wang
c05adb1147
Initial version of CoreML EP (#6392) 2021-01-27 10:43:17 -08:00
stevenlix
76dbd88526
Expose graph ModelPath to TensorRT shared library (#6353)
* Update graph_viewer.cc

* Update tensorrt_execution_provider.cc

* Update graph_viewer.h

* Update tensorrt_execution_provider.cc

* Update tensorrt_execution_provider.cc

* Update provider_api.h

* Update provider_bridge_ort.cc

* Update provider_interfaces.h

* Update provider_interfaces.h

* expose GraphViewer ModelPath API to TRT shared lib

* add modelpath to compile

* update

* add model_path to onnx tensorrt parser

* use GenerateMetaDefId to generate unique TRT kernel name

* use GenerateMetaDefId to generate unique TRT engine name

* fix issue

* Update tensorrt_execution_provider.cc

* remove GetVecHash

* Update tensorrt_execution_provider.h

* convert wchar_t to char for tensorrt parser

* update tensorrt parser to include latest changes

* fix issues

* Update tensorrt_execution_provider.cc

* merge trt parser latest change

* add PROVIDER_DISALLOW_ALL(Path)
2021-01-26 10:41:31 -08:00
Tracy Sharpe
fcd9fc9b6d
remove gemmlowp submodule (#6341) 2021-01-13 15:54:37 -08:00
Tixxx
32c67c2944
Deprecating Horovod and refactored Adasum computations (#5468)
deprecated horovod submodule
refactored adasum logic to be ort-native
added tests for native kernel and e2e tests
2020-12-17 16:21:33 -08:00
edgchen1
07bd4ef470
Upgrade optional implementation to https://github.com/martinmoene/optional-lite. (#5563) 2020-11-03 15:27:47 -08:00
Changming Sun
1514509fd7
Update protobuf submodule url (#5477) 2020-10-14 02:35:38 -07:00
Scott McKay
e0719a1073
Revert to using release SafeInt repo now that it supports a build with exceptions disabled. (#5233) 2020-09-22 06:29:28 +10:00
stevenlix
c794c88ae0
Solve name conflict in TensorRT engine caching (#5128)
* fix hash conflict

* Add verbose for engine deserialization and destroy old engine memory if new engine is generated

* update parser

* Update tensorrt_execution_provider.cc

* use a better hash algorithm

* Update tensorrt_execution_provider.cc
2020-09-11 09:12:56 -07:00
gwang-msft
fde7a2c848
Temporarily switch SafeInt to a fork for an option to disable exceptions (#5041)
* Removed submodule

* Add safeint fork
2020-09-02 23:21:39 -07:00
gwang-msft
82bc21e35e
Namespace change on ort flatbuffers schema (#4886)
* correct some errors in the flatbuffers schema, move flatbuffers submodule to cmake/external

* update the ort flatbuffers schema to use less namespace

* minor update

Co-authored-by: gwang0000 <62914304+gwang0000@users.noreply.github.com>
2020-08-21 17:43:11 -07:00
gwang-msft
d4d52056be
Flatbuffers schema for serialization of the onnxruntime::model/graph (#4870)
* add flatbuffers submodule

* test version of flat buffer schema

* test version of flat buffer schema

* minor updates

* add serialization of the value info, group defs in different namespace

* update comments

* update cgmanifest.json

* update namespace, changed typeinfovalue to use union, added root_type and file_identifier

* add new container type, add max_node_index to graph

* add serializing session state

* addressed review comments

* minor updates

Co-authored-by: gwang0000 <62914304+gwang0000@users.noreply.github.com>
2020-08-20 18:45:43 -07:00
stevenlix
7acef875bb
Fix bugs in TensorRT (#4780)
* fix bugs

* Move -Wno-deprecated-declarations to target compile flag
2020-08-13 16:09:27 -07:00
stevenlix
77c69a0325
Upgrade TensorRT to v7.1.3.4 (#4704)
* upgrade to TensorRT 7.1.3.4

* Upgrade onnx-tensorrt parser for TensorRT 7.1.3.4

* fix format issue

* fix format issue

* fix format issue

* Update tensorrt_execution_provider.cc

* change cmake version to 3.14

* Remove --msvc_toolset 14.16

* change to onnxruntime::make_unique

* use onnxruntime::make_unique

* disable some tests for TensorRT

* disable some tests for TensorRT

* Update upsample_op_test.cc

* Update tile_op_test.cc

* disable some tests for TensorRT

* Update constant_of_shape_test.cc

* update parser

* Update Dockerfile.ubuntu_tensorrt
2020-08-07 17:43:56 -07:00
gwang-msft
c2ec3b734b
[Android NNAPI EP] Remove dependency on external JD/DNNLibrary (#4576)
* remove dependency of external jd-dnnlibrary

* remove extra variables not used any more

* update /cgmanifest.json
2020-07-22 14:08:12 -07:00
EronsJ
632b2896f3
Onnxruntime fuzzing (#4341)
* Add protobuf mutator library as a git submodule

* Added files and instructions to build the protobuf mutator library in CMake

* Added fuzzing flag to build system and added fuzzing dependency library. To run the fuzzing test, use the flags --fuzz_testing --build_shared_lib --use_full_protobuf --cmake_generator 'Visual Studio 16 2019'

* Added src files and build instructions for the main fuzzing engine

* Removed Random number generation test from inside the engine

* Added license header to files

* Removed all pep8 violations introduced by this change and other E501 violations
2020-07-06 16:34:34 -07:00
Derek Murray
9d748afff1
Set spdlog submodule branch to "master" explicitly. (#4087)
The default branch for the spdlog repository on GitHub recently changed from "master"
to "v1.x", which has a different API for `syslog_sink::syslog_sink()`. This breaks
builds of the server for anyone who has checked out the submodules since that change.

Fixes #4077.

Co-authored-by: Derek Murray <demurra@microsoft.com>
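For reference, the branch a submodule tracks is recorded in .gitmodules; a sketch of the relevant entry (the submodule path shown is illustrative, not necessarily the one used in this repo):

```
[submodule "server/external/spdlog"]
    path = server/external/spdlog
    url = https://github.com/gabime/spdlog.git
    branch = master
```

With the branch recorded, `git submodule update --remote` pulls from that branch rather than the repository's default branch.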
2020-05-29 17:53:40 -07:00
Edward Chen
deac467683 Merge remote-tracking branch 'origin/master' into edgchen1/merge_from_master 2020-04-23 20:50:33 +00:00
Mikhail Kuznetsov
3cf3595579
Replaced spaces on tabs (#3555) 2020-04-22 15:16:19 -07:00
Edward Chen
e542cfd0e0 Introduce training changes. 2020-03-11 14:39:03 -07:00
stevenlix
f4a5d17294
Upgrade to CUDA10.2 for TensorRT (#3084)
* Switch to CUDA10.2

* Update win-gpu-tensorrt-ci-pipeline.yml

* Update win-gpu-tensorrt-ci-pipeline.yml

* remove dynamic_shape

* update onnx-tensorrt submodule

* check if input shape is specified for TensorRT subgraph input and enable some TensorRT unit tests

* fix format issue

* add shape inference instruction for TensorRT

* update according to the reviews

* Update win-gpu-tensorrt-ci-pipeline.yml
2020-02-25 05:36:01 -08:00
Scott McKay
a1db87b382
Add SafeInt bounds checking to memory allocation size calculations. (#3022)
* Add SafeInt bounds checking to memory allocation size calculations.

* Fix TensorRT library includes
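A rough illustration of the pattern this enables, assuming the SafeInt library's SafeInt<T> template (a hedged sketch, not code from this commit):

```cpp
#include "SafeInt.hpp"  // header name as used by the upstream SafeInt library
#include <cstddef>
#include <cstdlib>

// Compute count * element_size with overflow checking before allocating.
void* AllocateBuffer(size_t count, size_t element_size) {
  // SafeInt arithmetic reports overflow (by default by throwing SafeIntException)
  // instead of silently wrapping around.
  size_t bytes = SafeInt<size_t>(count) * element_size;
  return std::malloc(bytes);
}
```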
2020-02-20 11:41:03 -08:00
stevenlix
da653ccdac
Upgrade TensorRT to version 7.0.0.11 (#2973)
* update onnx-tensorrt submodule to trt7 branch

* add fp16 option for TRT7

* switch to master branch of onnx tensorrt

* update submodule

* update to TensorRT7.0.0.11

* update to onnx-tensorrt for TensorRT7.0

* switch to private branch due to issues in master branch

* remove trt_onnxify

* disable warnings c4804 for TensorRT parser

* disable warnings c4702 for TensorRT parser

* add back sanity check of shape tensor input in the parser

* disable some warnings for TensorRT7

* change fp16 threshold for TensorRT

* update onnx-tensorrt parser

* fix cycle issue in faster-rcnn and add cycle detection in GetCapability

* Update TensorRT container to v20.01

* Update TensorRT image name

* Update linux-multi-gpu-tensorrt-ci-pipeline.yml

* Update linux-gpu-tensorrt-ci-pipeline.yml

* disable rnn tests for TensorRT

* disable rnn tests for TensorRT

* disabled some unit test for TensorRT

* update onnx-tensorrt submodule

* update build scripts for TensorRT

* formatting the code

* Update TensorRT-ExecutionProvider.md

* Update BUILD.md

* Update tensorrt_execution_provider.h

* Update tensorrt_execution_provider.cc

* Update win-gpu-tensorrt-ci-pipeline.yml

* use GetEnvironmentVar function to get env variables and switch to Win-GPU-2019 agent pool for win CI build

* change tensorrt path

* change tensorrt path

* fix win ci build issue

* update code based on the reviews

* fix build issue

* roll back to cuda10.0

* add RemoveCycleTest for TensorRT

* fix windows ci build issues

* fix ci build issues

* fix file permission

* fix out of range issue for max_workspace_size_env
2020-02-12 07:03:58 -08:00
Changming Sun
bc9f55df47
Change eigen submodule url (#2975) 2020-02-06 11:22:55 -08:00
Tiago Koji Castro Shibata
cff266e1b9 Fix cgmanifest.json generating script (#2770)
* Fix protobuf submodule name

* Workaround pygit2 bug
2020-01-14 14:59:07 -08:00
Dmitri Smirnov
aa37dea598
Convert ExternalProject Featurizers into git submodule (#2834)
Add git submodule for Featurizer library.
  Update cmake to build from the git submodule.
2020-01-14 10:32:06 -08:00
Changming Sun
c7a9c6b488
Split onnxruntime server to a separated folder (#2744) 2019-12-27 11:21:23 -08:00
stevenlix
293b15480b Add dynamic shape support in TensorRT execution provider (#2450)
* remove onnx-tensorrt submodule

* add new onnx-tensorrt submodule (experiment) for trt6

* update engine build for trt6

* update compile and compute for tensorrt6.0

* Update tensorrt_execution_provider.cc

* Update tensorrt_execution_provider.cc

* Update tensorrt_execution_provider.cc

* Update tensorrt_execution_provider.cc

* switch to onnx-tensorrt master for TensorRT6

* Update tensorrt_execution_provider.cc

* Handle dynamic batch size and add memcpy in TensorRT EP

* update test cases

* Update tensorrt_execution_provider.cc

* update onnx-tensorrt submodule

* Update Dockerfile.ubuntu_tensorrt

* Update Dockerfile.ubuntu_tensorrt

* Update run_dockerbuild.sh

* Update run_dockerbuild.sh

* Update install_ubuntu.sh

* Update concat_op_test.cc

* Update tensorrt_execution_provider.cc

* Upgrade TensorRT to version 6.0.1.5

* Update onnxruntime_providers.cmake

* Update CMakeLists.txt

* Update reduction_ops_test.cc

* Update install_ubuntu.sh

* Update Dockerfile.ubuntu_tensorrt

* Update Dockerfile.tensorrt

* Update BUILD.md

* Update run_dockerbuild.sh

* Update install_ubuntu.sh

* Update onnxruntime_providers.cmake

* Update install_ubuntu.sh

* Update install_ubuntu.sh

* Update gemm_test.cc

* Update gather_op_test.cc

* Update CMakeLists.txt

* Removed submodule

* update onnx-tensorrt submodule

* update header file

* Removed submodule

* add submodule onnx-tensorrt from Kevin's branch shape-test

* add debugging code

* Update tensorrt_execution_provider.cc

* Update tensorrt_execution_provider.cc

* merge master

* Removed submodule

* update onnx-tensorrt submodule

* add more changes for dynamic shapes

* Update tensorrt_execution_provider.cc

* update for dynamic shape

* update dynamic shape processing

* fix logger issue

* remove submodule onnx-tensorrt

* add submodule onnx-tensorrt

* add env variable min_subgraph_size

* remove redundancy

* update document

* use onnxruntime::make_unique

* fix multi-run issue

* remove some tests to save CI build time

* Add dynamic shape test

* Update TensorRT-ExecutionProvider.md

* Add example of running Faster R-CNN model on TensorRT EP

* Add more details on env variables

* update environment variables

* Update tensorrt_basic_test.cc

* Update model tests

* Update tensor_op_test.cc

* remove --use_full_protobuf

* Update build.py
2019-12-03 23:18:33 -08:00
Hariharan Seshadri
5c2e474751
Add provision in ORT for session options to be parsed when available via model file (#2449)
* Initial commit

* Fix gitmodules

* Nits

* Nits

* Updates

* Update

* More changes

* Updates

* Update

* Some updates

* More changes

* Update

* Update

* Merge

* Update

* Updates

* More changes

* Update

* Fix nits

* Updates

* Fix warning

* Fix build

* Add comment

* PR feedback

* PR feedback

* Updates

* Updates

* Update

* More changes

* Fix build break

* Comment test for now

* Updates

* Updates

* PR feedback

* Updates

* Nits

* Add tests

* Fix build

* Fix build

* Fix build

* Fix build break

* Fix build

* Nits

* PR feedback

* More change

* Expose GetSessionOptions in pybind logic and add unit test for python

* Fix build

* PR feedback

* PR feedback
2019-12-03 16:56:07 -08:00
Adrian Tsai
4090d0d0de
Add DirectML Execution Provider (#2057)
This change adds a new execution provider powered by [DirectML](https://aka.ms/DirectML).

DirectML is a high-performance, hardware-accelerated DirectX 12 library for machine learning on Windows. DirectML provides GPU acceleration for common machine learning tasks across a broad range of supported hardware and drivers.

The DirectML execution provider is capable of greatly improving evaluation time of models using commodity GPU hardware, without sacrificing broad hardware support or requiring vendor-specific extensions to be installed.

**Note** that the DML EP code was moved verbatim from the existing WindowsAI project, which is why it doesn't yet conform to the onnxruntime coding style. This is something that can be fixed later; we would like to keep formatting/whitespace changes to a minimum for the time being to make it easier to port fixes from WindowsAI to ORT during this transition.

Summary of changes:
* Initial commit of DML EP files under onnxruntime/core/providers/dml
* Add cmake entries for building the DML EP and for pulling down the DirectML redist using nuget
* Add a submodule dependency on the Windows Implementation Library (WIL)
* Add docs under docs/execution_providers/DirectML-ExecutionProvider.md
* Add support for DML EP to provider tests and perf tests
* Add support for DML EP to fns_candy_style_transfer sample
* Add entries to the C ABI for instantiating the DML EP
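A minimal sketch of how an application can opt into this EP through the C ABI entry point mentioned above (assuming a DML-enabled Windows build; the model path is illustrative):

```cpp
#include <onnxruntime_cxx_api.h>
#include <dml_provider_factory.h>

int main() {
  Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "dml-demo");
  Ort::SessionOptions options;
  // The DML EP requires memory pattern optimization to be disabled and sequential execution.
  options.DisableMemPattern();
  options.SetExecutionMode(ORT_SEQUENTIAL);
  // Append the DirectML EP on device 0; unsupported ops fall back to the CPU EP.
  Ort::ThrowOnError(OrtSessionOptionsAppendExecutionProvider_DML(options, 0));
  Ort::Session session(env, L"model.onnx", options);  // illustrative model path
  return 0;
}
```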
2019-10-15 06:13:07 -07:00
stevenlix
544e53e24e Update TensorRT to version 6.0.1.5 (#1966)
* remove onnx-tensorrt submodule

* add new onnx-tensorrt submodule (experiment) for trt6

* update engine build for trt6

* update compile and compute for tensorrt6.0

* Update tensorrt_execution_provider.cc

* Update tensorrt_execution_provider.cc

* Update tensorrt_execution_provider.cc

* Update tensorrt_execution_provider.cc

* switch to onnx-tensorrt master for TensorRT6

* Update tensorrt_execution_provider.cc

* Handle dynamic batch size and add memcpy in TensorRT EP

* update test cases

* Update tensorrt_execution_provider.cc

* update onnx-tensorrt submodule

* Update Dockerfile.ubuntu_tensorrt

* Update Dockerfile.ubuntu_tensorrt

* Update run_dockerbuild.sh

* Update run_dockerbuild.sh

* Update install_ubuntu.sh

* Update concat_op_test.cc

* Update tensorrt_execution_provider.cc

* Upgrade TensorRT to version 6.0.1.5

* Update onnxruntime_providers.cmake

* Update CMakeLists.txt

* Update reduction_ops_test.cc

* Update install_ubuntu.sh

* Update Dockerfile.ubuntu_tensorrt

* Update Dockerfile.tensorrt

* Update BUILD.md

* Update run_dockerbuild.sh

* Update install_ubuntu.sh

* Update onnxruntime_providers.cmake

* Update install_ubuntu.sh

* Update install_ubuntu.sh

* Update gemm_test.cc

* Update gather_op_test.cc

* Update CMakeLists.txt

* Removed submodule

* update onnx-tensorrt submodule

* Add Ubuntu18.04 build option

* Add Ubuntu18.04 build option

* Add Ubuntu18.04 build option

* Add Ubuntu18.04 build option

* Remove redundancy

* Fix issue where a memcpy node is not added correctly if some nodes fall back to the CUDA EP.
e.g. after partitioning there is TRT_Node -> Cuda_node (with CPU memory expected), and we still need to add a memcpy node between them.

* update for Trt Windows build

* Update onnxruntime_providers.cmake

* Disable opset11 tests on TensorRT

* Update pad_test.cc

* Update build.py

* update scripts for ubuntu18.04

* Disable warning for Windows build
2019-10-06 10:40:53 -07:00
Dmitri Smirnov
d1b1cdc5c4
Replace GSL with GSL-LITE submodule and fix up refs (#1920)
Remove gsl submodule and replace with a local copy of gsl-lite
  Refactor for onnxruntime::make_unique
  gsl::span size and index are now size_t
  Remove lambda auto argument type detection.
  Remove constexpr from fail_fast in gsl due to Linux not being happy.
  Comment out std::stream support due to the macOS std lib being broken.
  Move make_unique into include/core/common so it is accessible for server builds.
  Relax requirements for onnxruntime/test/providers/cpu/ml/write_scores_test.cc
  due to x86 build.
  Add ONNXRUNTIME_ROOT to Server Lib includes so gsl is recognized
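A small sketch of the span usage affected by the size_t change (illustrative only; the gsl-lite header name is an assumption, not necessarily how the bundled copy is included):

```cpp
#include <gsl/gsl-lite.hpp>  // header name assumption for the local gsl-lite copy
#include <vector>

// With gsl-lite here, span::size() and indexing use size_t rather than a signed index type.
float Sum(gsl::span<const float> values) {
  float total = 0.0f;
  for (size_t i = 0; i < values.size(); ++i) {
    total += values[i];
  }
  return total;
}

int main() {
  std::vector<float> v{1.0f, 2.0f, 3.0f};
  return static_cast<int>(Sum(gsl::span<const float>(v.data(), v.size())));
}
```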
2019-10-01 12:43:29 -07:00
Yulong Wang
e6ce384402
add dependency 'cub' as submodule (#1924) 2019-09-26 16:10:39 +08:00
kile0
125900c961 Enable integration with mimalloc memory allocator (#1673)
* add mimalloc submodule

* basic hooks into execution provider header and build script option

* pull mimalloc into build

* windows has to use the override vcxproj already set up, and disable bfcarena when using mimalloc

* fix import_location

* generalize build msbuild command

* add mimalloc dependency to python package as well as various commenting cleanups

* update mimalloc commit as stop gap

* include mimalloc changes from master

* create capi directory if it doesn't exist for copying mimalloc over

* disable runtime hooks and remove old comment

* temporary change to test CI

* fetch the mimalloc output name property

* uniformly call target_link_libraries

* query cmake to get the correct windows sdk to target

* revert change to trailing directory slash

* pickup windows sdk off msbuild path if possible

* copy the produced dll/so at install time, not configure time

* deal with mimalloc unimplemented atomic

* move to dev branch of mimalloc to avoid atomic issues on gcc

* for windows specify solution settings (x86) rather than individual project settings

* pin mimalloc submodule to updated commit

* typo

* Revert "temporary change to test CI"

This reverts commit 764867376936a5d307dded3cc37f00a34e3b0c96.
2019-09-13 17:12:48 -07:00
stevenlix
1c5b15c2b8
Remove memory copy between TensorRT and CUDA (#1561)
* remove memory copy between CUDA and TRT

* add info to RegisterExecutionProvider input

* use new IDeviceAllocator for trt allocator

* remove SetDefaultInputsMemoryType from TRT EP

* remove onnx-tensorrt 5.0

* add submodule onnx-tensorrt branch 5.1

* remove redundancy

* Update transformer_memcpy.cc

* Update tensorrt_execution_provider.cc

* switch to TensorRT 5.1.5.0

* update python binding

* disable failed test case on TensorRT

* Update activation_op_test.cc

* upgrade to TensorRT container 19.06

* update according to feedback

* add comments

* remove tensorrt allocator and use cuda(gpu) allocator

* update onnx-tensorrt submodule

* change ci build cuda directory name
2019-08-08 19:31:39 -07:00
Colin Versteeg
5ee0f185dc Add GRPC support to ONNX Runtime Server (#1144)
* add grpc

* add-submodule

* Revert "add-submodule"

This reverts commit e35994b25035ce310a98909658582bff759ee358.

* fix submodule

* IT BUILDS

* Initial commit of prediction_service_impl.cpp

* Server builds and runs!

* add request id, health and reflection. GRPC is done

* enable channelz for monitoring

* GRPC unit tests

* clang format

* add unit tests

* Add function tests for GRPC

* add grpc to model_zoo_tests

* revert update protobuf to 3.7.0

* update submodules

* builds but runs some gflags tests which fail

* get build working

* confine build changes to onnxruntime_server.cmake

* update build files

* code review comments

* Maik's code review comments

* update cares version to fix compilation issue

* update build to fix c-ares

* code review comments

* update cgmanifest.json

* remove extraneous file

* Klein comments.

* update ci based on discussions for go dependency

* fix tag issue

* fix build issues

* remove stray submodule

* update dockerfile and build script

* dynamic linking changes

* update build script

* code review comments

* update dockerfile

* update script for mount

* code review comments
2019-07-18 11:10:38 -07:00
Colin Versteeg
a8ff209ab6 Refactor Onnx runtime Server to only use public APIs (#1271)
* replace log sinks

* limit headers to include dir

* first changes to do dynamic linking

* wip for using cxx api

* remove weird dangling dependency

* building with tests failing

* finish updating converters

* fix const

* initial introduction of typedef

* change logging to use spdlog

* get tests passing

* clang format

* map logging levels better

* clean up unused imports

* trent cr comments

* clang-format

* code review comments

* changing buffer use to reserve

* Dynamically link

* revert tvm

* update binary uploading

* catch exceptions by const-ref

* Revert "revert tvm"

This reverts commit 387676dd1018134d15eb71fa126f7caf94380800.

* fix typo

* update versioning of lib
2019-07-04 01:08:14 -07:00
KeDengMS
0d204f3f06
Implementation of TVM codegen library (#888)
Description:

This change adds the common part of the TVM-based codegen library. It includes the following parts:
* Microsoft TVM Inventory (MTI): a set of TVM ops for neural networks, similar to TOPI
* Compiler pass for traversing ONNX graph and generate TVM ops
* Compiler pass for traversing generated graph and specify TVM schedule
* Compiler pass for handling weight layout
* Utils for debugging

Motivation and Context:

TVM is an open deep learning compiler stack for cpu, gpu and specialized accelerators. To leverage it in ONNX, we built an execution provider named Nuphar. Currently, Nuphar gets good performance on CPUs with AVX2 on quantized LSTM models.

This codegen library was part of the Nuphar execution provider. It is split out for sharing with other execution providers, as we'd like to reuse TVM on more devices.
2019-07-03 10:32:59 -07:00
daquexian
c65489a47f Initial PR for NNAPI execution provider (#1220)
* init

* Update DNNLibrary

* Update DNNLibrary, set compiler flags, it compiles now

* Add more missing flags, add test

* Update DNNLibrary

* Update Compile method, fix allocator and some other bugs

* Update DNNLibrary

* Implement CopyTensor

* Not delete state explicitly since it is managed by unique_ptr

* Add the missing files when SingleUnitTestProject is ON

* misc changes

* Fix wrong name in provider factory

* Add my own test

* Update the code for adding nodes into the graph, and add the missing initializer into the graph

* Fix the bug where re-building the graph produces extra output

* Update DNNLibrary

* Transpose nchw (ONNX) -> nhwc (NNAPI)

* Add license

* Add GetSupportedNodes method (implement it later)

* Rename onnxruntime_nnapi_test->onnxruntime_nnapi_squeezenet_test

* Update squeezenet_test.cpp after rebase master

* Remove squeezenet_test.cpp since it is almost same with the c++ sample

* Update DNNLibrary for GetSupportedNodes

* Update GetSupportedNodes

* Revert "Remove squeezenet_test.cpp since it is almost same with the c++ sample"

This reverts commit a97575fd9ff49e50ba1dc8d8154790d8cd86c48d.

* Update DNNLibrary

* Fix multiple outputs bug

* Remove GetKernelRegistry

* Revert "Revert "Remove squeezenet_test.cpp since it is almost same with the c++ sample""

This reverts commit 2a0670e9cbf10ea654111ce39e198a4be0ddd838.

* Set default memory type of NNAPI EP

* Add CPUOutput allocator

* Update DNNLibrary for multiple outputs

* Fix bug of nhwc->nchw

* Remove GetExecutionHandle()
2019-07-02 06:03:29 -07:00
stevenlix
723d5c782a
Improve TensorRT GetCapability to Enable More Models (#1012)
* Improve TensorRT GetCapability Accuracy

* Update onnxruntime_providers.cmake

* made changes based on feedback

* update unit tests for TensorRT

* update onnx-tensorrt submodule to v5.0 branch

* remove unnecessary comments

* convert int32 to int64 for inference output

* add more data types in compute

* change returns in compute

* use StatusCode as return in compute
2019-05-24 10:12:55 -07:00