Commit graph

86 commits

Author SHA1 Message Date
gwang-msft
d4d52056be
Flatbuffers schema for serialization of the onnxruntime::model/graph (#4870)
* add flatbuffers submodule

* test version of flat buffer schema

* test version of flat buffer schema

* minor updates

* add serialization of the value info, group defs in different namespace

* update comments

* update cgmanifest.json

* update namespace, changed typeinfovalue to use union, added root_type and file_identifier

* add new container type, add max_node_index to graph

* add serializing session state

* addressed review comments

* minor updates

Co-authored-by: gwang0000 <62914304+gwang0000@users.noreply.github.com>
2020-08-20 18:45:43 -07:00
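The bullets above mention a union-based typeinfo value, a root_type, and a file_identifier. As a rough illustration of those FlatBuffers schema features only — a hypothetical sketch, not the actual onnxruntime schema; every table, field, and the identifier value here are invented:

```
namespace onnxruntime.experimental.fbs;

table TensorTypeAndShape { elem_type:int32; }
table MapType { key_type:int32; value_type:int32; }

// "changed typeinfovalue to use union": a one-of over the possible kinds
union TypeInfoValue { TensorTypeAndShape, MapType }

table TypeInfo {
  denotation:string;
  value:TypeInfoValue;
}

table Graph { max_node_index:uint32; }
table Model { graph:Graph; }

root_type Model;          // entry point for a serialized buffer
file_identifier "ORTM";   // 4-byte magic tag (value assumed)
```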
stevenlix
7acef875bb
Fix bugs in TensorRT (#4780)
* fix bugs

* Move -Wno-deprecated-declarations to target compile flag
2020-08-13 16:09:27 -07:00
stevenlix
77c69a0325
Upgrade TensorRT to v7.1.3.4 (#4704)
* upgrade to TensorRT 7.1.3.4

* Upgrade onnx-tensorrt parser for TensorRT 7.1.3.4

* fix format issue

* fix format issue

* fix format issue

* Update tensorrt_execution_provider.cc

* change cmake version to 3.14

* Remove --msvc_toolset 14.16

* change to onnxruntime::make_unique

* use onnxruntime::make_unique

* disable some tests for TensorRT

* disable some tests for TensorRT

* Update upsample_op_test.cc

* Update tile_op_test.cc

* disable some tests for TensorRT

* Update constant_of_shape_test.cc

* update parser

* Update Dockerfile.ubuntu_tensorrt
2020-08-07 17:43:56 -07:00
gwang-msft
c2ec3b734b
[Android NNAPI EP] Remove dependency on external JD/DNNLibrary (#4576)
* remove dependency on external jd-dnnlibrary

* remove extra variables not used any more

* update /cgmanifest.json
2020-07-22 14:08:12 -07:00
EronsJ
632b2896f3
Onnxruntime fuzzing (#4341)
* Add protobuf mutator library as a git submodule

* Added files and instructions to build the protobuf mutator library in CMake

* Added fuzzing flag to build system and added fuzzing dependency library. To run fuzzing test use the flags --fuzz_testing --build_shared_lib --use_full_protobuf --cmake_generator 'Visual Studio 16 2019'

* Added src files and build instructions for the main fuzzing engine

* Removed Random number generation test from inside the engine

* Added license header to files

* Removed all pep8 violations introduced by this change and other E501 violations
2020-07-06 16:34:34 -07:00
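The commit message itself lists the flags needed to build the fuzzing targets. Assembled into a single invocation (the `build.bat` entry point is an assumption; the commit only names the flags):

```shell
rem Windows build with fuzzing enabled, per the flags listed in the commit
.\build.bat --fuzz_testing --build_shared_lib --use_full_protobuf ^
    --cmake_generator "Visual Studio 16 2019"
```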
Derek Murray
9d748afff1
Set spdlog submodule branch to "master" explicitly. (#4087)
The default branch for the spdlog repository on GitHub recently changed from "master"
to "v1.x", which has a different API for `syslog_sink::syslog_sink()`. This breaks
builds of the server for anyone who has checked out the submodules since that change.

Fixes #4077.

Co-authored-by: Derek Murray <demurra@microsoft.com>
2020-05-29 17:53:40 -07:00
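Pinning a submodule branch is a one-line `.gitmodules` change. A sketch of the resulting entry — the path and URL here are assumptions; only the `branch = master` line is what this fix adds:

```ini
[submodule "cmake/external/spdlog"]
	path = cmake/external/spdlog
	url = https://github.com/gabime/spdlog.git
	branch = master
```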
Edward Chen
deac467683 Merge remote-tracking branch 'origin/master' into edgchen1/merge_from_master 2020-04-23 20:50:33 +00:00
Mikhail Kuznetsov
3cf3595579
Replaced spaces with tabs (#3555) 2020-04-22 15:16:19 -07:00
Edward Chen
e542cfd0e0 Introduce training changes. 2020-03-11 14:39:03 -07:00
stevenlix
f4a5d17294
Upgrade to CUDA10.2 for TensorRT (#3084)
* Switch to CUDA10.2

* Update win-gpu-tensorrt-ci-pipeline.yml

* Update win-gpu-tensorrt-ci-pipeline.yml

* remove dynamic_shape

* update onnx-tensorrt submodule

* check if input shape is specified for TensorRT subgraph input and enable some TensorRT unit tests

* fix format issue

* add shape inference instruction for TensorRT

* update according to the reviews

* Update win-gpu-tensorrt-ci-pipeline.yml
2020-02-25 05:36:01 -08:00
Scott McKay
a1db87b382
Add SafeInt bounds checking to memory allocation size calculations. (#3022)
* Add SafeInt bounds checking to memory allocation size calculations.

* Fix TensorRT library includes
2020-02-20 11:41:03 -08:00
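The point of SafeInt here is that an allocation size like `count * element_size` can silently wrap and under-allocate. A minimal hand-rolled sketch of the same check, not SafeInt's actual API (the function name is invented):

```cpp
#include <cassert>
#include <cstddef>
#include <limits>
#include <optional>

// Returns count * elem_size, or nullopt if the multiplication would
// overflow size_t -- the silent wrap SafeInt guards against.
std::optional<std::size_t> checked_alloc_size(std::size_t count,
                                              std::size_t elem_size) {
  if (elem_size != 0 &&
      count > std::numeric_limits<std::size_t>::max() / elem_size) {
    return std::nullopt;
  }
  return count * elem_size;
}
```

SafeInt itself expresses this as a throwing wrapper type, so call sites keep ordinary arithmetic syntax instead of explicit checks.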
stevenlix
da653ccdac
Upgrade TensorRT to version 7.0.0.11 (#2973)
* update onnx-tensorrt submodule to trt7 branch

* add fp16 option for TRT7

* switch to master branch of onnx tensorrt

* update submodule

* update to TensorRT7.0.0.11

* update to onnx-tensorrt for TensorRT7.0

* switch to private branch due to issues in master branch

* remove trt_onnxify

* disable warnings c4804 for TensorRT parser

* disable warnings c4702 for TensorRT parser

* add back sanity check of shape tensor input in the parser

* disable some warnings for TensorRT7

* change fp16 threshold for TensorRT

* update onnx-tensorrt parser

* fix cycle issue in faster-rcnn and add cycle detection in GetCapability

* Update TensorRT container to v20.01

* Update TensorRT image name

* Update linux-multi-gpu-tensorrt-ci-pipeline.yml

* Update linux-gpu-tensorrt-ci-pipeline.yml

* disable rnn tests for TensorRT

* disable rnn tests for TensorRT

* disabled some unit test for TensorRT

* update onnx-tensorrt submodule

* update build scripts for TensorRT

* formatting the code

* Update TensorRT-ExecutionProvider.md

* Update BUILD.md

* Update tensorrt_execution_provider.h

* Update tensorrt_execution_provider.cc

* Update win-gpu-tensorrt-ci-pipeline.yml

* use GetEnvironmentVar function to get env variables and switch to Win-GPU-2019 agent pool for win CI build

* change tensorrt path

* change tensorrt path

* fix win ci build issue

* update code based on the reviews

* fix build issue

* roll back to cuda10.0

* add RemoveCycleTest for TensorRT

* fix windows ci build issues

* fix ci build issues

* fix file permission

* fix out of range issue for max_workspace_size_env
2020-02-12 07:03:58 -08:00
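The last two bullets (reading the workspace size through `GetEnvironmentVar` and fixing an out-of-range value) come down to parsing an environment string defensively. A hypothetical sketch — the function name and fallback policy are invented, not ORT's actual code:

```cpp
#include <cassert>
#include <cerrno>
#include <cstdlib>

// Parse a byte count from an env-var string; fall back to a default on
// empty, malformed, or out-of-range input instead of silently wrapping.
unsigned long long ParseWorkspaceSize(const char* value,
                                      unsigned long long default_bytes) {
  if (value == nullptr || *value == '\0') return default_bytes;
  errno = 0;
  char* end = nullptr;
  unsigned long long v = std::strtoull(value, &end, 10);
  if (errno == ERANGE || end == value || *end != '\0') return default_bytes;
  return v;
}
```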
Changming Sun
bc9f55df47
Change eigen submodule url (#2975) 2020-02-06 11:22:55 -08:00
Tiago Koji Castro Shibata
cff266e1b9 Fix cgmanifest.json generating script (#2770)
* Fix protobuf submodule name

* Workaround pygit2 bug
2020-01-14 14:59:07 -08:00
Dmitri Smirnov
aa37dea598
Convert ExternalProject Featurizers into git submodule (#2834)
Add git submodule for the Featurizer library.
  Update cmake to build the new submodule.
2020-01-14 10:32:06 -08:00
Changming Sun
c7a9c6b488
Split onnxruntime server to a separated folder (#2744) 2019-12-27 11:21:23 -08:00
stevenlix
293b15480b Add dynamic shape support in TensorRT execution provider (#2450)
* remove onnx-tensorrt submodule

* add new onnx-tensorrt submodule (experiment) for trt6

* update engine build for trt6

* update compile and compute for tensorrt6.0

* Update tensorrt_execution_provider.cc

* Update tensorrt_execution_provider.cc

* Update tensorrt_execution_provider.cc

* Update tensorrt_execution_provider.cc

* switch to onnx-tensorrt master for TensorRT6

* Update tensorrt_execution_provider.cc

* Handle dynamic batch size and add memcpy in TensorRT EP

* update test cases

* Update tensorrt_execution_provider.cc

* update onnx-tensorrt submodule

* Update Dockerfile.ubuntu_tensorrt

* Update Dockerfile.ubuntu_tensorrt

* Update run_dockerbuild.sh

* Update run_dockerbuild.sh

* Update install_ubuntu.sh

* Update concat_op_test.cc

* Update tensorrt_execution_provider.cc

* Upgrade TensorRT to version 6.0.1.5

* Update onnxruntime_providers.cmake

* Update CMakeLists.txt

* Update reduction_ops_test.cc

* Update install_ubuntu.sh

* Update Dockerfile.ubuntu_tensorrt

* Update Dockerfile.tensorrt

* Update BUILD.md

* Update run_dockerbuild.sh

* Update install_ubuntu.sh

* Update onnxruntime_providers.cmake

* Update install_ubuntu.sh

* Update install_ubuntu.sh

* Update gemm_test.cc

* Update gather_op_test.cc

* Update CMakeLists.txt

* Removed submodule

* update onnx-tensorrt submodule

* update header file

* Removed submodule

* add submodule onnx-tensorrt kevin's branch shape-test

* add debugging code

* Update tensorrt_execution_provider.cc

* Update tensorrt_execution_provider.cc

* merge master

* Removed submodule

* update onnx-tensorrt submodule

* add more changes for dynamic shapes

* Update tensorrt_execution_provider.cc

* update for dynamic shape

* update dynamic shape processing

* fix logger issue

* remove submodule onnx-tensorrt

* add submodule onnx-tensorrt

* add env variable min_subgraph_size

* remove redundancy

* update document

* use onnxruntime::make_unique

* fix multi-run issue

* remove some tests to save CI build time

* Add dynamic shape test

* Update TensorRT-ExecutionProvider.md

* Add example of running Faster R-CNN model on TensorRT EP

* Add more details on env variables

* update environment variables

* Update tensorrt_basic_test.cc

* Update model tests

* Update tensor_op_test.cc

* remove --use_full_protobuf

* Update build.py
2019-12-03 23:18:33 -08:00
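Several bullets configure the EP through environment variables (`min_subgraph_size`, and "more details on env variables" in the docs). A sketch of how that looks at run time — the variable names below follow the `ORT_TENSORRT_*` convention but are assumptions here; TensorRT-ExecutionProvider.md (updated in this commit) is the authoritative list:

```shell
# Assumed names; consult TensorRT-ExecutionProvider.md for the real ones.
export ORT_TENSORRT_MIN_SUBGRAPH_SIZE=5            # don't offload tiny subgraphs
export ORT_TENSORRT_MAX_WORKSPACE_SIZE=1073741824  # 1 GiB engine workspace
export ORT_TENSORRT_FP16_ENABLE=1                  # allow FP16 kernels
```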
Hariharan Seshadri
5c2e474751
Add provision in ORT for session options to be parsed when available via model file (#2449)
* Initial commit

* Fix gitmodules

* Nits

* Nits

* Updates

* Update

* More changes

* Updates

* Update

* Some updates

* More changes

* Update

* Update

* Merge

* Update

* Updates

* More changes

* Update

* Fix nits

* Updates

* Fix warning

* Fix build

* Add comment

* PR feedback

* PR feedback

* Updates

* Updates

* Update

* More changes

* Fix build break

* Comment test for now

* Updates

* Updates

* PR feedback

* Updates

* Nits

* Add tests

* Fix build

* Fix build

* Fix build

* Fix build break

* Fix build

* Nits

* PR feedback

* More change

* Expose GetSessionOptions in pybind logic and add unit test for python

* Fix build

* PR feedback

* PR feedback
2019-12-03 16:56:07 -08:00
Adrian Tsai
4090d0d0de
Add DirectML Execution Provider (#2057)
This change adds a new execution provider powered by [DirectML](https://aka.ms/DirectML).

DirectML is a high-performance, hardware-accelerated DirectX 12 library for machine learning on Windows. DirectML provides GPU acceleration for common machine learning tasks across a broad range of supported hardware and drivers.

The DirectML execution provider is capable of greatly improving evaluation time of models using commodity GPU hardware, without sacrificing broad hardware support or requiring vendor-specific extensions to be installed.

**Note** that the DML EP code was moved verbatim from the existing WindowsAI project, which is why it doesn't yet conform to the onnxruntime coding style. This is something that can be fixed later; we would like to keep formatting/whitespace changes to a minimum for the time being to make it easier to port fixes from WindowsAI to ORT during this transition.

Summary of changes:
* Initial commit of DML EP files under onnxruntime/core/providers/dml
* Add cmake entries for building the DML EP and for pulling down the DirectML redist using nuget
* Add a submodule dependency on the Windows Implementation Library (WIL)
* Add docs under docs/execution_providers/DirectML-ExecutionProvider.md
* Add support for DML EP to provider tests and perf tests
* Add support for DML EP to fns_candy_style_transfer sample
* Add entries to the C ABI for instantiating the DML EP
2019-10-15 06:13:07 -07:00
stevenlix
544e53e24e Update TensorRT to version 6.0.1.5 (#1966)
* remove onnx-tensorrt submodule

* add new onnx-tensorrt submodule (experiment) for trt6

* update engine build for trt6

* update compile and compute for tensorrt6.0

* Update tensorrt_execution_provider.cc

* Update tensorrt_execution_provider.cc

* Update tensorrt_execution_provider.cc

* Update tensorrt_execution_provider.cc

* switch to onnx-tensorrt master for TensorRT6

* Update tensorrt_execution_provider.cc

* Handle dynamic batch size and add memcpy in TensorRT EP

* update test cases

* Update tensorrt_execution_provider.cc

* update onnx-tensorrt submodule

* Update Dockerfile.ubuntu_tensorrt

* Update Dockerfile.ubuntu_tensorrt

* Update run_dockerbuild.sh

* Update run_dockerbuild.sh

* Update install_ubuntu.sh

* Update concat_op_test.cc

* Update tensorrt_execution_provider.cc

* Upgrade TensorRT to version 6.0.1.5

* Update onnxruntime_providers.cmake

* Update CMakeLists.txt

* Update reduction_ops_test.cc

* Update install_ubuntu.sh

* Update Dockerfile.ubuntu_tensorrt

* Update Dockerfile.tensorrt

* Update BUILD.md

* Update run_dockerbuild.sh

* Update install_ubuntu.sh

* Update onnxruntime_providers.cmake

* Update install_ubuntu.sh

* Update install_ubuntu.sh

* Update gemm_test.cc

* Update gather_op_test.cc

* Update CMakeLists.txt

* Removed submodule

* update onnx-tensorrt submodule

* Add Ubuntu18.04 build option

* Add Ubuntu18.04 build option

* Add Ubuntu18.04 build option

* Add Ubuntu18.04 build option

* Remove redundancy

* Fix issue where a memcpy node was not added correctly when some nodes fall back to the CUDA EP.
e.g. after partitioning, given TRT_Node -> Cuda_node (with CPU memory expected), we still need to add a memcpy node between them.

* update for Trt Windows build

* Update onnxruntime_providers.cmake

* Disable opset11 tests on TensorRT

* Update pad_test.cc

* Update build.py

* update scripts for ubuntu18.04

* Disable warning for Windows build
2019-10-06 10:40:53 -07:00
Dmitri Smirnov
d1b1cdc5c4
Replace GSL with GSL-LITE submodule and fix up refs (#1920)
Remove gsl submodule and replace with a local copy of gsl-lite
  Refactor for onnxruntime::make_unique
  gsl::span size and index are now size_t
  Remove lambda auto argument type detection.
  Remove constexpr from fail_fast in gsl due to Linux not being happy.
  Comment out std::stream support due to the MacOS std lib being broken.
  Move make_unique into include/core/common so it is accessible for server builds.
  Relax requirements for onnxruntime/test/providers/cpu/ml/write_scores_test.cc
  due to x86 build.
  Add ONNXRUNTIME_ROOT to Server Lib includes so gsl is recognized
2019-10-01 12:43:29 -07:00
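`onnxruntime::make_unique` exists because the codebase still had to build with pre-C++14 toolchains. A minimal sketch of the shim such a helper provides (name changed so as not to imply this is ORT's exact code):

```cpp
#include <cassert>
#include <memory>
#include <utility>

// C++11-compatible stand-in for std::make_unique (non-array form).
template <typename T, typename... Args>
std::unique_ptr<T> my_make_unique(Args&&... args) {
  return std::unique_ptr<T>(new T(std::forward<Args>(args)...));
}
```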
Yulong Wang
e6ce384402
add dependency 'cub' as submodule (#1924) 2019-09-26 16:10:39 +08:00
kile0
125900c961 Enable integration with mimalloc memory allocator (#1673)
* add mimalloc submodule

* basic hooks into execution provider header and build script option

* pull mimalloc into build

* windows has to use the override vcxproj already set up, and disable bfcarena when using mimalloc

* fix import_location

* generalize build msbuild command

* add mimalloc dependency to python package as well as various commenting cleanups

* update mimalloc commit as stop gap

* include mimalloc changes from master

* create capi directory if doesn't exist for mimalloc copying over

* disable runtime hooks and remove old comment

* temporary change to test CI

* fetch the mimalloc output name property

* uniformly call target_link_libraries

* query cmake to get the correct windows sdk to target

* revert change to trailing directory slash

* pickup windows sdk off msbuild path if possible

* copy the produced dll/so at install time, not configure time

* deal with mimalloc unimplemented atomic

* move to dev branch of mimalloc to avoid atomic issues on gcc

* for windows specify solution settings (x86) rather than individual project settings

* pin mimalloc submodule to updated commit

* typo

* Revert "temporary change to test CI"

This reverts commit 764867376936a5d307dded3cc37f00a34e3b0c96.
2019-09-13 17:12:48 -07:00
stevenlix
1c5b15c2b8
Remove memory copy between TensorRT and CUDA (#1561)
* remove memory copy between CUDA and TRT

* add info to RegisterExecutionProvider input

* use new IDeviceAllocator for trt allocator

* remove SetDefaultInputsMemoryType from TRT EP

* remove onnx-tensorrt 5.0

* add submodule onnx-tensorrt branch 5.1

* remove redundancy

* Update transformer_memcpy.cc

* Update tensorrt_execution_provider.cc

* switch to TensorRT 5.1.5.0

* update python binding

* disable failed test case on TensorRT

* Update activation_op_test.cc

* upgrade to TensorRT container 19.06

* update according to feedback

* add comments

* remove tensorrt allocator and use cuda(gpu) allocator

* update onnx-tensorrt submodule

* change ci build cuda directory name
2019-08-08 19:31:39 -07:00
Colin Versteeg
5ee0f185dc Add GRPC support to ONNX Runtime Server (#1144)
* add grpc

* add-submodule

* Revert "add-submodule"

This reverts commit e35994b25035ce310a98909658582bff759ee358.

* fix submodule

* IT BUILDS

* Initial commit of prediction_service_impl.cpp

* Server builds and runs!

* add request id, health and reflection. GRPC is done

* enable channelz for monitoring

* GRPC unit tests

* clang format

* add unit tests

* Add function tests for GRPC

* add grpc to model_zoo_tests

* revert update protobuf to 3.7.0

* update submodules

* builds but runs some gflags tests which fail

* get build working

* confine build changes to onnxruntime_server.cmake

* update build files

* code review comments

* Maik's code review comments

* update cares version to fix compilation issue

* update build to fix c-ares

* code review comments

* update cgmanifest.json

* remove extraneous file

* Klein comments.

* update ci based on discussions for go dependency

* fix tag issue

* fix build issues

* remove stray submodule

* update dockerfile and build script

* dynamic linking changes

* update build script

* code review comments

* update dockerfile

* update script for mount

* code review comments
2019-07-18 11:10:38 -07:00
Colin Versteeg
a8ff209ab6 Refactor Onnx runtime Server to only use public APIs (#1271)
* replace log sinks

* limit headers to include dir

* first changes to do dynamic linking

* wip for using cxx api

* remove weird dangling dependency

* building with tests failing

* finish updating converters

* fix const

* initial introduction of typedef

* change logging to use spdlog

* get tests passing

* clang format

* map logging levels better

* clean up unused imports

* trent cr comments

* clang-format

* code review comments

* changing buffer use to reserve

* Dynamically link

* revert tvm

* update binary uploading

* catch exceptions by const-ref

* Revert "revert tvm"

This reverts commit 387676dd1018134d15eb71fa126f7caf94380800.

* fix typo

* update versioning of lib
2019-07-04 01:08:14 -07:00
KeDengMS
0d204f3f06
Implementation of TVM codegen library (#888)
Description:

This change adds the common part of TVM based codegen library. It includes following parts:
* Microsoft TVM Inventory (MTI): a set of TVM ops for neural networks, similar to TOPI
* Compiler pass for traversing ONNX graph and generate TVM ops
* Compiler pass for traversing generated graph and specify TVM schedule
* Compiler pass for handling weight layout
* Utils for debugging

Motivation and Context:

TVM is an open deep learning compiler stack for cpu, gpu and specialized accelerators. To leverage it in ONNX, we built an execution provider named Nuphar. Currently, Nuphar gets good performance on CPUs with AVX2 on quantized LSTM models.

This codegen library was part of Nuphar execution provider. It is split out for sharing with other execution providers, as we'd like to reuse TVM in more devices.
2019-07-03 10:32:59 -07:00
daquexian
c65489a47f Initial PR for NNAPI execution provider (#1220)
* init

* Update DNNLibrary

* Update DNNLibrary, set compiler flags, it compiles now

* Add more missing flags, add test

* Update DNNLibrary

* Update Compile method, fix allocator and some other bugs

* Update DNNLibrary

* Implement CopyTensor

* Not delete state explicitly since it is managed by unique_ptr

* Add the missing files when SingleUnitTestProjct is ON

* misc changes

* Fix wrong name in provider factory

* Add my own test

* Update the code of add node into graph, and add the missing initializer into graph

* Fix the bug where rebuilding the graph produces extra output

* Update DNNLibrary

* Transpose nchw (ONNX) -> nhwc (NNAPI)

* Add license

* Add GetSupportedNodes method (implement it later)

* Rename onnxruntime_nnapi_test->onnxruntime_nnapi_squeezenet_test

* Update squeezenet_test.cpp after rebase master

* Remove squeezenet_test.cpp since it is almost same with the c++ sample

* Update DNNLibrary for GetSupportedNodes

* Update GetSupportedNodes

* Revert "Remove squeezenet_test.cpp since it is almost same with the c++ sample"

This reverts commit a97575fd9ff49e50ba1dc8d8154790d8cd86c48d.

* Update DNNLibrary

* Fix multiple outputs bug

* Remove GetKernelRegistry

* Revert "Revert "Remove squeezenet_test.cpp since it is almost same with the c++ sample""

This reverts commit 2a0670e9cbf10ea654111ce39e198a4be0ddd838.

* Set default memory type of NNAPI EP

* Add CPUOutput allocator

* Update DNNLibrary for multiple outputs

* Fix bug of nhwc->nchw

* Remove GetExecutionHandle()
2019-07-02 06:03:29 -07:00
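One bullet above, "Transpose nchw (ONNX) -> nhwc (NNAPI)", is the classic layout conversion between ONNX's channels-first and NNAPI's channels-last tensors. A self-contained sketch of the index permutation (illustrative, not the EP's actual code):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Copy an N x C x H x W buffer into N x H x W x C order.
std::vector<float> NchwToNhwc(const std::vector<float>& src,
                              std::size_t n, std::size_t c,
                              std::size_t h, std::size_t w) {
  std::vector<float> dst(src.size());
  for (std::size_t in = 0; in < n; ++in)
    for (std::size_t ic = 0; ic < c; ++ic)
      for (std::size_t ih = 0; ih < h; ++ih)
        for (std::size_t iw = 0; iw < w; ++iw)
          dst[((in * h + ih) * w + iw) * c + ic] =
              src[((in * c + ic) * h + ih) * w + iw];
  return dst;
}
```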
stevenlix
723d5c782a
Improve TensorRT GetCapability to Enable More Models (#1012)
* Improve TensorRT GetCapability Accuracy

* Update onnxruntime_providers.cmake

* made changes based on feedback

* update unit tests for TensorRT

* update onnx-tensorrt submodule to v5.0 branch

* remove unnecessary comments

* convert int32 to int64 at inferencing output

* add more data types in compute

* change returns in compute

* use StatusCode as return in compute
2019-05-24 10:12:55 -07:00
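The bullet "convert int32 to int64 at inferencing output" reflects that TensorRT computes INT32 where the ONNX model declares INT64, so outputs must be widened before being returned. A trivial sketch of that post-processing step (helper name invented):

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// Widen an INT32 output buffer to the INT64 the ONNX model declares.
std::vector<int64_t> WidenOutput(const std::vector<int32_t>& src) {
  std::vector<int64_t> dst(src.size());
  std::transform(src.begin(), src.end(), dst.begin(),
                 [](int32_t v) { return static_cast<int64_t>(v); });
  return dst;
}
```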
Changming Sun
687bac455d Convert eigen to a submodule and update it to the latest version 2019-04-18 21:24:56 -07:00
stevenlix
06888437dd Update onnx-tensorrt submodule to master (#753) 2019-04-02 16:34:00 -07:00
Dmitri Smirnov
0e687a2c90
Implement tokenizer regular expression matching and add tests. (#480)
* Implement tokenizer regular expression matching and add tests.
  Import re2 module.
2019-02-20 15:56:32 -08:00
stevenlix
8ea7197b82 trt (#361)
* updated cmake files for tensorrt
2019-01-23 13:28:13 -08:00
Changming Sun
8cfe8d33a3 Add nsync (#292)
* Add nsync

* nsync2

* nsync3

* fix build

* update comments

* fix build option
2019-01-09 10:40:55 -08:00
Ke Zhang
37b74c771a
add gemmlowp as submodule. (#206) 2018-12-18 13:57:53 -08:00
Pranav Sharma
89618e8f1e Initial bootstrap commit. 2018-11-19 16:48:22 -08:00