Commit graph

9 commits

Author SHA1 Message Date
gwang-msft
7ca8388dc9
[ORT Mobile] file format schema and file I/O code (#4973)
* ort mobile file format schema and [de]serializing code
2020-09-01 11:51:31 +10:00
edgchen1
0ec90f7019
Put safeint_interface include directory into onnxruntime_common interface include directories to simplify usage by other targets. (#3546) 2020-04-16 10:34:32 -07:00
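The commit above relies on CMake's INTERFACE include directories: an include path attached to a target's interface is inherited by every target that links against it. A minimal sketch, with an illustrative path (not the repository's actual layout):

```cmake
# Sketch: expose the SafeInt headers through onnxruntime_common's
# interface, so consumers inherit the include path automatically.
target_include_directories(onnxruntime_common INTERFACE
    ${PROJECT_SOURCE_DIR}/external/SafeInt)

# A consuming target no longer needs its own include_directories() call;
# linking onnxruntime_common is enough to find the SafeInt headers.
target_link_libraries(onnxruntime_session PRIVATE onnxruntime_common)
```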
Scott McKay
a1db87b382
Add SafeInt bounds checking to memory allocation size calculations. (#3022)
* Add SafeInt bounds checking to memory allocation size calculations.

* Fix TensorRT library includes
2020-02-20 11:41:03 -08:00
Dmitri Smirnov
d1b1cdc5c4
Replace GSL with GSL-LITE submodule and fix up refs (#1920)
Remove gsl submodule and replace with a local copy of gsl-lite
  Refactor for onnxruntime::make_unique
  gsl::span size and index are now size_t
  Remove lambda auto argument type detection.
  Remove constexpr from fail_fast in gsl due to Linux not being happy.
  Comment out std::stream support because the MacOS std lib is broken.
  Move make_unique into include/core/common so it is accessible for server builds.
  Relax requirements for onnxruntime/test/providers/cpu/ml/write_scores_test.cc due to x86 build.
  Add ONNXRUNTIME_ROOT to Server Lib includes so gsl is recognized
2019-10-01 12:43:29 -07:00
KeDengMS
0d204f3f06
Implementation of TVM codegen library (#888)
Description:

This change adds the common part of the TVM-based codegen library. It includes the following parts:
* Microsoft TVM Inventory (MTI): a set of TVM ops for neural networks, similar to TOPI
* Compiler pass for traversing the ONNX graph and generating TVM ops
* Compiler pass for traversing the generated graph and specifying the TVM schedule
* Compiler pass for handling weight layout
* Utils for debugging

Motivation and Context:

TVM is an open deep learning compiler stack for CPUs, GPUs, and specialized accelerators. To leverage it in ONNX, we built an execution provider named Nuphar. Currently, Nuphar achieves good performance on CPUs with AVX2 for quantized LSTM models.

This codegen library was part of the Nuphar execution provider. It has been split out for sharing with other execution providers, as we'd like to reuse TVM on more devices.
2019-07-03 10:32:59 -07:00
Maik Riechert
ded7eeb033 make builds more robust (#906) (#932) 2019-04-29 12:58:20 -07:00
Pranav Sharma
5d452b3029
Use protobuf-lite to reduce onnxruntime.dll size. (#639)
* Test protobuf-lite

* Test protobuf-lite

* Test protobuf-lite

* Optimize protobuf usage for LITE_RUNTIME to reduce the binary size of
onnxruntime.dll. More details can be found here: https://developers.google.com/protocol-buffers/docs/proto.
The reduction is significant: for commit id 4873b452151bafe49da332aaeab639ef0318fc1ca28d728, the size
was reduced by ~700K, from 4873728 to 4172800 bytes.

* Add LITE_RUNTIME flag in .proto files

* Fix merge conflict.

* Address PR comments

* Forgot to add 2 files + fix linux and gpu build errors.

* Fix build errors + test failures

* Fix cuda tests

* Fix tensor rt build

* Use full protobuf for trt

* Address PR comments

* Print tensor shape proto as text string for easier debugging
2019-03-21 14:06:38 -07:00
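The LITE_RUNTIME change in that commit boils down to one file-level option per .proto file (the message below is illustrative, not one of onnxruntime's actual protos): with `optimize_for = LITE_RUNTIME`, protoc generates code against libprotobuf-lite, dropping descriptors and reflection to shrink the binary.

```proto
syntax = "proto3";

// Generate against libprotobuf-lite instead of the full runtime.
option optimize_for = LITE_RUNTIME;

// Illustrative message only.
message TensorShape {
  repeated int64 dims = 1;
}
```

One trade-off: lite-generated messages lose descriptor-based features such as full TextFormat support, so debugging helpers like the tensor-shape text printing mentioned in the last bullet may need to be written by hand.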
Changming Sun
5e113661a9 Build system upgrades (#281)
* update

* run as normal user
2019-01-07 13:15:24 -08:00
Pranav Sharma
89618e8f1e Initial bootstrap commit. 2018-11-19 16:48:22 -08:00