Commit graph

360 commits

Author SHA1 Message Date
Faith Xu
b194d79dfb Third party attribution updates (#398)
* Update ThirdPartyNotices.txt

* Update murmur_hash3.cc

* Update normalizer_test.cc

* Update simple_thread_pool.h

* Update simple_thread_pool.h

* Update ThirdPartyNotices.txt
2019-01-28 23:37:02 -08:00
Scott McKay
8f215b44e0
Refactor InferenceSession::Impl::Load code to remove duplication. (#248)
* Add ability to initialize InferenceSession with a model that is already loaded.

* Cleanup some unnecessary namespace qualifications and some long lines.

* Remove InferenceSession::Initialize(std::shared_ptr<Model>&)

* Remove unit test for init from existing Model instance.
2019-01-29 17:18:38 +10:00
Ashwini Khade
b92bc99861
QLinearConv (#370)
* First draft QLinearConv

* Add shape inference for quantized conv operators

* adding test cases for QLinearConv

* plus minor corrections
2019-01-28 23:13:47 -08:00
shahasad
5ef4c90f1d Make the returned NamedOnnxValue objects disposable in the C# API (#392)
* added the DisposableNamedOnnxValue as the result container

* C-API related fixes and tensorproto fix

* addressed some of the review comments
2019-01-28 21:40:19 -08:00
jignparm
571e1e9a6c
Jignparm/updateversion 2.0 (#394)
* Update version to 2.0

* added __init__.py
2019-01-28 21:22:45 -08:00
Changming Sun
7c0a6f3d9c CI: Enable C# tests 2019-01-28 11:40:00 -08:00
jignparm
8e03560dbb
Fix -e option for runtest-docker.sh (#385) 2019-01-27 14:50:10 -08:00
Ryan Hill
d875ab2acd
C API - Remove reference counting (#344) 2019-01-25 19:41:10 -08:00
Changming Sun
6349114583 Revert "Rashuai/link with ltcg (#378)" (#383)
This reverts commit f53cc032db.
2019-01-25 19:00:23 -08:00
shahasad
ab730cc58d
passed the OnnxRuntimeBuildDirectory to the docker for the dotnet linux test (#381)
* passed the OnnxRuntimeBuildDirectory to the docker

* removed the requirement for the docker host to set the env var

* set the env var to the path where the build dir is mounted in the container
2019-01-25 15:52:50 -08:00
jignparm
ccca1e9402
Update property file for Nuget Linux package (#369)
* Copy mkldnn to the output folder for Linux. NuGet doesn't resolve dll dependencies correctly within a package

* Modify to copy all dlls to output folder

* update rpath for shared library

* Simplified linker flags for RPATH

* Removing copying of dlls to output folder, since setting RPATH works fine now
2019-01-25 10:45:39 -08:00
Randy
f53cc032db Rashuai/link with ltcg (#378)
* compile with GL&LTCG

* remove tab

* restrict flag to only relwithdebinfo

* enable all OPT flags for relwithdebinfo
2019-01-24 19:29:05 -08:00
Tang, Cheng
373177ddd3 gemm with empty input (#350)
* gemm with empty input

* check negative

* align the comments
2019-01-24 18:24:32 -08:00
edgchen1
f922d427d3 Improve VerifyKernelDef() performance when op has many inputs/outputs/type constraints. (#347)
* Improve VerifyKernelDef() performance when op has many inputs/outputs/type constraints.

* Added two modes for resolving type binding.

* Updated TypeBindingResolver to avoid heap allocation.

* Tweaked TypeBindingResolver for performance.
2019-01-24 17:34:03 -08:00
Bowen Bao
644c13050b Handle negative axes for reduce ops (#365)
* Handle negative axes for reduce ops

* negative axes are not handled in shape inference if input shape
 is not known at that time.

* nit: use HandleNegativeAxis in provider/common.h
2019-01-24 16:40:54 -08:00
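The axis convention this commit implements can be sketched in plain Python (the helper name mirrors HandleNegativeAxis in provider/common.h, but this sketch is illustrative, not the actual implementation):

```python
def handle_negative_axis(axis: int, rank: int) -> int:
    """Map a negative axis (counting from the end, ONNX-style) to its
    non-negative equivalent: -1 is the last axis, -rank is the first."""
    if not -rank <= axis < rank:
        raise ValueError(f"axis {axis} out of range for rank {rank}")
    return axis + rank if axis < 0 else axis
```

For a rank-4 tensor, `handle_negative_axis(-1, 4)` resolves to axis 3. As the commit notes, this resolution needs the input rank, so it cannot happen during shape inference when the input shape is still unknown.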
shahasad
f94fdad861
Fixes on the dotnet end-to-end test scripts to get it running on linux (#376)
* fixed typo in runtest.sh

* some fixes

* some fixes

* some fixes in the runtest.sh

* added test data url

* fixes on the dotnet test scripts

* fix on prior mistake regarding installation of apt-transport-https

* added verbosity in the test run for easy debugging

* updated comment in the runtest.sh
2019-01-24 13:14:29 -08:00
Dmitri Smirnov
829b2a5e81
Promote TfIdfvectorizer to ONNX ver 9 (#373)
* Advance ONNX commit, move Ngram files under ONNX and rename to TfIdfVectorizer

* Rename Ngram to TfIdfVectorizer and redeclare in ONNX domain

* Restore tfidfvectorizer tests

* Remove ML definition.
2019-01-24 10:11:26 -08:00
Randy
89f643f04b
add new types to shape op (#362)
* add new types to shape op

* add all fixed type support
2019-01-24 09:51:09 -08:00
Pranav Sharma
61bbf4bfcc
Fix redundant population of output_indices_ in the execution frame and reserve memory for it in advance. (#375) 2019-01-23 21:19:31 -08:00
Changming Sun
82b4ec3d42 Fix a bug in KernelRegistry::Register function 2019-01-23 16:27:54 -08:00
Scott McKay
bca8daf762
Update ONNX. Implement Scan 9 changes (#366)
* Update ONNX version to pickup Scan spec change that adds scan_output_axes.
Add logic to transpose an output
  - write to temporary buffer when executing subgraph
  - transpose temporary buffer into Scan output when execution completes
Add unit tests

* Update to ONNX dbf3581835e3a05716e10587511d7ab3b2cdc386 to pickup inferencing bugfix.
Update test to match.

* Disable some tests for opset 9 operators that haven't been implemented yet.
2019-01-24 08:10:39 +10:00
stevenlix
8ea7197b82 trt (#361)
* updated cmake files for tensorrt
2019-01-23 13:28:13 -08:00
Harry Summer
904d7c6ec8 Add --cuda_version option to enable manually specifying cuda version 2019-01-22 20:47:28 -08:00
Bowen Bao
d040b452cb Expand: add additional supported types. (#364) 2019-01-22 19:07:36 -08:00
jignparm
ea816615eb
remove use_tvm from base script. Put it in yaml configuration (#363) 2019-01-22 16:46:38 -08:00
Hector Li
647cc2dced
use gemm to replace matmul + add (#234)
* matmul add fusion

* add shape check on Gemm input C

* work around the issue with RemoveNode

* update the version support

* If MatMul has shape [K] * [K, N], update it to [1, K] * [K, N], so that it can work for Gemm

* Fuse Gemm+Activation into FusedGemm

* test

* revert the change which fused the MatMul with shape [K]*[K, N] to Gemm as shape [1, K]*[K, N]; this may cause runtime failure, as we can't change the input data shape.

* revert the change which changed the shape for MatMul from [K]*[K, N] to [1, K]*[K, N]. It enables fusing MatMul + Add into Gemm, but the data is not aware of this, so the data shape is still [K]*[K, N], which causes a runtime issue.

* 1. Fix build issue for CUDA
2. Update Gemm so that we can fuse Matmul [K] * [K, N] + Add [1, N] into Gemm with shape [1,K] * [K, N] + [1, N]

* Fix build issue

* Fuse the activation node even if it connects to the output

* resolve the merge conflicts

* Add test model for Gemm+Activation fusion
2019-01-22 15:21:55 -08:00
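The fusion this commit describes can be checked numerically in plain Python (the helper names here are hypothetical, not onnxruntime's API): a MatMul of [K] * [K, N] followed by an Add of [1, N] produces the same values as a Gemm over [1, K] * [K, N] + [1, N]. The equivalence holds inside the kernel even though, as the reverts note, the stored input data keeps its original [K] shape.

```python
def matmul_vec(x, W):
    # MatMul of shapes [K] * [K, N] -> [N]
    K, N = len(W), len(W[0])
    return [sum(x[k] * W[k][n] for k in range(K)) for n in range(N)]

def gemm(A, B, C):
    # Gemm of shapes [1, K] * [K, N] + [1, N] -> [1, N]
    K, N = len(B), len(B[0])
    return [[sum(A[0][k] * B[k][n] for k in range(K)) + C[0][n]
             for n in range(N)]]

x = [1.0, 2.0]                      # shape [K], K = 2
W = [[3.0, 4.0], [5.0, 6.0]]        # shape [K, N], N = 2
b = [0.5, -0.5]                     # shape [1, N] bias

unfused = [v + c for v, c in zip(matmul_vec(x, W), b)]
fused = gemm([x], W, [b])           # treat x as [1, K] only inside the kernel
```

Both paths yield the same row of outputs, which is what makes the fusion safe when the reshape stays internal to the Gemm kernel.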
Scott McKay
8b55596dfe
The CUDA compiler doesn't support gsl::suppress so disable when __NVCC__ is defined. (#358) 2019-01-22 17:42:33 +10:00
Changming Sun
c87929e949 Use nsync for implementing condition variable 2019-01-21 22:59:42 -08:00
Du Li
1653ba9fcc
Optimizing Upsample op (#352) 2019-01-18 16:36:00 -08:00
Tracy Sharpe
22337bb641
fix linaro build (#355) 2019-01-18 16:11:53 -08:00
jignparm
0a21226b09 comment out 16-bit float models in C# (#351) 2019-01-18 14:16:53 -08:00
Tracy Sharpe
6f30bec040 Implement MLAS convolution+activation fusion (#354)
* conv+activation fusion
2019-01-18 14:16:28 -08:00
Ke Zhang
6831fc16ed Kezhan/kernel registry refine (#346)
* refactor kernel registry to make it a little bit more readable.

* update

* update cudaexecutionprovider

* fix build break

* fix comments

* fix build break
2019-01-18 09:55:30 -08:00
Changming Sun
948cc03490 upgrade onnx 2019-01-17 13:10:30 -08:00
Changming Sun
21713b7a41 Reduce test parallelism for cuda model tests 2019-01-17 13:10:30 -08:00
Changming Sun
36c62d84b4 remove ConstantLike OP 2019-01-17 13:10:30 -08:00
Scott McKay
9f3ae4279f Handle copy to/from non-CPU devices across control flow nodes (#339) 2019-01-17 10:51:23 -08:00
Changming Sun
c2704b5afb cleanup code (#343) 2019-01-16 17:12:22 -08:00
jignparm
b3f0d0b659
added unit test to guard against native API changes (#337)
* added unit test to guard against native API changes

* Removed cuda and mkldnn from API checks

* Updated per some code comments
2019-01-16 16:53:06 -08:00
Hector Li
790cda6ea7
Fix the issue which causes wrong output. (#342)
Root cause:
cudaStreamWaitEvent is used after copying data from GPU memory to CPU memory, but the following node has CPU code that depends on the data; cudaEventSynchronize should be used instead.
Fix:
Add code in the executor to check the input memory type first; if it wants CPU memory, pass the CPUExecutionProvider type to BeforeUsingAsInput, which will then use cudaEventSynchronize to wait on the write event.
2019-01-16 14:47:18 -08:00
Ashwini Khade
5d0e024284 Askhade/add quantized matmul (#295)
* Quantized Matmul Operators

* fix type inference after master merge

* bug fix for linux

* Plus review comments

* fix a check

* fix build error
2019-01-16 13:36:25 -08:00
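As a rough illustration of the arithmetic behind quantized MatMul operators like these (a sketch of the usual affine quantization scheme, not the operator's actual implementation; all names are hypothetical):

```python
def quantize(x, scale, zero_point):
    """Affine-quantize a float to uint8: q = round(x / scale) + zero_point."""
    q = round(x / scale) + zero_point
    return max(0, min(255, q))

def qdot(qa, qb, za, zb, sa, sb):
    """Quantized dot product: subtract zero points, accumulate in integers,
    then rescale the accumulator back to float by the product of scales."""
    acc = sum((a - za) * (b - zb) for a, b in zip(qa, qb))
    return acc * (sa * sb)
```

With scales chosen to cover the inputs' ranges, the rescaled integer accumulation approximates the float dot product, e.g. `qdot` over the quantized forms of [1.0, 2.0] and [0.4, -0.4] recovers roughly 1.0*0.4 + 2.0*(-0.4) = -0.4.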
Changming Sun
34afa0a598 Delete onnxruntime_exec 2019-01-16 11:18:44 -08:00
Changming Sun
d23f01dcd9 Suppress warnings for gemmlowp 2019-01-15 22:29:30 -08:00
Ashwin Kumar
95b8941e9d
Fix seg fault when repeats input contains a 0 (#336)
* Fix seg fault when repeats input contains a 0

* refine
2019-01-15 21:34:04 -08:00
Scott McKay
f678f58750
Revert to ignoring optional subgraph inputs (#306)
* Revert to ignoring optional subgraph inputs due to abandoning PR 216. Restores previous behaviour that changed a couple of days ago with the Scan v9 checkin.

* Update to allow either all inputs, or just required inputs to be provided for the subgraph.

* Update IterateSequence to prefer all inputs over required inputs.
2019-01-16 11:58:19 +10:00
Changming Sun
6225d5fe1e
Update test data (#334)
* update test data
2019-01-15 17:01:46 -08:00
Ashwin Kumar
492d9fd6cc
Use Eigen ThreadPool in OnnxRuntime (#323)
* switch to nonblocking threadpool in inference session and sessions state

* switch to eigen threadpool - first draft

* refine

* refine

* add a switch to easily revert back to windows thread pool

* switch thread pool in test runner and turn on leak checker

* remove unnecessary files

* fix build error

* more build fixes

* catch exceptions in parallel executor

* fix mac build error

* fix mac build error

* more build fixes

* more mac build fixes

* fix cv issue

* change macro to include the cuda compiler for the disabled compiler warning

* try switching the macro to win32 only

* test #error

* move #disable warning to the top

* Update onnxruntime_framework.cmake

* move eigen include to public scope

* turn off eigenthreadpool by default and add todo comment
2019-01-15 15:19:30 -08:00
Ke Zhang
139abda393
convinteger implementation based on gemmlowp (#294)
* update

* cmake change

* rename

* update

* update

* add cmake

* fix build warnings.

* fix comments

* update cmake to avoid run gemmlowp tests

* update cmake

* update

* fix build break

* update

* fix comments

* fix test failure

* add one more test case with padding.

* fix conv implementation of mkldnn and cuda to use updated computekernelshape function.

* fix linux ci build break
2019-01-15 14:39:50 -08:00
Hector Li
835b511fa8 cuda fix to unblock the tf model tests (#333)
* Check the pads attribute on Conv, and auto fallback to CPU if it's not symmetric padding

* Insert copy nodes after all graph transformers. It causes issues if the cast transformer runs before the memory copy transformer.
2019-01-15 14:05:47 -08:00
Changming Sun
7977871740 Split build pipeline 2019-01-15 12:30:59 -08:00