Commit graph

502 commits

Author SHA1 Message Date
Hariharan Seshadri
43e2ee37f2
Some cosmetic changes (#7741) 2021-05-18 00:02:07 -07:00
Hariharan Seshadri
53d1d55ea8
Add ability for pre-packed weights of shared initializers to be shared across sessions (#7421) 2021-05-14 20:44:42 -07:00
Ashwini Khade
442c7300eb
add opset14 rnn ops (#7687)
* add opset14 rnn ops

* update kernel hashes
2021-05-14 05:52:54 -07:00
Chi Lo
a94a893d5e
Update SessionOptions.cs (#7540)
Fix compile warning
2021-05-04 01:51:35 -07:00
Sheil Kumar
94c4c44bfc
Enable Microsoft.AI.MachineLearning package to work on .NET5 down to 17763 Windows SDK (#7522)
* upgrade cswinrt and downgrade target framework

* fix sdk version references

* cswinrt 1.1.0

Co-authored-by: Sheil Kumar <sheilk@microsoft.com>
2021-05-01 00:56:36 -07:00
Changming Sun
7b003967b1
Add static code analyzer to Windows CPU/GPU CI builds and fix the warnings (#7489) 2021-04-29 11:54:57 -07:00
Chi Lo
0dbe51b002
Enable TRT EP for C# (#7482)
* enabled TRT EP for C#

* Fix potential leak
2021-04-29 04:56:40 -07:00
Ashwini Khade
75e054cd33
pick onnx release candidate (#7177)
* pick onnx release candidate

* fix typo

* filter batchnorm tests

* add implementation for reshape 14

* add identity op kernel for opset 14

* fix typo

* update onnx commit

* update commit to latest master

* add hashes for new kernel registrations and update 1

* TEST commit

* update onnx back to right commit

* Update onnx to latest in rel-1.9.0

* temp fix

* remove nonzeroshapesetter transformer

* pick rel branch latest commit

* fix build failures

* fix build failures

* fix build failures

* update the commit to latest in release branch

* add test filters for not implemented opset 14 ops in c# tests

* plus review comments
2021-04-22 23:57:09 -07:00
Brian Popow
1bbe538379 Update references 2021-04-21 13:36:10 -07:00
Brian Popow
aa1ce726aa Remove unnecessary encoding step 2021-04-21 13:36:10 -07:00
Changming Sun
b4cfa88bf7
Update protobuf to the latest version (#7396) 2021-04-21 10:30:06 -07:00
Sheil Kumar
265db2ad96
Fix Microsoft.AI.MachineLearning .NET5 publishing and C# Store Release build (#7373)
* fix .net publishing

* make experimental api build with microsoft.ai.machinelearning.idl import

Co-authored-by: Sheil Kumar <sheilk@microsoft.com>
2021-04-19 15:36:43 -07:00
Ashwini Khade
e7c5dcd572
Fix Zip-Nuget-Java Packaging Pipeline (#7208)
* Ignore test failures due to opset support

* skip identity sequence test

* plus fixes
2021-04-05 10:58:13 -07:00
Ashwini Khade
b22e60bd44
pull onnx latest commit (#7102)
* update onnx commit

* fix test scripts to remove deprecated call

* update filters

* add registration for relu and cumsum ver 14

* add promote trilu to onnx domain

* update onnx-tensorrt submodule

* update flag

* update flag

* update dependencies

* fix android ci failure
2021-03-29 11:00:38 -07:00
Shucai Xiao
c588d5d13a
Add rocm execution provider to provider_list (#6306)
* code changes to add rocm ep to ep_list
2021-03-12 07:51:08 -08:00
Tiago Koji Castro Shibata
fa8d1b44b8
Fix app packaging in UWP (#6804)
* Change msbuild condition for UAP

* update .netcore target as well

* create nuget packages with _native path

* validate path under _native directory for windowsai package

* pep8

* add diagnostic error message

* pep8

* use basename

* lib\uap10.0

* uap10

* build\\uap10.0

* Manually binplace winmds into appx when PackageReference is used.

* always binplace winmd regardless of packagereference since c# should work with packages.config also

* resolve all paths to full paths to avoid some reference warnings

* move winmds out of lib folder to prevent automatic component registration

Co-authored-by: Sheil Kumar <sheilk@microsoft.com>
2021-03-04 11:16:25 -08:00
Changming Sun
0be5475de6
Update packaging pipelines(#6664) 2021-02-17 09:53:36 -08:00
Changming Sun
eefeacd828
Skip running gpt2 model in C# x86 (#6722) 2021-02-17 09:37:16 -08:00
Sheil Kumar
87cb6fd495
Add LearningModelBuilder to WinML Experimental Namespace along with various Audio operators (#6623)
* model building

* fix build

* winml adapter model building api

* model building

* make build

* make build again

* add model building with audio op

* inplace and inorder fft

* add ifft

* works!

* cleanup

* add comments

* switch to iterative rather than recursive and use parallelization

* batched parallelization

* fft->dft

* cleanup

* window functions

* add melweightmatrix op

* updates to make spectrogram test work

* push latest

* add onesided

* cleanup

* Clean up building apis and fix mel

* cleanup

* cleanup

* naive stft

* fix test output

* middle c complete

* 3 tones

* cleanup

* signal def new line

* Add save functionality

* Perf improvements, 10x improvement

* cleanup

* use bitreverse lookup table for performance

* implement constant initializers for tensors

* small changes

* add matmul tests

* merge issues

* support add attribute

* add tests for double data type windowfunctions and minor cleanup

* stft onesided/and not tests

* cleanup

* cleanup

* clean up

* cleanup

* remove threading attribute

* forward declare orttypeinfo

* warnings

* fwd declare

* fix warnings

* 1 more warning

* remove saving to e drive...

* cleanup and fix stft test

* add opset picker

* small additions

* add onnxruntime tests

* add signed/unsigned

* fix warning

* fix warning

* finish onnxruntime tests

* make windows namespace build succeed

* add experimental flag

* add experimental api into nuget package

* add experimental api build flag and add to windows ai nuget package

* turn experimental for tests

* add minimum opset version to new experimental domain

* api cleanup

* disable ms experimental ops test when --ms_experimental is not enabled

* add macro behind flag

* remove unused x

* pr feedback

Co-authored-by: Sheil Kumar <sheilk@microsoft.com>
2021-02-12 14:17:10 -08:00
Changming Sun
8378a45ae7
Add python 3.8/3.9 support for Windows GPU and Linux ARM64 (#6615)
Add python 3.8/3.9 support for Windows GPU and Linux ARM64

Delete jemalloc from cgmanifest.json.

Add onnx node test to Nuphar pipeline.

Change $ANDROID_HOME/ndk-bundle to $ANDROID_NDK_HOME. The latter one is more accurate.

Delete Java GPU packaging pipeline

Remove test data download step in the Nuget Mac OS pipeline. Because these machines are outside our control and outside our network, it's hard to keep the step reliable and the data secure.

Fix a doc problem in c-api-artifacts-package-and-publish-steps-windows.yml. It shouldn't copy C_API.md, because the file has been moved into a different branch.

Delete the CI build docker file for Ubuntu cuda 9.x and Ubuntu x86 32 bits

And, due to some internal restrictions, I need to rename some of the agent pools
2021-02-11 16:43:35 -08:00
Changming Sun
0b89f931d0
Update CUDA build configs (#6598)
1. Fix Nuget package build break caused by #6225
2. Delete Dockerfile.centos_gpu. It is not used anywhere.
3. Fix Linux CUDA 10.2 build error caused by glibc upgrade
2021-02-08 22:55:42 -08:00
Nat Kershaw (MSFT)
af9dfa7a4d
Remove docs that have been migrated to https://onnxruntime.ai/docs (#6225) 2021-02-05 18:09:27 -08:00
Dmitri Smirnov
dda5a62072
Fix updated Doxygen errors. (#6588) 2021-02-05 18:07:03 -08:00
Changming Sun
91b19b8364
Delete nuget extra configs (#6477) 2021-01-27 20:25:45 -08:00
Hariharan Seshadri
33f60a06d5
Dont use default string marshalling in C# (#6219) 2021-01-20 17:44:36 -08:00
Changming Sun
5084ce0969
Update nuget build (#6297)
1. Update the ProtoSrc path. The old one is not used anymore.
2. Regenerate OnnxMl.cs
3. Delete some unused code in tools/ci_build/build.py
4. Avoid set intra_op_param.thread_pool_size in ModelTests in OpenMP build.
5. Fix a typo in the C API pipeline.
2021-01-11 10:49:05 -08:00
Edward Chen
d761571afc
Deprecate Python global configuration functions [Part 2] (#6171)
Update Python API to allow more flexibility for setting providers and provider options.

The providers argument (InferenceSession/TrainingSession constructors, InferenceSession.set_providers()) now also accepts a tuple of (name, options dict).
Fix get_available_providers() API (and the corresponding function in the C API) to return the providers in default priority order. Now it can be used as a starting point for the providers argument and maintain the default priority order.
Convert some usages of the deprecated global configuration functions to use EP-specific options instead.

Update some EP-specific option parsing to fail on unknown options.

Other clean up.
2021-01-07 10:10:55 -08:00
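The commit above changes the Python `providers` argument so each entry can be either an EP name or a `(name, options dict)` tuple. A minimal sketch of how such an argument could be normalized into parallel name/options lists — plain Python, not the actual onnxruntime code; `normalize_providers` is a hypothetical helper:

```python
def normalize_providers(providers):
    """Accept each entry as an EP name or a (name, options_dict) tuple
    and return parallel name/options lists."""
    names, options = [], []
    for entry in providers:
        name, opts = entry if isinstance(entry, tuple) else (entry, {})
        if not isinstance(opts, dict):
            raise ValueError(f"provider options for {name!r} must be a dict")
        names.append(name)
        # EP-specific options travel as string key/value pairs
        options.append({k: str(v) for k, v in opts.items()})
    return names, options

names, opts = normalize_providers([
    ("CUDAExecutionProvider", {"device_id": 0}),
    "CPUExecutionProvider",
])
```

Keeping entries ordered is what lets the default-priority list from get_available_providers() be used directly as a starting point.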
Hariharan Seshadri
d42399e1b0
Allow querying a GraphProto's doc_string as part of ModelMetadata (#6248) 2021-01-05 22:18:03 -08:00
Sheil Kumar
a6a23db130
Enable C# .NET5 for WinML (#6120)
* build for .net5

* only reference cswinrt for .net5

* remove netstandard2.0 references

* upgrade language version

* net5

* remove extra comment closure

* add targetframework

* set target framework

* remove net*

* pep8 errors

* make test project build with .net windows SDK projection

* disable c# builds for non-x64 builds

* fix pep8 errors

* disable for store build

* fix tests

* remove cswinrt and sdk references from package

* bump cswinrt down to 1.0.1

* fix bin path

Co-authored-by: Sheil Kumar <sheilk@microsoft.com>
2020-12-14 15:05:15 -08:00
Edward Chen
6d642a3dba
Replace direct pulls from image cache container registry with get_docker_image.py, build definition clean up. (#5906) 2020-12-01 19:10:23 -08:00
Chun-Wei Chen
c63e8cf7d7
Remove chronological starttime assertion in InferenceTest.cs because it is not deterministic (#5976)
* remove chronological starttime assertion because it is not deterministic

* use different vars
2020-12-01 15:58:12 -08:00
Changming Sun
2d9dcc4576
Add python 3.9 support (#5874)
1. Add python 3.9 support(except Linux ARM)
2. Add Windows GPU python 3.8 to our packaging pipeline.
2020-11-30 12:02:48 -08:00
Dmitri Smirnov
c4b55d29fe
Fix publishing pipelines. (#5942)
Fix publishing pipelines.
2020-11-25 16:23:08 -08:00
Dmitri Smirnov
c2d610066a
C#: Add CreateFromMemory to FixedBufferOnnxValue to allow binding user buffers and passing custom binary compatible types (#5886)
Add CreateFromMemory to FixedBufferOnnxValue so users can bind their own custom binary compatible buffers to feed/fetch data.
2020-11-24 14:10:14 -08:00
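The commit above lets users bind their own binary-compatible buffers instead of copying data into runtime-owned tensors. A minimal sketch of the underlying idea in plain Python (not the actual onnxruntime implementation): one caller-owned byte buffer is reinterpreted in place as typed elements, so writes through the typed view land directly in the original memory.

```python
import struct

# One user-owned buffer, reinterpreted as four float32 elements in place --
# the idea behind binding a "binary compatible" buffer rather than copying.
data = bytearray(16)
view = memoryview(data).cast("f")   # no copy: writes go straight to `data`
view[0] = 1.5

# The raw bytes now hold the IEEE-754 encoding of 1.5 at offset 0.
(first,) = struct.unpack_from("<f", data, 0)
```

The same zero-copy reinterpretation is what a fixed-buffer OnnxValue enables for feed/fetch data in C#.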
Hariharan Seshadri
d46dbeafd3
Expose knobs to create and share (CPU) allocators across sessions in C# and Python (#5634) 2020-11-21 14:12:33 -08:00
Dmitri Smirnov
ceedf5630b
Document all C# API public interfaces (#5853)
Address documentation shortcomings.
 Document all required public interfaces.
 Add pipeline configuration.
Make Doxygen look up env vars for paths.
2020-11-20 14:03:55 -08:00
S. Manohar Karlapalem
ff58f621fa
Remove nGraph Execution Provider (#5858)
* Remove nGraph Execution Provider

Pursuant to nGraph deprecation notice: https://github.com/microsoft/onnxruntime/blob/master/docs/execution_providers/nGraph-ExecutionProvider.md#deprecation-notice

**Deprecation Notice**

| | |
| --- | --- |
| Deprecation Begins | June 1, 2020 |
| Removal Date | December 1, 2020 |

Starting with the OpenVINO™ toolkit 2020.2 release, all of the features
previously available through nGraph have been merged into the OpenVINO™
toolkit. As a result, all the features previously available through
ONNX RT Execution Provider for nGraph have been merged with ONNX RT
Execution Provider for OpenVINO™ toolkit.

Therefore, ONNX RT Execution Provider for **nGraph** will be deprecated
starting June 1, 2020 and will be completely removed on December 1,
2020. Users are recommended to migrate to the ONNX RT Execution Provider
for OpenVINO™ toolkit as the unified solution for all AI inferencing on
Intel® hardware.

* Remove nGraph Licence info from ThirdPartyNotices.txt

* Use simple Test.Run() for tests without EP exclusions

To be consistent with rest of test code.

* Remove nGraph EP functions from Java code
2020-11-19 16:47:55 -08:00
Guoyu Wang
261462be0d
Change NNAPI runtime options to use uint32_t (#5863)
* Change nnapi options unsigned long -> uint32_t

* Move options from long to int in java code
2020-11-19 13:38:49 -08:00
Changming Sun
85f945a875
Regenerate CI build docker images (#5850) 2020-11-18 14:36:59 -08:00
Justin Stoecker
bd236ecc26
Switch to unified DirectML 1.4.0 redistributable (#5794)
Transitions from the ORT-only DML NuGet (hosted on the onnxruntime_public feed) to the new unified DirectML NuGet (Microsoft.AI.DirectML) on nuget.org. In addition, the Microsoft.AI.MachineLearning (WinML) and Microsoft.ML.OnnxRuntime.DirectML packages now take a dependency on the Microsoft.AI.DirectML package. This means we can remove the extra copy of DML binaries in these packages since they will be installed by the DML package.
2020-11-17 13:42:23 -08:00
Dmitri Smirnov
2a6c73cf8c
Address publishing pipelines failures. (#5806)
* Address pipelines failures.

* Address one more fp16 model failure.
2020-11-16 10:19:19 -08:00
Dmitri Smirnov
2f35e65135
Add Float16 and BFloat16 support to C# API (#5775)
Add Float16 and BFloat16 support.
2020-11-12 17:57:08 -08:00
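The commit above adds Float16 and BFloat16 to the C# API. As a quick refresher on what distinguishes the two formats, here is an illustrative sketch (in Python, for brevity) of the bfloat16 layout: bfloat16 is simply the top 16 bits of an IEEE-754 float32, keeping the full 8-bit exponent but only 7 mantissa bits. This truncating version is a simplification; real conversions typically round-to-nearest-even.

```python
import struct

def float32_to_bfloat16_bits(x: float) -> int:
    """bfloat16 keeps the top 16 bits of a float32 (same exponent range,
    7 mantissa bits); this sketch truncates without rounding."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return bits >> 16

def bfloat16_bits_to_float32(b: int) -> float:
    """Widen bfloat16 bits back to float32 by zero-filling the low 16 bits."""
    (x,) = struct.unpack("<f", struct.pack("<I", (b & 0xFFFF) << 16))
    return x
```

Values like 1.0 (float32 bits 0x3F800000) round-trip exactly as bfloat16 0x3F80.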
Dmitri Smirnov
871af477d7
Fix outputs of Sequences and Maps exposure. (#5743)
Fix outputs of Sequences and Maps exposure.
  Add more test conditions.
  Make sure RunWithBinding calls the right function.
2020-11-11 10:21:22 -08:00
Changming Sun
00b18d9dc5 Update InferenceTest.cs to exclude one more model in x86 mode 2020-11-10 09:02:43 -08:00
Tiago Koji Castro Shibata
9e68e98423
Add static CRT DLLs to Nuget package (#5661)
* Add static runtime yaml option

* Add to WAI Nuget build matrix

* Support empty build flags

* Add DML to x64

* Bundle static rt

* Bundle after Nugets are built

* Fix typo

* Skip static tests

* Pack test artifact only in x64 dynamic

* No DML static runtime

* Add Store static

* Revert "Add Store static"

This reverts commit 69133e5838.

* Static subfolder
2020-11-05 09:26:17 -08:00
Changming Sun
0b9f7bb1b0 Update InferenceTest.cs 2020-11-04 11:39:49 -08:00
Guoyu Wang
a2b551ff08
Add runtime options for NNAPI EP (#5576)
* Add options for nnapi ep

* Add nnapi flags test

* add comments

* Add flag comments

* Make the flags bitset const

* Fix build break

* Add stub changes to java and c# api

* Fix java related build break

* Fix java build break

* Switch to bit flags instead of bitset
2020-11-04 10:08:43 -08:00
Changming Sun
4936e10e22
Disable some model tests (#5664)
These are new models added by the WinML team, but some of our EPs can't pass some of the tests.
2020-11-02 22:01:35 -08:00
Ashwini Khade
1cca903680
update onnx commit id (#5594)
* update onnx commit id

* update onnx commit for docker images

* update docker images
2020-11-02 09:46:36 -08:00
Hariharan Seshadri
7a80a4b526
Support more C# APIs (#5608) 2020-10-30 19:19:50 -07:00
Hariharan Seshadri
4291c57322
[C# and Python APIs] Expose knobs to enable/disable platform telemetry collection (#5481) 2020-10-21 10:32:13 -07:00
Ashwini Khade
df22611026
Update ONNX commit (#5487)
* update ONNX

* update onnx + register kernels for reduction ops

* bug fix kernel reg

* update cgmanifests

* revert unsqueeze op 13 registration

* filter ops which are not implemented yet

* filter some tests

* update onnx commit to include conv transpose bug fix

* update docker images

* undo not required test changes

* fix test failures
2020-10-21 07:22:20 -07:00
Chun-Wei Chen
2b6b3a2ee6
Add GetProfilingStartTimeNs() to Python/C# APIs (#5280)
* add Python API for getProfilingStartTime

* debug for using Python API

* add in C# api

* use uint instead of uint64_t to prevent warning

* typo for GetProfilingStartTimeNs

* remove const

* Update onnxruntime/python/session.py

Co-authored-by: Pranav Sharma <emailpranav@gmail.com>

* remove unnecessary return

* Add Python unit test

* Add C# unit test and refactor Python test

* use ulong in C# for uint64_t in C++

* remove time.monotonic_ns

* syntax: remove public for inner function

* correct the API's order

* getprofilingstarttime after run

* Correct the right order in NativeMethod.cs

* update order

* nit: remove spaces

* Update csharp/src/Microsoft.ML.OnnxRuntime/InferenceSession.cs

Co-authored-by: Guoyu Wang <62914304+gwang-msft@users.noreply.github.com>

* use the updated function

* add comment about the precision

* add more comments

* add session.py back

* fix flake8

* remove session.py

* Add comments in C, C#, Python APIs about precision

Co-authored-by: Pranav Sharma <emailpranav@gmail.com>
Co-authored-by: Guoyu Wang <62914304+gwang-msft@users.noreply.github.com>
2020-10-14 05:32:43 -07:00
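The GetProfilingStartTimeNs() commit above returns a profiling start timestamp in nanoseconds on a monotonic clock (with the caveats about precision noted in the commit). A hedged sketch of how such a timestamp would be consumed, using Python's stdlib monotonic clock; `elapsed_ms_since` is a hypothetical helper, not an onnxruntime API:

```python
import time

def elapsed_ms_since(start_ns: int) -> float:
    """Milliseconds elapsed since a monotonic-clock timestamp in
    nanoseconds, the unit a profiling-start-time API reports."""
    return (time.monotonic_ns() - start_ns) / 1_000_000

start_ns = time.monotonic_ns()
# ... the work being profiled would run here ...
elapsed = elapsed_ms_since(start_ns)
```

Because the clock is monotonic, the difference is safe against wall-clock adjustments but is only meaningful within one process run.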
S. Manohar Karlapalem
6e6147fb75
Use correct protoc tool file name for C# builds (#5429)
In Linux builds, the protoc tool is simply named 'protoc' (without
the .exe extension).
2020-10-13 09:43:03 -07:00
Hariharan Seshadri
b9f90e297e
Support sharing of initializers between session via the Python API (#5407) 2020-10-09 20:26:28 -07:00
Tianlei Wu
8133223871
clear cudaDelayLoadedLibs since delayload is disabled (#5386) 2020-10-07 11:33:12 -07:00
Sunghoon
1612934f72
Allow protobuf format of input data for performance test (#5323)
* Allow protobuf format of input data like onnxruntime_perf_tool

* Add OnnxML.cs to fix build failure
2020-10-01 21:40:29 -07:00
Changming Sun
17f1178c2e
Downgrade GCC (#5269)
Co-authored-by: Edward Chen <18449977+edgchen1@users.noreply.github.com>
2020-09-24 21:14:54 -07:00
Pranav Sharma
974b9bfc09
Allow sharing of initializers between sessions. (#5092)
* Allow sharing of initializers between sessions.

* Allow sharing of initializers between sessions (2).

* Add test for C#

* Add test for C#; address PR comments

* Address PR comments
Moved AddInitializer logic to internal session options
Added tests for owned buffer
Clarified documentation
Fix bug where memory info and not device was getting compared

* Fix test

* Fix training build

* Add ver 5 end marker and ver 6 starter, add scenario and usage examples.
2020-09-21 14:09:37 -07:00
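The commit above moves AddInitializer logic into session options so sessions can share initializers. A toy Python model of the idea (hypothetical names, not the real implementation): the options object stores references to caller-owned weight buffers, so every session built from those options reuses one copy instead of materializing its own.

```python
class SessionOptionsSketch:
    """Toy model of sharing initializers: options hold references to
    caller-owned buffers; sessions reuse them instead of copying."""
    def __init__(self):
        self.shared_initializers = {}

    def add_initializer(self, name, buffer):
        # Caller keeps ownership; only a reference is stored.
        self.shared_initializers[name] = buffer

weights = bytearray(1024)           # stand-in for a large tensor buffer
opts = SessionOptionsSketch()
opts.add_initializer("fc1.weight", weights)

# Two "sessions" created from the same options see the same buffer object.
session_a = dict(opts.shared_initializers)
session_b = dict(opts.shared_initializers)
```

The caller keeping ownership is also why the commit stresses comparing memory info rather than device: the buffer's lifetime and placement are the user's responsibility.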
Tiago Koji Castro Shibata
1a2e289d2d
Fix nuget build (#5163)
* Fix nuget content

* Revert "Fix nuget content"

This reverts commit e2cdcec4e39964c50eac2fb306c7a4bb84352443.

* Nuget packaging

* skip tests

* msbuild path

* Force msbuild version

* Workaround https://github.com/NuGet/Home/issues/7621

* cleanup
2020-09-16 10:37:09 -07:00
Changming Sun
a0a435abc6
Add sympy==1.1.1 to Linux docker image (#5177) 2020-09-15 16:08:49 -07:00
Sheil Kumar
c0d7c8bc44
Add docs indicating that the onnxruntime engine from other distributions can be compatible with the WinRT NuGet (#5009)
* add docs for mix and matching

* typos

Co-authored-by: Sheil Kumar <sheilk@microsoft.com>
2020-09-14 21:15:51 -07:00
Changming Sun
c5efb0085d
Update Linux GPU build pipelines to CUDA 10.2 (#5120)
* Update Linux GPU build pipelines to CUDA 10.2
2020-09-10 17:40:51 -07:00
Hariharan Seshadri
782ccff207
Add dll probe path so that the right DirectML.dll is loaded while running C# tests (#5104) 2020-09-10 16:19:21 -07:00
Changming Sun
47554a0422
Disable some tests (#5103) 2020-09-10 08:15:18 -07:00
Tiago Koji Castro Shibata
62848c4de5
Add store builds to nuget packaging (#5040)
* Nuget store packaging

* Move DNNL workaround to EP

* Fix warning as error

* Disable store tests

* Skip store tests

* msbuild target

* Cross compile protoc in Store

* Disable DML in store

* Move store builds to CPU queue

* Copy uap10 to final nuget

* Fix pip8 error

* Remove extra dml copies

* Fix argparse

* pep8

* Forward IsStoreBuild

* Apply is_store_build to duplicate generate_nuspec

* runtimes

* Refactor uap10

* Store .NET

* uap

* PR feedback
2020-09-09 21:38:14 -07:00
Hariharan Seshadri
61151af321
Fix typo in DML native method call from the C# API (#5083) 2020-09-09 14:47:50 -07:00
Changming Sun
924ecb0623
Use manylinux2014 for Linux CPU build (#5091) 2020-09-09 10:09:52 -07:00
Scott McKay
36dc057913
Add unit test for C# setting of session options config entry. (#5073)
Make error message slightly more user friendly.
2020-09-07 20:15:33 +10:00
gwang-msft
5d60d57ce2
Add csharp API for AddSessionConfigEntry (#5072)
Co-authored-by: gwang0000 <62914304+gwang0000@users.noreply.github.com>
2020-09-05 21:40:38 +10:00
Xiang Zhang
0dad79b495
Add SetLanguageProjection C Api and use it in four projections (#5023)
* Add SetLanguageProjection C Api and use it in four projections

* static cast enum languageprojection to uint32_t

* resolve comments

* fix typo and line added unintentionally

* revert unnecessary change

* reorder c# api

* add TensorAt and CreateAndRegisterAllocator in Csharp to keep the same order as C apis
2020-09-04 14:26:39 -07:00
Hariharan Seshadri
64d52ae47d
Support creating sessions using DML EP via C# (#4955) 2020-08-29 15:18:50 -07:00
Dmitri Smirnov
2b460eaeca
Revise IDisposable implementation in C# interfaces (#4915)
Revise IDisposable implementation in C# interfaces
2020-08-27 09:17:42 -07:00
Ashwini Khade
0d3bbfdd0f
enable nuget packaging in local builds (#4884)
* enable building nuget packages

* add nuget creation from build.py

* add documentation

* fix flake8 errors

* fix nuget package version

* enable csharp tests

* update csharp tests

* copy nuget packges to nuget-artifacts

* add libmklml_gnu

* plus review updates

* fix references for release builds
2020-08-26 12:33:48 -07:00
Hariharan Seshadri
6c26e52134
Support accessing a model's metadata in C# (#4867)
Implement access to model's metadata in C#
2020-08-25 11:13:49 -07:00
Hariharan Seshadri
26bd8c2085
Support scalar tensors in c# (#4849) 2020-08-25 11:00:33 -07:00
gwang-msft
dee7596724
Add a generic collection of session configurations to the SessionOptions (#4718)
* adding generic configurations for session options

* fix a build break on linux

* fix training ci build break

* fix training ci build break

* addressed CR comments

* fix training ci build break

* move config_key from enum to string

* add c# api

* add python api

* fix build break

* move prepacking from 2 new api entries to session options configs

* fix training ci build break

* add python test, update some comments, move const key definition to avoid build break

* addressed comments

* move definitions of keys to common.h

* move api to version 5

* remove accidental change in build.py

* remove pragma to avoid build break

* addressed CR comments

* fix the python build break, and move location of config keys definition

* small typo changes
2020-08-18 13:40:40 -07:00
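The commit above replaces per-setting APIs with a generic collection of string key/value config entries on the session options (the commit notes config keys moved from an enum to strings). A toy sketch of that shape in Python — `SessionConfigSketch` and its methods are illustrative names, not the real API:

```python
class SessionConfigSketch:
    """Toy model of a generic session-config collection: free-form string
    key/value entries instead of one dedicated API per setting."""
    def __init__(self):
        self._entries = {}

    def add_session_config_entry(self, key: str, value: str) -> None:
        if not key or not isinstance(value, str):
            raise ValueError("config entries are non-empty string key/value pairs")
        self._entries[key] = value

    def get_session_config_entry(self, key: str, default: str = "") -> str:
        return self._entries.get(key, default)

opts = SessionConfigSketch()
opts.add_session_config_entry("session.some_feature", "1")
```

The appeal of this design is that new settings (like the prepacking toggle mentioned in the commit) can be added without growing the API surface.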
Hariharan Seshadri
c878ecbbe0
Sahar/csharp support openvino (refined) (#4835)
* Sahar/csharp support openvino (#4703)

* Temp changes and include openvino to ensure nuget package is created with linux till we configure azure ci pipeline

* string id change

* native nuget indentation changes

* documentation changes

* Update Openvino_execution_provider.md

Documentation includes openvino execution provider

* Update OpenVino-ExecutionProvider.md

update details to build csharp api for openvino execution provider.

* vadm backend revert

* Update Openvino-Execution-Provider.md

updated for review comments

* Update OpenVino-Execution-Provider.md

* Update OpenVINO-ExecutionProvider.md

* nuget package custom support for openvino
change in native nuget spec python script for including linux runtime

* change to make path to boolean flag

* removed the tab

* Update OpenVINO-ExecutionProvider.md

updated for review comments

* changes to fix pep8 warnings
modification to documentation

Co-authored-by: saharfraza <sfatima.3001@gmail.com>
Co-authored-by: sfatimar <sahar.fatima@intel/com>

* Changes to include csharp support for openvino

* Fix flake error

* Fix

Co-authored-by: sfatimar <64512376+sfatimar@users.noreply.github.com>
Co-authored-by: saharfraza <sfatima.3001@gmail.com>
Co-authored-by: sfatimar <sahar.fatima@intel/com>
2020-08-17 21:52:17 -07:00
George Wu
94a6f50af6 Revert "Sahar/csharp support openvino (#4703)"
This reverts commit 0a0ac70eec.
2020-08-17 10:05:21 -07:00
Changming Sun
5eec4f66ed
Refactor manylinux docker image and the related pipelines (#4751)
1. Publish the image to ACR, instead of building it every time for every PR
2. Make USE_MKLML and USE_OPENMP able to co-exist. Currently both are enabled in our Linux CI build, but only one of them actually takes effect.
3. Split nuphar and DNNL into separate pipelines.
4. Fix two warnings in onnxruntime/core/optimizer/matmul_scale_fusion.cc and onnxruntime/test/tvm/tvm_basic_test.cc.
5. Update the manylinux2010_x86_64 image to the latest.
2020-08-17 09:40:31 -07:00
sfatimar
0a0ac70eec
Sahar/csharp support openvino (#4703)
* Temp changes and include openvino to ensure nuget package is created with linux till we configure azure ci pipeline

* string id change

* native nuget indentation changes

* documentation changes

* Update Openvino_execution_provider.md

Documentation includes openvino execution provider

* Update OpenVino-ExecutionProvider.md

update details to build csharp api for openvino execution provider.

* vadm backend revert

* Update Openvino-Execution-Provider.md

updated for review comments

* Update OpenVino-Execution-Provider.md

* Update OpenVINO-ExecutionProvider.md

* nuget package custom support for openvino
change in native nuget spec python script for including linux runtime

* change to make path to boolean flag

* removed the tab

* Update OpenVINO-ExecutionProvider.md

updated for review comments

* changes to fix pep8 warnings
modification to documentation

Co-authored-by: saharfraza <sfatima.3001@gmail.com>
Co-authored-by: sfatimar <sahar.fatima@intel/com>
2020-08-16 17:07:26 -07:00
Marcus Turewicz
ce65275edf
C# samples: Faster R-CNN (#4733)
* C# sample: Faster R-CNN

* Add link to new sample in samples README

* Remove duplicate image
2020-08-13 17:05:01 -07:00
Dmitri Smirnov
3530ce541c
Expose IOBinding features via C/C++/C# language bindings. (#4646)
Expose I/O Binding in C/C++/C#
  Expose OrtAllocator, OrtMemoryAllocation, OrtMemoryInfo and OrtIoBinding
2020-08-10 13:33:49 -07:00
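The commit above exposes I/O binding (OrtIoBinding and friends) across the language bindings. A toy Python sketch of what I/O binding buys you — inputs and outputs are associated with caller-owned buffers once, so repeated runs avoid per-call allocation and copying. `IoBindingSketch` is hypothetical, not the real OrtIoBinding surface:

```python
class IoBindingSketch:
    """Toy I/O binding: pre-associate named inputs/outputs with
    caller-owned buffers; `run` writes results in place."""
    def __init__(self):
        self.inputs, self.outputs = {}, {}

    def bind_input(self, name, buffer):
        self.inputs[name] = buffer

    def bind_output(self, name, buffer):
        self.outputs[name] = buffer

    def run(self, fn):
        # `fn` stands in for inference: it writes directly into the
        # pre-bound output buffers instead of allocating new ones.
        fn(self.inputs, self.outputs)

def double_x(ins, outs):
    outs["y"][:] = [v * 2 for v in ins["x"]]

binding = IoBindingSketch()
binding.bind_input("x", [1.0, 2.0])
out = [0.0, 0.0]
binding.bind_output("y", out)
binding.run(double_x)
```

In the real API the bound buffers can live on a device (hence the accompanying OrtAllocator/OrtMemoryInfo exposure), which is what makes binding worthwhile: data stays where the EP wants it.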
Marcus Turewicz
37c45c3d6b
C# ResNet50 v2 sample/tutorial (#4722)
C# ResNet50 v2 sample
  Update samples README
2020-08-07 13:36:36 -07:00
Yufeng Li
b22091dc91
Add the framework to support prepack (#4413)
* add support of prepack
* add support for QAttention and DynamicQuantizeMatMul
* add an use_prepacking option
* add use_prepacking in c_sharp api
2020-08-07 09:39:19 -07:00
Sheil Kumar
5c5efa900d
Add .NET Core 3.0 nuget e2e pipeline tests (#4695)
* bump cswinrt version

* add cswinrt

* test dotnetcore 3.0

* rename build package source

* set folder path to the package source and not the version

* refactor .netframework tests

* build .net core anycpu

Co-authored-by: Sheil Kumar <sheilk@microsoft.com>
2020-08-05 13:02:24 -07:00
Dmitri Smirnov
bb9b452a88
resolves #3101 - fix nuget package restore for sdk-style projects (#4680)
Co-authored-by: Christof Senn <christof.senn@gmail.com>
2020-08-03 15:27:48 -07:00
Sheil Kumar
ee5ca27ae2
Split Microsoft.AI.MachineLearning.nupkg in a NuGet package and symbol NuGet package (#4503)
* add threadpool interface

* generate snupkgs

* include_pdb check

* fix snupkg generation

* Add task to merge snupkgs

* folder exists

* check dir

* revert thread pool stuff

Co-authored-by: Sheil Kumar <sheilk@microsoft.com>
2020-07-14 14:52:39 -07:00
Sheil Kumar
fdb4a3a2e8
Add cppwinrt and cswinrt tests in windowsai nuget pipeline (#4381)
* build e2e cppwinrt tests

* add use nuget task

* make all referenced to package version prop/target-ified

* remove dupe props/targets reference

* work around project.assets.json error by deleting it

* powershell test invocation

* switch to batch script

* print debug info

* update x86->x64

* stdio.h

* pushd/popd

* add csharp tests

* package.config -> packages.config

* typo

* x86 -> anycpu

* debug is default

* add test path

* update csproj as well

* debug

* really replace all package versions

* debug output

* really use [PackageVersion]

* sleep instead of converting async operation to task and waiting

* dont close software bitmap

* switch to powershell script

* remove binding check

* continue on failure

* continue on error action

* continueOnError and errorActionPreference

* tabbing

Co-authored-by: Sheil Kumar <sheilk@microsoft.com>
2020-07-07 09:36:42 -07:00
Pranav Sharma
2204d39a06
Add build option to disable traditional ML ops from the binary. (#4272)
* Add build option to disable traditional ML ops from the binary.

* Fix python tests by splitting tests for ML ops to a separate file. Exclude ML tests from onnx_test_runner and C# tests. Exclude ML op sources.

* Update Edge pkg pipelines with new MLops env variable and fix C# packaging pipeline tests to skip ML ops.
2020-06-20 06:36:06 -07:00
Yulong Wang
12367a6b11
[C#] enable string-typed FixedBufferOnnxValue in input (#4178) 2020-06-16 11:06:11 -07:00
Sheil Kumar
4377ff4a1a
Enable .NET Core 2.0 and .NET Framework 4.6.1 in Microsoft.AI.MachineLearning NuGet package (#4125)
* add project to download cswinrt and build winrt c# interop dll

* Add to nuget package

* reverse if check

* run generation before core compile

* add generated files to compile

* update .net package to binplace native libs

* add props to .netstandard2.0 folder

* auto binplace ml native binaries

* force 'Any CPU' platform build

* Fix anycpu and platform targets

* fix flake errors

* fix variable order

* fix flake pep8 errors, semicolon

Co-authored-by: Sheil Kumar <sheilk@microsoft.com>
2020-06-09 09:08:19 -07:00
Changming Sun
2ab3a19728
Enlarge the read buffer size in C#/Java test code (#4150)
1. Enlarge the read buffer size further, so that our code can run even faster. TODO: apply similar changes to Python and some other language bindings.
2. Add coreml_VGG16_ImageNet to the test exclusion set of x86_32. It is not a new model but previously we didn't run the test against x86_32.
2020-06-08 16:13:11 -07:00
Changming Sun
08e5f89b37
Fix the nuget gpu pipeline (#4106) 2020-06-01 20:42:15 -07:00
Changming Sun
3eaec57c38
Fix the daily pipeline failures (#4084)
1. Fix the nuget cpu pipeline and put code coverage pipeline back.
2. Reduce onnx_test_runner's default logging level from WARNING to ERROR. Because there are too many log messages now.
3. Enlarge the protobuf read buffer size for onnx_test_runner. It was missed from PR #4020.
2020-06-01 14:44:49 -07:00
Paul Fultz II
7759136610
Add amd migraphx execution provider to onnx runtime (#2929)
* Add amd migraphx execution provider to onnx runtime

* rename MiGraphX to MIGraphX

* remove unnecessary changes in migraphx_execution_provider.cc

* add migraphx EP to tests

* add input requests of the batchnorm operator

* add to support an onnx operator PRelu

* update migrapx dockerfile and removed one unused line

* sync submodules with master branch

* fixed a small bug

* fix various bugs to run msft real models correctly

* some code cleanup

* fix python file format

* fixed a code style issue

* add default provider for migraphx execution provider

Co-authored-by: Shucai Xiao <Shucai.Xiao@amd.com>
2020-05-27 04:24:59 +08:00
Hariharan Seshadri
1168f4e85a
Support session EndProfiling() in the CSharp API (#3934) 2020-05-18 19:47:52 -07:00
Hariharan Seshadri
1a183784a8
Fix C# layer in the way it handles sequences (#3965)
* Fix C# layer in the way it handles sequence of tensors

* Revert comment
2020-05-18 11:10:13 -07:00
Pranav Sharma
47ae9691fd
Fix ordering of APIs. (#3951) 2020-05-14 21:27:46 -07:00
Pranav Sharma
22a711457f
Fix C# log APIs. Also fixes github issue #3409. (#3840)
* Fix C# log APIs. Fixes github issue #3409.

* Fix build error due to accidental duplication of GraphOptimizationLevel

* Fix runoptions

* Fix broken test. Add --blame switch to dotnet test cmd line to print the failed test in case of crash.
2020-05-08 14:31:06 -07:00
Scott McKay
687edd702c
Add RelWithDebInfo target to the C# projects so that it correctly finds the native build. (#3839)
Make the cmake file slightly more consistent for the build c# flag.
2020-05-06 20:01:04 +10:00
Sheil Kumar
43a828f0a2
Add tests for WinRT Projection Raw ABI consumption (#3718)
Add tests for WinRT Projection Raw ABI consumption
Co-authored-by: Sheil Kumar <sheilk@microsoft.com>
2020-05-02 00:33:17 -07:00
edgchen1
e22d97ba56
Merge pull request #3643 from microsoft/ort_training_for_merge_to_master
Introduce ORT training implementation
2020-04-25 07:15:22 -07:00
Sheil Kumar
a475f2824d
Create the Nuget WindowsAI Pipeline (#3684)
* add windowsai.yml for new Microsoft.AI.MachineLearning nuget

* temporarily add windowsai.yml to gpu.yml

* pass in build arch

* remove install onnx task

* no dml for arm or arm64

* refactor nuget pipeline defs

* update package creation

* pass in build and sources path

* missing hyphens

* copy license file

* fix parameter variable

* disable arm builds for now

* remove commented script block

* download pipeline artifact name update

* set working dir

* Add bundling nuget script

* path combine

* null path

* combine needs parentheses

* binplace microsoft.* dlls in new nuget package

* update artifact name

* move merged nuget to artifacts directory

* move to merged subfolder in artifacts staging dir

* forward slash to back

* enable arm

* vcvarsall needs x64 vars setup

* Run Tests

* fix tests

* move global variables

* update yml to not have global variable in template

* removed parameters

* fixes

* Add build arch as an env variable

* ne not neq

* %Var% for batch script

* dont pass argument for x64

* disable arm tests

* skip csharp/cxx tests for microsoft nuget package

* remove test-win as it tests only c# cxx and capi

* test build for store apps

* dont build for store

* tools/nuget/generate_nuspec_for_native_nuget.py

* remove args.

* add new props and targets for microsoft.ai

* make windowsai props/targets static

* add dependency

* dont ship dot net props

* Remove c# from windowsai nuget

* copy license file

* native packages must have win10 as the platform, not win

* cuda header in wrong if branch

* no dml for arm builds

* only build dml for x64/ x86

* User/sheilk/props update (#3616)

* prelim store work

* props

* Fix desktop nuget props/targets

* clean up targets and make store apps work

Co-authored-by: Sheil Kumar <sheilk@microsoft.com>

* update windowsai.yml with latest

* remove extra dloadhelpers

* Add abi headers to abi dir, and reference native includes

* update windowsai.yml

* minor update

* remove parameters

* add doesrp param

* hard code esrp to true

* add directml for x86/x64

* revert gpu yml changes

* add store builds

* add store builds

* add checks again in old way

* dup job names for store and desktop builds

* move all of the runtime binaries to win10 folder

* only set safeseh on x86

* disable the store builds for now... missing msvcprt.lib

* copy paste deletion...

* switch back to win- (#3646)

Co-authored-by: Sheil Kumar <sheilk@microsoft.com>

* use stahlworks

* & not supported in ado

* add cuda to cpu nuget(???) and EnableDelayedExpansion to enable x86 dml package

* revert nocontribops

* add underscore...

* extra win/win10 change

* merged nuget... still not being bundled...

* files in merged directory

* missing parens causing dml to be included in cpu package

* more diagnostic info

* switch dir to get-childitem

* wait for compression to complete

* add winml_adapter to mkml and gpu packages

* enable_wcos

* add mklml binaries

* props and targets missing from mklml

Co-authored-by: Sheil Kumar <sheilk@microsoft.com>
2020-04-24 20:20:04 -07:00
Ethan Tao
e9f1e7e797 resolve conflicts 2020-04-24 15:15:36 -07:00
S. Manohar Karlapalem
6d4f2f5bf9
OpenVINO EP v2.0 (#3585)
* Added FP16 transformations

* Revert "Added CMAKE_BUILD_TYPE to make building dynamic"

This reverts commit d3e17af1af655cfdc4d2fec33f52055caa525e85.

* Added FP16 transformations for FP16 builds

* Backend logic cleanup

Cleans the backend (intel_graph.*) code in the following ways:

1. Minimize global usage: Since all the IR graphs need to be
re-generated on every Infer, it is bad practice to rely on globals
for their saving and usage as there would be multiple readers and
writers to the same global variable leading to incorrect usages or
contentions. This change replaces globals with locals where possible.
This change also fixes an existing bug due to incorrect global usage.

2. Remove all unused functions.

3. Remove all unused headers and preprocessor directives.

* removed commented out code

* Disabled default optimization for Intel EP

Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>

* Fix missed plugins.xml for python bindings

* Fixed the build after latest master changes

Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>

* Disabled unsupported ops for accelerators

Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>

* Added some more disabled ops

Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>

* Added environment variable to enable debugging

Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>

* Added more debug statements

Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>

* Fixed unsupported ops list for GPU and VPU

Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>

* Fixed unsqueeze unit tests

Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>

* Added error message to the status

Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>

* Overwrite Model proto with shape info from data

Overwrites the shape info of Model proto with the shape from
actual input data. Needed for inferring models with Dynamic
shapes.

* Removed print statement and disabled where op

Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>

* Disabled Reshape with Empty initializer

* Added more debug statements for 1P

* Don't allow 1D inputs with symbol for dimension

* Disabled some 3rd phase ops

* Disabled split and added zero dimension check for OutputDefs

* Cleanup zero dimensionality check

* Added different data type check for inputs and initializers

* Added conditions for Mod, Cast and Pad

* Removed unused variable

* Disabled scan and added conditions for squeeze

* Added changes for fixing all C++ unit tests

* Implements Backend Manager class for caching

Backend Manager provides a layer of indirection between EP interface
and OV backend that provides caching services for models with
symbolic dims in input shapes.

* clean up commented blocks

* clang-formatting

* Read I/O type info from ModelProto

Read the tensor element type information from ModelProto object,
as FusedNode is no longer available.

* code cleanup

* clang-formatting

* Added print statement for jenkins

* Disabled some python tests

* Changed the path of convert fp32 to fp16 hpp

* Added conditions for BatchNorm in GetCapability

* Fixed failed tests

* Revert "Added conditions for BatchNorm in GetCapability"

This reverts commit c3c28c3b00d27892c42546b35dacdd807a48ee90.

* Added Intel to onnxruntime backends

* pick up vars set by OV package setupvars.sh

* Added conditions for Identity

* remove a few cout prints

* Added conditions for GPU_FP32 unit tests

* Revert "pick up vars set by OV package setupvars.sh"

This reverts commit 8199e029c03eae21a1a7ef6bfdc93d00e5d0198b.

* Commented out fatal message for protobuf

* Might need to be removed

* Add interface class for current backend

* moved common logic to base class

* simplified cpu backend

* Removed unused headers

* use vectors to save i/o tensors for windows compatibility

* move utils fxns to backend_utils namespace

* rename ov_backend to ibackend

* Factory pattern for backend creation

* rename CPU backend to Basic backend

* renamed to vad-M and added to factory list

* Added conditions for VPU

* Added print statements

* Changed the logic for checking for symbolic shapes

* Modified logic for zero dimension check

* Removed VPU single dimension condition

* Removed comments

* Modified logic in DimensionCheck method

* Remove legacy OpenVINO EP

Remove all the legacy code for OpenVINO EP. UEP code will take its
place going forward.

This change does NOT remove OVEP files in the following areas, as
they will be reused by UEP:
1. Documentation: All .md files
2. Docker related files
3. Python bindings
4. Java bindings
5. C# bindings
6. ORT Server
7. CI pipeline setup files

* Rename Intel EP to OpenVINO EP

* Added unique names to the subgraphs

* Removed subgraphs with only constant inputs

* Modified subgraph partitioning algorithm to remove const input subgraphs

* Apply suggestion to onnxruntime/core/providers/openvino/openvino_execution_provider.cc

* Tracking output names to fix the output order bug

* Changed output names to a unordered map

* Modified logic to check for symbolic input shapes

* Fixed a bug in Reshape check

* Added empty model path to Model constructor

* Made necessary changes to cmake to build from the binary package

* Changed INTEL_CVSDK_DIR to INTEL_OPENVINO_DIR

* Enable dyn device selection with C++ API

* Added Round operator to unsupported list

* Modified subgraph partition logic for MYRIAD

* Removed supported ops from the list

* Enable dyn dev selection in Py API's

* Add documentation for dynamic device selection

* Use MYRIAD || HDDL instead of VPU

* Removed temporary cast of Int64 to FP32

* Disabled unit Tests for CPU_FP32 and GPU_FP32

* Removed default "CPU" from unit tests to allow overriding

* Removed ops Concat, Squeeze, Unsqueeze from unsupported list

* Get the device id from info

* Removed overwriting device_id and precision

* Enabled ConvTranspose and EyeLike

* Reordered unsupported ops in alphabetical order

* Fixed syntax error

* Fixed syntax error

* Code clean-up: Handle exceptions, logs and formatting

Code formatted according to ORT coding guidelines.

* remove debug print from pybind code

* updated docs with ops and models

* formatting prints

* Added default values for c and j for openvino

* Overriding the values set for c and j to be 1
* BACKEND_OPENVINO should be empty if openvino is not in build

* Overriding c value with default for perftest

* fix VAD-M device string bug

* Add IE error details to exceptions

* Use IE specific device names in EP

* Add VAD-F (FPGA) device support

* Removed unnecessary libraries from whl package

* Code changes for Windows compatibility

* Add VAD-F option to python API

* [revert before merge] cmake changes for RC

* Enable Windows build in CMake

* Unset macro OPTIONAL for windows builds

inference_engine.hpp's include chain defines a macro 'OPTIONAL'
which conflicts with onnx project's headers when using MSVC. So we
would need to explicitly unset it for MSVC.

* Use a single copy of plugin/IE::Core

Defined as a static member in Backend manager

* Remove restriction of single subgraphs for  myriad

* Passed subgraph name to Backend to enhance log statements

* Disabled zero dimension conditions

* Disabled concat to remove zero dims

* Enabled building ngraph as part of ORT

* Removed serializing and added versioning

* Fix CPU_FP32 unit tests

* Removed unnecessary condition

* add ngraph.so.0.0 to .whl

* Check for zero dimensions only for inputs and outputs

* Restrict loading only 10 subgraphs on myriad

* Build ngraph.dll within UEP. Doesn't link yet

* Rename Linux included libngraph.so to libovep_ngraph.so

Renames locally built libngraph.so containing ONNX importer to
libovep_ngraph.so in order to avoid linkage conflicts with
libngraph.so supplied by OpenVINO binary installer.
Applies only for Linux builds.

* use output_name cmake properties for lib name

* fix .so name format in lib_name.patch

* CMake code cleanup

* Rename WIN32 included ngraph.dll to ovep_ngraph.dll

To avoid conflict with ngraph.dll distributed by openvino.

* Added myriad config for networks without 4 dimensions

* Loading the 10 max clusters for inference on myriad

* Refactor code and add Batching support

Encapsulate subgraph settings into context structs.

Add batching support for completely supported models.

* Disabled some broken tests

* use input_indexes to avoid batch-checking initializers

* Avoid static initialization order error on WOS

* Added candy to broken tests

* InternalCI changes for 2020.2

* Updated DLDT instructions

* Unsaved changes in install_openvino.sh

* Changes after manual check

* Remove custom ngraph onnx_import build for WOS

ONNX Importer on WOS does not have protobuf issue.

* Remove FP32ToFP16 ngraph pass

This conversion is performed implicitly within IE.

* Surround debug logic by #ifndef NDEBUG

* remove invalid TODO comments

* removed references to ngrpah-ep

* clang-formatting

* remove commented code

* comment edits

* updating copyright year to that of first OpenVINO-EP release

* remove redundant log msg

* Modified operator and topology support

* Update build instructions

* doc formatting

* Fixed clip unit tests

* Revert "Remove FP32ToFP16 ngraph pass"

This reverts commit ec962ca5f315a5658ad980e740196f19de2639c1.

* Applying FP16 transformation only for GPU FP16

* Fixed GPU FP32 python tests

* automatically use full protobuf

* disable onnxrt server for now

* Disabled upsample

* update dockerfile instructions

* Removed MO paths and added ngraph path

* Remove OVEP from ORT Server docs

Will put it back in after validation

* Updated path to Ngraph lib

* Disabled Resize and some other python tests

* Removed unnecessary header files

* Use commit SHA to fetch ngraph repo

* Avoid un-needed file changes due to version update

* Fixed clip tests

* Fixed Pow, max and min onnx tests

* build.md doc typo

* Update cmake patch command for ngraph src

* remove dead cmake code for onnxruntime_USE_OPENVINO_BINARY

* use spaces instead of tab

* remove commented code

* Add info about protobuf version

* edit debug env var and enable for WIN32

* specify only version tag of 2020.2 for dockerbuilds

* remove unnecessary file changes

* Pass empty string as default argument to C# tests

* Use ${OPENVINO_VERSION} to name openvino install directory in CI builds

* Enabled unnecessarily disabled tests

* Fixed ngraph protobuf patch

* Fixed error in protobuf patch

* Revert "Use ${OPENVINO_VERSION} to name openvino install directory in CI builds"

This reverts commit 89e72adb8bf3b9712f5c81c5e13fe68c6c0df002.

* Remove unsetting OPTIONAL macro

This is no longer used in recent ONNX update onnx/onnx@da13be2,
so this unset workaround is no longer necessary.

* Use a null string default argument for C# API

* Set OpenVINO version yml files and pass to CI Docker builds

Git Tag info for DLDT as well as install directory are set
using this value.

This reverts commit 9fa9c20348ed72ae360a95c98e9b074d2f9fafc5.

* Documentation: recommendation and instructions for disabling ORT graph optimizations

* more doc updates

* Reduced the number of models according to CI time constraints

Co-authored-by: ynimmaga <yamini.nimmagadda@intel.com>
Co-authored-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
Co-authored-by: Mikhail Treskin <mikhail.treskin@intel.com>
Co-authored-by: mbencer <mateusz.bencer@intel.com>
Co-authored-by: Aravind <aravindx.gunda@intel.com>
Co-authored-by: suryasidd <48925384+suryasidd@users.noreply.github.com>
2020-04-24 04:06:02 -07:00
Sheil Kumar
470f6e34d0
remove microsoft.ai.machinelearning.dll binplace (#3678)
Co-authored-by: Sheil Kumar <sheilk@microsoft.com>
2020-04-23 23:06:16 -07:00
Edward Chen
daa14b64e3 Merge remote-tracking branch 'origin/master' into edgchen1/merge_from_master 2020-04-21 03:31:32 +00:00
Sheil Kumar
2717c178cc
Fork the WinML APIs into the Microsoft namespace (#3503)
* Migrate winml to Microsoft Namespace (packaging changes are pending)

* add ns_prefix toggle

* fix packaging

* Users/sheilk/add missing raw header (#3484)

* add dualapipartition

* wrong variable for repo root

Co-authored-by: Sheil Kumar <sheilk@microsoft.com>

* remove existence check to force failures

* extra paren

* dualapipartition needs to be referenced from the source

* add microsoft.ai.machinelearning.dll to the output dir

* rename the idl file so that assembly info is correctly added into the winmd

* fix namespaces

* update namespaces

* default to microsoft, and add namespace override as build argument

* update cmakesetings.json as well

* remove from cmakelists.txt

Co-authored-by: Sheil Kumar <sheilk@microsoft.com>
Co-authored-by: Changming Sun <chasun@microsoft.com>
2020-04-17 06:18:54 -07:00
Sheil Kumar
951484ba53
Dualapipartitionattibute.h header is missing in nuget package (#3350)
* add dualapipartition

* wrong variable for repo root

Co-authored-by: Sheil Kumar <sheilk@microsoft.com>
2020-04-16 22:21:57 -07:00
harshitha
80e0c64e2e merged with master 2020-04-16 17:13:36 +00:00
Yulong Wang
cf2fddf760
fix nuget build (#3532) 2020-04-15 10:30:11 -07:00
Yulong Wang
718068f020
update C# API to optimize inference latency (#3171)
* update C# API to optimize inference latency

* rename PinnedOnnxValue to fixedBufferOnnxValue and fix build break

* add more test cases

* add conditions on string tensors for pre-allocated outputs

* change to random inputs

* fix word spell

* resolve comments

* resolve comments

* remove FixedBufferOnnxValueTests.cs

* fix trivial typos in doc
2020-04-08 11:57:40 -07:00
Thiago Crepaldi
759818f2c1 Merge remote-tracking branch 'origin/master' into thiagofc/ort_training_merge_from_master 2020-03-31 10:53:22 -07:00
Cassie
2b10e625f9
added public value variable to NamedOnnxValue (#3347)
Co-authored-by: cassieview <cassie.siljander@microsoft.com>
2020-03-30 10:45:39 -07:00
Hariharan Seshadri
ef7b98f988
Support DisposableNamedOnnxValue inputs in c# Run() (#3175)
* Initial commit

* Update error message

* Update

* Updates to support holding onto onnxValue and pinnedmemoryBuffer

* Updates

* Minor updates

* Comment out a portion of the tests

* PR feedback

* Minor nit update

* Resolve comments

* PR feedback

* PR updates

* PR feedback
2020-03-23 18:36:12 -07:00
Edward Chen
24793f5fc7 Revert change from RelWithDebInfo to Release in OnnxRuntime.CSharp.sln. 2020-03-12 16:51:45 -07:00
Zeeshan Siddiqui
2cad08bd60 Merged PR 5688: Upgrade ONNX submodule to the latest from github ONNX master.
We want to implement SoftmaxCrossentropy and NegativeLossLikelihoodLoss forward training ops for opset-12 but that requires ONNX submodule to point to the latest commit to have the latest and greatest ONNX spec!

- Reverse integrate changes from *.in.proto files in github ONNX repo.
- Regenerate csharp/test/Microsoft.ML.OnnxRuntime.Tests/OnnxMl.cs
- Disable ONNX tests that don't have op implementation for the latest opset.
2020-03-12 16:51:45 -07:00
smk2007
6cdd2b4934
Enable DML Nuget Package for x64 or x86 architectures (#3120)
* add dml gpu pipelines

* add x86 to the gpu dml dev build pipeline

* Enable DML x86 builds

* Fix uint64_t -> size_t warning

* fix warnings

* enable dml on x86 ci builds

* operatorHelper 773 error uint32_t vs uint64_t

* operatorHelper 773 error uint32_t vs uint64_t

* make x86 pipeline use the gpu pool

* more warnings

* fix x86 directml path

* make dml nuget package

* disable tf_pnasnet_large

* disable zfnet512

* make validation use wildcards

* disable x86 dml gpu tests

* add args.

* update gpu.yml

* change nupkg wildcard

* add debug statements

* package x86 dml nupkg

* dont drop managed nuget again from dml pipeline build

* Add DML EULA

* directml license should be renamed to not clobber the existing license

* casing on dml package....

* {} to ()

* fix license name

* disable dml from x86 ci

* typo and cr feedback

* remove featurizers

* ship the dml pdb as well
2020-03-02 20:18:46 -08:00
Hariharan Seshadri
4188b1111a
Add a summary for each ExecutionProviderAppend methods in SessionOptions.cs (#3111)
* Add a summary for each ExecutionProviderAppend methods in SessionOptions.cs

OnnxRuntime managed dll is EP agnostic, meaning it will expose all methods pertaining to all possible EPs supported by OnnxRuntime in general. Not all these methods are really "available" to a .NET developer unless they have the corresponding native onnxruntime shared library. Adding a summary line so that IntelliSense points that out.

* remove empty line
2020-02-28 21:46:57 -08:00
Hariharan Seshadri
86b755774f
Create a separate Nuget hosting just managed assemblies (#3020)
* Initial commit

* More changes

* More changes

* More changes 3

* More changes 4

* More changes 5

* More changes 5

* More changes 6

* More changes 7

* More changes 8

* Remove C# ifdefs

* More changes 10

* More changes 11

* YAML changes for other release pipelines

* Add release notes metadata

* Props and Targets change

* Add CSHarp proj

* More changes 12

* More changes

* Minor fix

* Minor fix

* Fix yaml

* Some missing logic for winml

* Minor update

* Fix casing for winmd file

* Fix casing

* Add targets and props for managed section into native nuget

* revert file

* a
2020-02-27 18:00:17 -08:00
Changming Sun
d72639ef77
Fix CUDA 10.1 DLL names (#3102) 2020-02-27 14:43:16 -08:00
Yufeng Li
f1ba531d9c
Disable test_zfnet512 and test_bvlc_reference_caffenet for x86 in C# tests (#3094) 2020-02-26 14:40:55 -08:00
Hariharan Seshadri
bf7afbef23
Changes in the props file to support .NET + AnyCPU configuration (#3091) 2020-02-25 20:28:36 -08:00
Hariharan Seshadri
d7f2cdcc7e
Fix target platform of managed OnnxRuntime dll and enable x86 .NET testing (#3056)
* WIP: Re-enable x86 .NET testing in Release pipelines

Enabling x86 testing will make sure that ORT packages don't break customers' x86 projects

* Remove setting some env variables

* Comment out a test failing on x86 builds

* More changes

* Minor fix

* More changes

* More changes

* s

* s

* s

* Revert minor change

* More changes

* More changes

* More changes 2

* explicitly set platform target

* Delete bin and obj folders

* Clean output dirs

* Add back TargetFramwork

* Disable x86 .net framework tests

* Skip x86 tests in MKLML pipeline
2020-02-24 23:02:59 -08:00
smk2007
44d5eaf3d7
WinML exists in the nuget packages but does not publish its WinMD and headers (#3037)
* publish winmd and raw headers

* Add the lib too

* add missing conditions

* Fix copy/paste condition error
2020-02-20 10:24:29 -08:00
Changming Sun
69bc8ce3c2 Upgrade protobuf to 3.11.3 2020-02-12 14:47:00 -08:00
Tiago Koji Castro Shibata
fb2182f3fc
Release ARM/ARM64 Nuget packages (#2987)
* Enable ARM64 release builds

* Add ARM release

* Skip C# dll signing in ARM

* Copy ARM binaries to Nuget

* Restore nuget packages before ARM packaging

* wip

* Use host protoc at C# build

* Set ProtocDirectory on cross-compiled builds

* wip

* Fix typo
2020-02-10 16:29:27 -08:00
smk2007
c32cedc6c9
Merge windowsai (winml layering) into master (#2956)
* Initial Commit

* Merged PR 3985217: add onecoreuap_apiset.lib in order to avoid linking against kernel32.lib etc (#2346)

add onecoreuap_apiset.lib in order to avoid linking against kernel32.lib etc and violating our OS layering requirements.

We linked against onecoreuap_apiset.lib in VB so we will continue doing this, but I am still unsure why not to link against onecore instead since that is where we ship. However, since Sheil is the owner of this code we will wait to discuss with him before changing anything.

* Initial changes for layering

* more snipping to get core into ort

* update build instructions to include --build_shared_lib (#2358)

* update build instructions to include --build_shared_lib

* fix line breaks

* Task 23998197: add winml_lib_core into onnnxruntime.dll (#2368)

* Task 23998197: add winml_lib_core into onnnxruntime.dll

* PR feedback
build break on perf_test

* return proper error when the model path isn't found (#2391)

* LearningModelSession is cleaned up to use the adapter, and parts of b… (#2382)

This is a big PR. We are going to move it up to layer_dev, which is still an L3, so we are still safe to do work there.

We are going to move this into the L3 so that Ryan can start doing integration testing.

We will pause for a full code review and integration test results prior to going into the L2.

>>>> raw comments from previous commits >>> 

* LearningModelSession is cleaned up to use the adapter, and parts of binding are.
* moved everything in the winmladapter
made it all nano-COM, using WRL to construct objects on the ORT side.
base interfaces for everything for winml to call
cleaned up a bunch of winml to use the base interfaces.
* more pieces
* GetData across the abi.
* renamed some namespaces
cleaned up OrtValue
cleaned up Tensor
cleaned up custom ops.
everything *but* learningmodel should be clean
* make sure it's building. winml.dll is still a monolith.

* model moved over.
everything builds clean.
step !

* weak ref comment

* Layer dev paulm (#2408)

* model moved over.
everything builds clean.
step !

* weak ref comment

* added a wrapper for RoGetActivationFactory to hook back into winml for creating winml objects.
fixes model load.

* Layer dev paulm (#2414)

* model moved over.
everything builds clean.
step !

* weak ref comment

* added a wrapper for RoGetActivationFactory to hook back into winml for creating winml objects.
fixes model load.

* User/xianz/win ml telemetry (#2410)

* add option to enable winml telemetry

* add option to enable winml telemetry

* clean logs while developing

* clean the log of GUID

* compile onnxruntime_common with winml telemetry

* use option for use_telemetry

* rename option winml_use_telemetry to onnxruntime_use_telemetry

* little change

* fixed some lifetime management.
fixed the debug build.
squeezenet passes using winmlrunner for CPU and GPU

* Layer dev paulm (#2423)

* model moved over.
everything builds clean.
step !

* weak ref comment

* added a wrapper for RoGetActivationFactory to hook back into winml for creating winml objects.
fixes model load.

* fixed some lifetime management.
fixed the debug build.
squeezenet passes using winmlrunner for CPU and GPU

* PR feedback.

* Layer dev paulm (#2424)

* model moved over.
everything builds clean.
step !

* weak ref comment

* added a wrapper for RoGetActivationFactory to hook back into winml for creating winml objects.
fixes model load.

* fixed some lifetime management.
fixed the debug build.
squeezenet passes using winmlrunner for CPU and GPU

* PR feedback.

* couple of fixes and coded getmutabledata()

* Layer dev paulm (#2425)

* model moved over.
everything builds clean.
step !

* weak ref comment

* added a wrapper for RoGetActivationFactory to hook back into winml for creating winml objects.
fixes model load.

* fixed some lifetime management.
fixed the debug build.
squeezenet passes using winmlrunner for CPU and GPU

* PR feedback.

* couple of fixes and coded getmutabledata()

* fixed 2 more heap corruptions

* Layer dev paulm (#2426)

* model moved over.
everything builds clean.
step !

* weak ref comment

* added a wrapper for RoGetActivationFactory to hook back into winml for creating winml objects.
fixes model load.

* fixed some lifetime management.
fixed the debug build.
squeezenet passes using winmlrunner for CPU and GPU

* PR feedback.

* couple of fixes and coded getmutabledata()

* fixed 2 more heap corruptions

* Add opset and IR check when loading model (#2413)

* Add opset and IR check.
* Add test case for future opsets.

https://github.com/microsoft/onnxruntime/issues/2371

* fixed map and sequence when passing stl types across the ABI.
found a leak in nvidia driver, but skipped it.
all winmlapitests pass now

* Moved SessionOptions over to the abi

* WinML CI (#2412)

* Pass flags to build/test WinML in CI

* Add initial CMake config for unit tests in WinML

* Set winml_unittests standard to C++17

* Add WinML API tests and port them to googletest

* Install WinML test collateral

* Add LearningModelSessionAPITests ported to googletest

* Fix WinML test files encoding

* Add GPU tests

* Add parameterized test, skip GPU tests

* Enable precompiled header

* Remove unused code and collateral

* Remove brand images

* Add dllload.cpp

* Remove images not used in API tests

* Add LICENSE.md to image collaterals

* Add models with licenses

* Remove FNS Candy tests

* Add API test models

* Add ModelInSubdirectory

* Install collaterals post-build with copy_if_different, split common lib

* fix warnings

* Link to gtest_main

* Register WinML TraceLogging provider on Onnxruntime.dll (#2455)

* Register WinML TraceLogging provider on Onnxruntime.dll

* Add ifdef to make sure trace logging provider has telemetry option when LAYERING_DONE

* No need for ifdef for TraceLoggingOptionMicrosoftTelemetry

* PR feedback

* Move etw registration into lotus environment constructor and deregister in lotus environment destructor

* Brianma/cpuwinml (#2466)

* allow building winml cpu without dml.

* Brianma/breaks (#2469)

* fix some more breaks

* learning model doesn't need lotusEnvironment and CPU shouldn't include dmlEP headers

* move dml checks out of winml and into the adapter

* better error handling

* Brianma/fi (#2470)

* learning model doesn't need lotusEnvironment and CPU shouldn't include dmlEP headers

* User/xianz/win ml telemetry (#2410)

* add option to enable winml telemetry

* add option to enable winml telemetry

* clean logs while developing

* clean the log of GUID

* compile onnxruntime_common with winml telemetry

* use option for use_telemetry

* rename option winml_use_telemetry to onnxruntime_use_telemetry

* little change

* Add opset and IR check when loading model (#2413)

* Add opset and IR check.
* Add test case for future opsets.

https://github.com/microsoft/onnxruntime/issues/2371

* WinML CI (#2412)

* Pass flags to build/test WinML in CI

* Add initial CMake config for unit tests in WinML

* Set winml_unittests standard to C++17

* Add WinML API tests and port them to googletest

* Install WinML test collateral

* Add LearningModelSessionAPITests ported to googletest

* Fix WinML test files encoding

* Add GPU tests

* Add parameterized test, skip GPU tests

* Enable precompiled header

* Remove unused code and collateral

* Remove brand images

* Add dllload.cpp

* Remove images not used in API tests

* Add LICENSE.md to image collaterals

* Add models with licenses

* Remove FNS Candy tests

* Add API test models

* Add ModelInSubdirectory

* Install collaterals post-build with copy_if_different, split common lib

* fix warnings

* Link to gtest_main

* fix bad merge

* Checking in a staging checkpoint so that Ryan can work with me in parallel

* build break.

* Brianma/testfails (#2473)

* add missing ir version to dictvectorizer-string.onnx

* add missing ir version to relu.onnx

* add missing ir version to zipmap*onnx

* add IR version to manually generated models

* remove an unnecessary ifdef dml

* Brianma/windowsai fi (#2475)

* update dockerfiles/README (#2336)

* Make elementwise op run 4 items per thread (#2335)

Description:
Make elementwise op run 4 items per thread;
unroll the for loop to leverage ILP;
remove the unnecessary N==0 check inside the elementwise GPU kernel.
Motivation and Context:
It can improve the performance of GPU elementwise ops. ~2% performance gain on the popular NLP BERT model.

* Add CUDA GatherElements kernel (#2310)

* Updates

* Update test

* Update

* Updates

* nits

* PR feedback

* Update

* Update

* PR feedback

* PR comments

* Update

* Fix build

* Fix build

* Nits

* Fix

* Layer Normalization Fusion  (#2319)

basic layer normalization transform

* Add FastGelu Cuda Op for Gelu and Add bias fusion (#2293)

* Add FastGelu cuda op

* Add AddBiasGelu for experiment

* Revert "Add AddBiasGelu for experiment"

This reverts commit 5c1ee019858c657e6bb75887265cb85675626e5b.

* Add bias

* Add unit tests

* update comment

* update script

* fix build error

* update coding style

* update for CR feedback
Enable half2 optimization only when cuda arch >= 7.0

* move _Tanh to common.cuh

* implement CPU contrib OP Attention (#2333)

* Remove unused initializer from GraphProto as well as name_to_initial_tensor_ in CleanUnusedInitializers. (#2320)

* Remove unused initializer from GraphProto as well as name_to_initial_tensor_ in CleanUnusedInitializers.

This means initializers that have been replaced during graph optimizations are not left in the GraphProto when we save an optimized model.

* Handle edge case where a model has an unused initializer with matching graph input by also removing the graph input.

* Use non-const iterators in std::find_if calls to make centos build happy.

* Nuget pipeline changes (#2305)

1. Refactor the pipeline, removing some duplicated code
2. Move the Windows_py_GPU_Wheels job to Win-GPU-CUDA10. We'll deprecate the "Win-GPU" pool
3. Delete cpu-nocontribops-esrp-pipeline.yml and cpu-nocontribops-pipeline.yml
4. In Linux nuget jobs, run "make install" before creating the package, so that extra RPATH info is removed

* Cuda Reverse Sequence op, mapping types of the same size using the same template function. (#2281)

* Set ElementType to String type of node metadata, instead of byte[] (#2348)

* Set ElementType to String type of node metadata, instead of byte[]

* Fix spacing

* Introduce PrimitiveType into a Type System along with an integer constant (#2307)

Improve perf by avoiding GetType<T>() calls. Introduce MLTypeCallDispatcher to switch on Input Type. Add Tensor IsType<T>() fast method.

* Fix/test dim value of 0 handling in a couple of places (#2337)

* Update the CUDA Where implementation broadcasting logic to handle a dim with value of 0.
Add unit test
Also add unit test for unary op with dim value of 0
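The zero-sized-dim case being fixed can be illustrated with numpy, whose broadcasting rules match ONNX's here (a sketch of the semantics, not the CUDA kernel itself):

```python
import numpy as np

# A dim of value 0 is legal: it broadcasts like any other dim, and the
# result simply contains zero elements along that axis.
a = np.ones((2, 0, 3))   # zero-sized middle dim
b = np.ones((1, 1, 3))   # broadcasts against it
out = a + b
assert out.shape == (2, 0, 3)
assert out.size == 0
```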

* Exclude ngraph from Where test with 0 dim.

* Openvino EP R3.1 onnxrt server (#2357)

* onnxrt server with OVEP

* onnxrt server with OVEP

* Update Dockerfile.server.openvino

* onnxrt server OVEP fix reviews

* onnxrt server OVEP fix reviews

* Implement cuda nonzero op. (#2056)

Implement cuda nonzero op.

* Directly use a Python numpy array's memory if it is already contiguous. (#2355)

* Directly use a Python numpy array's memory if it is already contiguous. This
can greatly improve performance for sessions with large inputs,
like a big 1920x1080 image fastrcnn model; a 30~40% speedup can be achieved.

* Add test cases enforcing contiguous/non-contiguous numpy arrays as inputs.
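A minimal sketch of the contiguity fast path described above (hypothetical helper name; the real binding code works on the C buffer protocol):

```python
import numpy as np

def as_session_input(arr):
    """Return (array, copied): use the buffer directly when C-contiguous,
    otherwise make a contiguous copy first. (Illustrative only.)"""
    if arr.flags["C_CONTIGUOUS"]:
        return arr, False                     # zero-copy fast path
    return np.ascontiguousarray(arr), True    # copy required

img = np.zeros((1080, 1920, 3), dtype=np.float32)
_, copied = as_session_input(img)
assert copied is False          # freshly created arrays are contiguous
_, copied = as_session_input(img.transpose(1, 0, 2))
assert copied is True           # a transposed view is not
```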

* Add helper to create output to minimize binary size. (#2365)

Add ConstEigenTensorMap typedef so we don't unnecessarily const_cast the const input Tensor.

* fix builds enabling onnxruntime_DEBUG_NODE_INPUTS_OUTPUTS (#2369)

* fix builds enabling onnxruntime_DEBUG_NODE_INPUTS_OUTPUTS

* update

* Add Tracelogging for profiling (#1639)

Enabled only if onnxruntime_ENABLE_INSTRUMENT is ON

* test bidaf with nuphar for avx target (#2370)

increase nuphar test coverage a bit

* Fix a bug in TLS refcount that may have destabilized CUDA CI (#2374)

* update output size calculation for resize (#2366)

* change how output size is calculated for resize op

* add tests for ver 10 resize

* Extend OneHot CPU kernel to support more types (#2311)

* Extend OneHot CPU kernel to support input int64_t, depth int32_t, output float

* Skip BERT before the test data fix is picked up

* Fix bug with Slice. Need to pass in flattened input dimensions so the initial offset into the input is calculated correctly. (#2372)

* Add opset 11 version of Split to CUDA ops (#2376)

Organize the CUDA ops definitions so all the opset 10 and 11 parts are together (same setup used for CPU ops)

* Layer Norm Fusion Fix (#2379)

* layer norm fusion fix

* Add input shape check in code and unit tests

* Fuse Add + Gelu (#2360)

Implement the transformer to fuse add + gelu
Implement the accurate kernel

* Skip layer norm transform (#2350)

* skip layer normalization transformer

* Another try to stabilize CUDA CI (#2383)

The root cause seems to be a failure in CUDA dealloc during teardown. The cudaFree return code was ignored before, so the debug check should be ignored too.

* fix BUILD.md typo (#2375)

build.py: error: argument --config: invalid choice: 'RelWithDebugInfo' (choose from 'Debug', 'MinSizeRel', 'Release', 'RelWithDebInfo')

* Fixed compilation with ngraph (#2388)

* Fix reuse logic in allocation planner. (#2393)

* Fix reuse logic in allocation planner.

* PR comments

* Add helpful comments

* Don't allow reuse across string tensors.

* [NupharEP] Multiple optimizations  (#2380)

Fuse transpose into MatMul
Implement Pow and constant scalar simplification
Vectorize ReduceMean
Improve symbolic shape inference
Minor updates for better debugging in fused function name

* Avoid using the default logger in the graph lib and optimizers (#2361)

1. Use the session logger if it is available.
2. Don't disable warning 4100 globally. We should fix the warnings instead of disabling it.

* Change CUDA implementation of Transpose to support all fixed size tensor types (#2387)

* Change CUDA implementation of Transpose to not use a typed kernel so we can support more types with minimum binary size.
Add support for 8, 16, 32 and 64 bit types.
Add unit tests.
Add method so the implementation can be called directly (will be used by CUDA Scan very soon).

* Disable TensorRT for MLFloat16 and int8 unit tests.

* Address PR comment and add support for calling cublas implementation if type is mlfloat16.

* Add opset 11 versions of the existing CUDA operators that had negative axis support explicitly added. (#2398)

* Add opset 11 versions of the existing CUDA operators that had negative axis support explicitly added.

* [NupharEP] force some low/zero cost ops to be inlined (#2409)

* fix cross compile bug (#2415)

* Minor optimization: if a node has already been placed, there's no need to find a kernel for it. (#2417)

* Add Reshape Fusion (#2395)

* Add reshape fusion

* Add some comments

* update comments

* update comment format

* update according to feedback

* update for recent logger change

* fix build error

* (1) Support both input and output edges in find path in graphutils
(2) Add a test case of only one constant initializer of Concat input.
(3) Refactor ReshapeFusion class to allow add more subgraph fusion in the future.

* fix error

* (1) Loosen the constraint on initializers: non-constant is allowed for reshape fusion.
(2) Change the versions type to vector.
(3) Add logging.
(4) Return false when multiple output edges are matched in FindPath. Add comments.

* only allow one direction (input or output) in FindPath

* [NupharEP] Update notebook and docker image (#2416)

Add BERT squad in Nuphar tutorial
Enhance speed comparison readability

* Fix the issue in matmul_add_fusion (#2407)

Fix the issue in matmul_add_fusion

If MatMul + Add has shapes [K] * [K, N], resetting them to [1, K] * [K, N] makes the output shape [1, N], which also requires a reshape on the output.
Fix: just remove the shape reset and don't fuse in this case.

Add a negative test case for matmul+add fusion
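The rank problem described above can be reproduced with plain numpy semantics (which match ONNX MatMul for rank-1 operands):

```python
import numpy as np

K, N = 4, 3
a = np.ones(K)           # rank-1 input, shape [K]
b = np.ones((K, N))      # shape [K, N]

# MatMul with a rank-1 left operand yields a rank-1 output...
assert (a @ b).shape == (N,)
# ...but resetting the input to [1, K] yields a rank-2 [1, N] output,
# so the fused op would additionally need a Reshape on the output.
assert (a.reshape(1, K) @ b).shape == (1, N)
```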

* feat(treeregressor): Update TreeEnsembleRegressor for type support (#2389)

Updates the `TreeEnsembleRegressor` to allow for `double`, `float`,
`int64`, and `int32` inputs to match the upstream specification.

Signed-off-by: Nick Groszewski <nicholas.groszewski@capitalone.com>

* onnxrt server documentation update (#2396)

* Added support for Pad-2 operator in OpenVINO-EP (#2405)

* Add CUDA If operator. (#2377)

* Add CUDA If operator.
Uses CPU operator for implementation.
By adding a CUDA version the inputs/outputs (with the exception of the 'cond' input) stay on GPU, and no other logic is required to avoid a copy to CPU across the control flow node.

* Improved documentation for onnxruntime::utils::SwapByteOrderCopy(), added precondition check.

* Fix the type constraints on CUDA If operator to exclude strings. (#2431)

* add Im2col<uint8_t> (#2438)

* Adjust codegen vectorization width from target (#2439)

* Adjust codegen vectorization width from target

* Add CUDA Scan operator. (#2403)

* Add Scan CUDA op.
Uses CPU implementation for logic.
Added some device specific functors for handling when data needs to be manipulated on a different device.
Added ability to override the materialization logic in the OrtValue slicer so DML can plugin their handling.

* Fix Windows GPU C API packaging pipeline failure (#2440)

Fix Windows GPU C API packaging pipeline failure (#2440)

* Correctly handle implicit inputs for fused nodes (#2390)

* Correctly handle implicit inputs for fused nodes

Previously, nuphar's partitioning function didn't include a
node's implicit inputs in the inputs list of MetaDef, and hence
a crash was triggered in the onnx graph checker.

This commit fixed the issue. Furthermore, it also fixed a related
issue where we didn't add implicit inputs into
graph_inputs_excluding_initializers_ in Graph::SetGraphInputsOutputs.

The issue was that graph_inputs_including_initializers_ populated by
SetInputs (e.g. called by FunctionImpl::FunctionImpl) may contain
implicit inputs which were not initializers of any node in the graph.
Because they were not part of any initializers, these implicit inputs
couldn't be visited by going through all nodes' inputs.
Consequently, they would *not* be added to graph_inputs_excluding_initializers_.

We fixed the issue by first copying the populated graph_inputs_including_initializers_
into graph_inputs_excluding_initializers_, which then had both initializers and
non-initializers as its initial content. Later, we erase the initializers from the
list. In this way, we can ensure that all implicit inputs remain in
graph_inputs_excluding_initializers_.

* refined comments and fixed duplicates

Address CR by revisiting comments in terms of implicit inputs

Also fixed an issue by skipping duplicates while copying inputs
from graph_inputs_including_initializers_.

* address CR

explain why we need to collect nodes' implicit inputs
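The copy-then-erase fix can be sketched like this (hypothetical names; the real code operates on NodeArg pointers inside Graph::SetGraphInputsOutputs):

```python
def inputs_excluding_initializers(inputs_including_initializers, initializer_names):
    """Copy the full input list (which already contains implicit inputs),
    skipping duplicates, then erase the initializers. Order is preserved,
    and implicit non-initializer inputs survive. (Illustrative sketch.)"""
    seen = set()
    copied = []
    for name in inputs_including_initializers:
        if name not in seen:        # skip duplicates while copying
            seen.add(name)
            copied.append(name)
    # later: erase initializers from the list
    return [name for name in copied if name not in initializer_names]

assert inputs_excluding_initializers(
    ["X", "W", "X", "implicit_in"], {"W"}) == ["X", "implicit_in"]
```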

* don't rely on pointer values for iterating std::set

Previously, openvino relied on iterating a set of NodeArg pointers
to construct inputs and outputs for a fused graph. It could cause
non-determinism. The reason was that although iterating std::set by
itself is stable, pointer values of NodeArgs may vary. Consequently,
we could end up visiting the set's elements in different orders for
different runs for the same test, which resulted in constructing
inputs (and outputs) with different orders to the fused graph.
For example, for the same test, we may have inputs [A, B] in some
runs but inputs[B, A] in others.

Let's use std::string as the key type to avoid such nondeterminism.

This commit also added implicit inputs into meta->inputs while returning
the capability from the openvino provider.
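The nondeterminism can be seen with a toy stand-in: ordering by object identity (analogous to pointer values) depends on allocation, while keying on the name string is stable:

```python
class NodeArg:
    def __init__(self, name):
        self.name = name

def names_ordered_by_identity(args):
    # Analogous to std::set<NodeArg*>: order depends on where each
    # object happens to live in memory, so it can differ between runs.
    return [a.name for a in sorted(args, key=id)]

def names_ordered_by_name(args):
    # Analogous to std::set<std::string>: lexicographic, hence stable.
    return [a.name for a in sorted(args, key=lambda a: a.name)]

args = [NodeArg("B"), NodeArg("A")]
assert names_ordered_by_name(args) == ["A", "B"]   # always the same
# names_ordered_by_identity(args) may be ["A", "B"] or ["B", "A"]
```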

* Fixed another latent issue in openvino's GetCapability function

The issue was that we couldn't simply erase fused_inputs and fused_outputs
while iterating the nodes. For example, an output NodeArg may have multiple
uses, and it's wrong if we erase it from fused_outputs when we encounter only
one of its uses as input.

* Remove DeviceAllocatorRegistry class (#2451)

Remove DeviceAllocatorRegistry class

* CSharp api and test for loading custom op shared library (#2420)

- Added C-API test for loading custom op shared lib.
- Made some changes in C++ api header and C-api implementation to get it working.
- Added C# API and corresponding test for loading custom op shared library.

* Parallel Gelu with ParallelFor (#2399)

Parallel Gelu to get better performance for Gelu

* Clean up build.py (#2446)

* Pull the latest image before running docker build

* Fuse SkipLayerNorm with Bias (#2453)

Fuse SkipLayerNorm with Bias

* Allow more than one invocation of CreateEnv in the same process. (#2467)

* Allow more than one invocation of CreateEnv in the same process.

* Fix centos build

* Symbolic shape inference improvements: (#2460)

* Symbolic shape inference improvements:
- add a mode to guess unknown ops' output rank
- add support for GatherND
- add support for If
- fix a bug in get_int_values when the tensor rank > 1, by treating it as no sympy data
- add symbol-to-literal merge when ONNX silently merges dims
- fix a bug in Concat when an input dim is 0
- fix a bug in ConstantOfShape where a computed dim is not updated
- add support for dynamic shape in ConstantOfShape
- fix a bug in Loop output shape that loop iterator dim is not inserted at dim 0
- add support for dynamic padding in Pad
- add support for dynamic shape in Reshape
- add support for Resize with opset > 10, by treating output dims as dynamic
- fix a bug in Slice when starts/ends are dynamic
- restrict input model to opset 7 and above
- make output model optional to avoid disk write when testing

Run model tests for symbolic shape inference

Reduce 2GB docker image size of nuphar

* add additional test data set for nuget pipeline (#2448)

* add SAS token to download internal test data for nuget pipeline

* update azure endpoint

* fix keyvault download step

* fix variable declaration for secret group

* fix indentation

* fix yaml syntax for variables

* fix setting secrets for script

* fix env synctax

* Fix macos pipeline

* attempt to add secrets to windows download data

* fix mac and win data download

* fix windows data download

* update test data set url and location

* Revert "Brianma/windowsai fi (#2475)"

This reverts commit 5780b864a1.

* Add scenario tests (#2457)

* Add scenario tests

* Remove TODO from model license

* Add winml_api test dependency

* fix model load test. fi from master changed the constructor (#2483)

* make api tests all pass (#2486)

* fix bad merge

* fix bad model merge

* Layer dev paulm (#2492)

* comments for dml graph transformer
fixed OrtValue passing using the allocator info

* fixed and coded maps and sequences across the abi

* Rename ambiguous header (#2489)

* fix one more missing IR version model (#2500)

* add missing IR version to 4 more models used by scenario tests (#2501)

* Add CLI parameters to test runner, build WinML in ARM and x86 CI (#2479)

* Support test parameters through CLI arguments

* Add WinML do Windows x86/ARM CI builds

* Code style fixes

* Update googletest

Remove GPUTEST macros everywhere now that GTEST_SKIP is supported

* Refactor main.cpp

* Build scenario tests without DML

* Link scenario tests to DML when it's enabled (#2502)

* Layer dev release pipeline (#2488)

Adds winml binaries to existing cpu nuget package, and creates new gpu dml nuget package with winml binaries and DML EP.

* Layer dev paulm (#2506)

* comments for dml graph transformer
fixed OrtValue passing using the allocator info

* fixed and coded maps and sequences across the abi

* cleaned up w4's
cleaned up the model info ABI
delayload directml.dll from winml

* Remove usage of IOBinding in WinML and use C_API Run method (#2504)

* remove usage of iobinding

* Change data structure to use vector of Ort::Values

* Polish bind input / output

* Use C APIrun method

* Update providers on evaluate getresults

* Remove run and IObinding interface from WinMLAdapter

* Remove use of IObinding

* bind unbound outputs code moved to learningmodelbinding

* clean up unneeded istensor adapter function

* Fix comment

* Check if session is closed before binding and clearing

* PR feedback

* Layer dev paulm (#2507)

* comments for dml graph transformer
fixed OrtValue passing using the allocator info

* fixed and coded maps and sequences across the abi

* cleaned up w4's
cleaned up the model info ABI
delayload directml.dll from winml

* cleaned up namespace aliases.
renamed _winmla to winmla
this was good PR feedback from tiago a while back.

* Make tests depend on winml_dll (#2509)

* add dml binaries to DirectML package and be more explicit about condition variables (#2520)

* re-enable warnings for winml builds and fix the warnings that were hiding (#2526)

* turn devmode back on for winml builds

* fix some warnings. include protobuf in a way that disables some warnings

* undo protobufhelpers changes and just ignore 4100 errors in pb code

* attempt to isolate protobufhelpers errors

* add template specialization for getting tensor proto data

* Layer dev paulm (#2533)

* comments for dml graph transformer
fixed OrtValue passing using the allocator info

* fixed and coded maps and sequences across the abi

* cleaned up w4's
cleaned up the model info ABI
delayload directml.dll from winml

* cleaned up namespace aliases.
renamed _winmla to winmla
this was good PR feedback from tiago a while back.

* moved files from inc to lib\api.core
cleaned up some of the cmake

* staged changes

* Spawn child process to run DeviceLostRecovery scenario test (#2530)

* Spawn child process to run DeviceLostRecovery scenario test

* Layer dev paulm (#2536)

ori said yes

* add missing namespace to winml_trace_logging_provider in lotusenvironment.h (#2542)

* Handle exception thrown from all apis in WinMLAdapter (#2539)

* various changes to unblock windowsai ADO build

* Fix custom ops scenario tests (#2562)

* Do not shut down protobuf after the ort environment gets destroyed. Lazy-load the lotus environment the first time it is needed

* comment typo

* pr comment  about calling phoenix singleton

* Make lotus_environment static in winmladapter

* Layer dev paulm (#2567)

* comments for dml graph transformer
fixed OrtValue passing using the allocator info

* fixed and coded maps and sequences across the abi

* cleaned up w4's
cleaned up the model info ABI
delayload directml.dll from winml

* cleaned up namespace aliases.
renamed _winmla to winmla
this was good PR feedback from tiago a while back.

* moved files from inc to lib\api.core
cleaned up some of the cmake

* staged changes

* making windowsAI azure dev ops work.

* code review comments.

* revert changes

* CMake and preprocessor fixes that were uncovered by building on agents without DML available via the SDK

* Layer dev dml delayload (#2580)

* Brianma/cpu (#2583)

* don't include dml stuff in cpu builds

* tests that link the image lib also need the telemetry lib now

* Throw Winml_err_invalid_binding if binding gpu resource on cpu device (#2589)

* Throw Winml_err_invalid_binding if binding gpu resource on cpu device

* PR comments. No need to query executionprovider for is gpu device

* User/xianz/ortthrow (#2596)

* thrown and handle onnxruntime exceptions

* handle exception thrown from ort in winmladapter

* undo changes in error.h

* add message to HRESULT

* User/xianz/ortthrow (#2599)

* thrown and handle onnxruntime exceptions

* handle exception thrown from ort in winmladapter

* undo changes in error.h

* add message to HRESULT

* add status error message

* Remove uwp onsuspending winrt call because logruntimeperf is getting removed (#2630)

* User/xianz/dedup telemetry (#2631)

* investigate duplication of telemetry in winml and ort

* remove winml telemetry events

* telemetry executionProviderEvent

* remove unnecessary file and refactor the code a little

* Revert back TelemetryEvent, which send up ETW event.

* merge changes from layer_dev to windowsai (#2638)

* Remove underscore from googletest names (#2616)

* Fix leaking memory allocator

Fix https://microsoft.visualstudio.com/OS/_workitems/edit/24278761
and https://microsoft.visualstudio.com/OS/_workitems/edit/24330198

* Explicitly initialize Ort::Value with nullptr

* Cache WinML adapter

* bad merge

* define private version of dxcore enum that is added in 19H1 SDK. (#2654)

* add comment for explaning private definition of dxcore d3d feature level ennum value. (#2672)

* do not package directml.pdb for redist packages. (#2676)

* Fix leaking operator registry (#2645)

Fix https://microsoft.visualstudio.com/OS/_workitems/edit/24354916

* User/orilevari/windowsai master merge (#2674)

Merge resolutions included pulling in telemetry logic that was merged to master but not windowsai, and dereferencing InferenceSession::sessionstate now that it is a unique pointer

* Delete Ort Allocator in LearningModelBinding (#2653)

* Delete OrtAllocator in LearningModelBinding

* PR comments to make Ort::Allocator a smart pointer

* Small comment change

* PR feedback to clean up code

* PR feedback on move semantics

* Clean up std::move

* Fix memory leaks (#2679)

Fix https://microsoft.visualstudio.com/OS/_workitems/edit/24356109,
https://microsoft.visualstudio.com/OS/_workitems/edit/24388361 and
https://microsoft.visualstudio.com/OS/_workitems/edit/24388596

* various changes to properly organize and skip GPU tests. For now, for no-DML builds we will not run GPU tests at all. In the future we should adapt the tests to expect the appropriate errors. (#2695)

* Windowsai without fi (#2701)

* Disable Attention fusion tests when DISABLE_CONTRIB_OPS is defined (#2529)

* Setup java ci (#2528)

* Add provision in ORT for session options to be parsed when available via model file  (#2449)

* Initial commit

* Fix gitmodules

* Nits

* Nits

* Updates

* Update

* More changes

* Updates

* Update

* Some updates

* More changes

* Update

* Update

* Merge

* Update

* Updates

* More changes

* Update

* Fix nits

* Updates

* Fix warning

* Fix build

* Add comment

* PR feedback

* PR feedback

* Updates

* Updates

* Update

* More changes

* Fix build break

* Comment test for now

* Updates

* Updates

* PR feedback

* Updates

* Nits

* Add tests

* Fix build

* Fix build

* Fix build

* Fix build break

* Fix build

* Nits

* PR feedback

* More change

* Expose GetSessionOptions in pybind logic and add unit test for python

* Fix build

* PR feedback

* PR feedback

* Revert "Disable thread pool creation when enabled OpenMP (#2485)" (#2535)

This reverts commit 7c7d5a149c.

* Add dynamic shape support in TensorRT execution provider (#2450)

* remove onnx-tensorrt submodule

* add new onnx-tensorrt submodule (experiment) for trt6

* update engine build for trt6

* update compile and compute for tensorrt6.0

* Update tensorrt_execution_provider.cc

* Update tensorrt_execution_provider.cc

* Update tensorrt_execution_provider.cc

* Update tensorrt_execution_provider.cc

* switch to onnx-tensorrt master for TensorRT6'

* Update tensorrt_execution_provider.cc

* Handle dynamic batch size and add memcpy in TensorRT EP

* update test cases

* Update tensorrt_execution_provider.cc

* update onnx-tensorrt submodule

* Update Dockerfile.ubuntu_tensorrt

* Update Dockerfile.ubuntu_tensorrt

* Update run_dockerbuild.sh

* Update run_dockerbuild.sh

* Update install_ubuntu.sh

* Update concat_op_test.cc

* Update tensorrt_execution_provider.cc

* Upgrade TensorRT to version 6.0.1.5

* Update onnxruntime_providers.cmake

* Update CMakeLists.txt

* Update reduction_ops_test.cc

* Update install_ubuntu.sh

* Update Dockerfile.ubuntu_tensorrt

* Update Dockerfile.tensorrt

* Update BUILD.md

* Update run_dockerbuild.sh

* Update install_ubuntu.sh

* Update onnxruntime_providers.cmake

* Update install_ubuntu.sh

* Update install_ubuntu.sh

* Update gemm_test.cc

* Update gather_op_test.cc

* Update CMakeLists.txt

* Removed submodule

* update onnx-tensorrt submodule

* update header file

* Removed submodule

* add submodule onnx-tensorrt kevin's branch shape-test'

* add debugging code

* Update tensorrt_execution_provider.cc

* Update tensorrt_execution_provider.cc

* merge master

* Removed submodule

* update onnx-tensorrt submodule

* add more changes for dynamic shapes

* Update tensorrt_execution_provider.cc

* update for dynamic shape

* update dynamic shape processing

* fix logger issue

* remove submodule onnx-tensorrt

* add submodule onnx-tensorrt

* add env variable min_subgraph_size

* remove redundency

* update document

* use onnxruntime::make_unique

* fix multi-run issue

* remove some tests to save CI build time

* Add dynamic shape test

* Update TensorRT-ExecutionProvider.md

* Add example of running Faster R-CNN model on TensorRT EP

* Add more details on env variables

* update environment variables

* Update tensorrt_basic_test.cc

* Update model tests

* Update tensor_op_test.cc

* remove --use_full_protobuf

* Update build.py

* User/xianz/telemetry (#2458)

* enable telemetry

* enable telemetry

* set enable telemetry as default

* for debugging

* remove log and set disable telemetry as default back

* delete private file while testing

* resolve comment: mainly add license header, rename macro and update docs

* rewording in privacy.md

* Fix integer overflow in cuda NonMaxSuppression implementation (#2540)

* add a test case that should pass but fails

* fix nms

* extract int_max_output_boxes_per_class

* Introduce container type runtime checks and other improvements (#2522)

Rework TensorSeq in a manner consistent with Tensor and SparseTensor in terms of type system setup.
Reduce templating. Introduce helpers to ensure the same data type.
Make the OrtValue dtor not virtual.
Introduce ContainerChecker

* Fix C API tests for centos and mac (#2544)

* change c++14 to c++11

* add ld lib path for centos

* enable csharp tests on macos

* fix C API test on MacOS + fix manylinux dotnet install

* fix manylinux dotnet install

* fix lib link

* Add back executable bit to build.py

* Fix a bug handling negative begin pad values in Pad op (#2550)

* Fix bug in Pad op

* Update

* DNNL CMAKE update (#2548)

* Fix android build (#2558)

* Update win-x86-ci.yml (#2557)

Fix build pipeline break

* Re-enable Windows C# tests (#2564)

* disable onnx_test_runner -x invocations for dnnl (#2568)

* Allow sequence length to be symbolic (#2559)

* setup java ci mac (#2570)

* make layernorm fusion to support opset 11 (#2545)

* Fix a warning found in the latest VS release

* Add more check on SkipLayerNorm and BiasGelu fusion (#2574)

* Fix file not found error during docker build. (#2569)

* Add ConvTranspose1D (#2578)

* Ryanunderhill/packagename test (#2582)

* [Nuphar EP] fixes for some object detection models (#2581)

Update notebook tutorial with multi-threaded int8 GEMM from #2517

* EmbedLayerNormalization Fusion Improvement (#2553)

Embedding layer norm fusion improvements - add more checks

* Update version (#2584)

* Temporarily exclude vgg19 test from Python backend test

1. Temporarily exclude the vgg19 test, which consumes too much memory and runs out of memory on an UpSquared device. A single test pass works for vgg19; needs further investigation (#2588)
2. Update the docker file to decrease the docker image size

* Update docs for Android NNAPI EP (#2586)

* Fix lto bug for protobuf and ubuntu

* add path to build dir before test run (#2590)

* Add missing env variables for mac pipeline test (#2595)

* Fixed an issue in updating realized dims (#2597)

When we update realized dims for Scan's output, the sliced axis also
needs to be inclusive, i.e. we should check with "dim >= insert_inclusive_axis",
because the offsets in the symbols are based on the Scan subgraph.
Otherwise, we would end up with a shape mismatch later.

* Java API for onnxruntime (#2215)

* Add support for opset 11 in reshape fusion (#2592)

Support opset version 11 in reshape fusion

* Rename automl python tools folder to featurizer_ops. (#2593)

* Support opset 11 subgraph of Squad model in Embed Layer Normalization (#2605)

Support opset 11 Squad model that is exported from PyTorch nightly. The embed layer uses Range op which is missed in the transformer.

* symbolic shape inference: fix warnings in GPT-2 model (#2608)

And revise nuphar perf test on BERT squad

* Dump subgraph ID and fused graph ID (#2607)

* Dump subgraph ID and fused graph ID

Dump subgraph ID and fused graph ID for better debugging

* Remove local static fused_count

added a field global_fused_count_ to NupharExecutionProvider class

* EmbedLayerNormalization Fusion For Dynamic Squad Model Opset 10 (#2613)

Support subgraph of SQuAD model exported from pytorch with dynamic input axes

* Allow providers to be set for InferenceSession at construction (#2606)

* Remove unnecessary parameter in some places in GatherElements implementation (#2612)

* Remove unnecessary parameter in some places

* Update

* Update

* Make sure a fenced tensor cannot reuse another tensor. (#2561)

Fix random error caused by this.

* Improve Embed Layer Norm Fusion for SQuAD with static input shape  (#2621)

* fix float16 comparison in initializer (#2629)

* epsilon attribute for layernormalization fusion (#2639)

* removed unnecessary batch file and fix path (#2640)

* Add shape inference to ConvTransposeWithDynamicPads schema (#2632)

* Improve cuda expand() opeator's performance. (#2624)

* Cuda pad optimize when no padding is needed. (#2625)

* Shortcut cuda Pad() when no padding is needed.

* Optimize cuda scatter() on 2D compatible. (#2628)

* Optimize cuda scatter() on 2D compatible.

* Add some comments.

* fix build error for ARM (#2648)

* Improve performance of resize() in Nearest mode (#2626)

Special treatment for 2D: check for the same size as the input image.
And in the 2D kernel, templatize use_extrapolation.

* Fix memory exception in Layer Norm Fusion (#2644)

* Windows CI changes(#2650)

* Revert "User/orilevari/windowsai master merge (#2674)"

This reverts commit fe26146311.

* Revert "Windowsai without fi (#2701)"

This reverts commit 285d4c85ff.

* Revert "User/orilevari/windowsai master merge (#2674)"

This reverts commit fe26146311.

* Deref unique pointer for session_state

* send shutdown event when dll is unloaded and EvaluationStop, SessionC… (#2704)

* send shutdown event when the dll is unloaded; add EvaluationStop and SessionCreationStart events.

* Add EvaluationStart Event

* add comment

* use correct type for for loop (#2755)

* ARM CI (#2759)

* Set ARM agent pool

* Set CMake generator to VS 2019 in ARM

* Use system-wide CMake instead of custom version

Our custom version is too old for VS 2019

* Use DML and build shared lib in ARM CI

* Restore nuget packages in ARM CI

* Disable DML

* Refactor ARM debug/release builds

* Use system packaged Python version

* Remove hardcoded Python path

* Downgrade Python to 3.7 for build

* Remove explicit CMake path

* Fix invalid JSON in cgmanifest.json (#2760)

* Fix cgmanifest.json generating script (#2770)

* Fix protobuf submodule name

* Workaround pygit2 bug

* Remove usage of WHOLEARCHIVE in WinML CMake and add WinMLAdapterFactory (#2726)

* Remove usage of WHOLEARCHIVE in WinMLAdapter CMake and add WinMLAdapterFactory

* PR feedback, no need for dll(export) since using def file

* PR comments

* Small comment in gen_def.py

* User/orilevari/32bit comparison warning (#2800)

* use correct type for for loop

* explicitly specify void for parameters of OrtGetApiBase because the function is defined in c, so when the function is just (), it is interpreted as having an unknown number of parameters. This was causing compiler warning C4276.

* Move winml_provider_factory.h to proper location (#2801)

* Scenario Test: Build Google Test and Taef Test based on preprocessor definition (#2809)

* Add winml macro wrappers on top of google test macros

* change test methods to disabled

* Add custom winml macros for both taef and google tests

* PR comments

* Filter CPU case for IsFloat16Supported (#2802)

* Merge fixes

* CMake cross-generator fixes (#2790)

* Fix compilation w/ non-VS CMake generators

* Fix custom WINMD target in Ninja

* Remove usage of msbuild .targets file

* Fix linking using DML in Ninja

* Automate SDK kit version choice

* Cleanup DML package install

* Fix SDK version detection

* Fix comment

* Revert unittest linkage changes

* Fix latest SDK detection

* Don't link to non-uapcore libraries

* Remove MessageBoxA reference and unused link libs

* Refactor WinMLAPI Tests to build both google and taef test based on preprocessor definition (#2829)

* Add winml macro wrappers on top of google test macros

* change test methods to disabled

* Add custom winml macros for both taef and google tests

* PR comments

* Refactor winml api tests

* Move additional gtest specific macro definition into googleTestMacros.h

* Fix test build break: winml_lib_api needs to be statically linked into the tests since winmlp::learningmodeldevice::iscpu() is used in devicehelpers.cpp (#2837)

* Enforce WINML_TEST_CLASS_BEGIN_* matches w/ a WINML_TEST_CLASS_END (#2841)

* Fix warnings that cause build to fail

* Fix test warnings and delayload linking (#2843)

* Ortmemoryinfo struct changed

* mark the camera scenario test as edgecore because it uses d3d11 (#2852)

* User/orilevari/pipeline fi breaks (#2853)

* remove conflicting artifact names. Decided to stop using drop-nuget-cuda since this may have implications on other dependent pipelines.

* change job name in gpu.yml back to Windows_CI_GPU_CUDA_Dev

* Remove internal libs from tests (#2864)

* Support custom DML in onnxruntime_providers.cmake (#2867)

* Make DML include path global (#2882)

* Make DML include path global

* Add generated cppwinrt headers to winml_lib_common

* Integrate changes to WindowsAI to make ADO Build (#2886)

* Revert "CMake cross-generator fixes (#2790)"

This reverts commit dbe7d97fa1.

* Add additional warning suppressions in onnx_proto

* suppress warning C4996 via /wd4996

* DML execution provider fixes

* Revert "Revert "CMake cross-generator fixes (#2790)""

This reverts commit 1ae7b4bcbc.

* Update func signature of custom op function overloads

* common devicehelpers fixes

* Add pch.h for winml_lib_common

* re-add winml_lib_common_dir/inc to include path for winml_adapter

* User/orilevari/dml redist shared folder (#2890)

* move dml nuget package directory up one level to make it shared between build flavors

* Merge conflict fix

* Revert "Merge conflict fix"

This reverts commit 142fa72cf9ce4344ad717b50b7ea2b8582aadc7c.

* Revert "Merge remote-tracking branch 'origin/master' into windowsai"

This reverts commit 6e2126d46e5e5f564d65da37dd4f70c93dd81165, reversing
changes made to b3f5583dc9249834b947c8ea905f6a98060d5bd6.

* Make winml_test_common free of test macros (#2902)

* Add option to build winml_test_common without googletest specifics

* remove test macros from squeezenet

* comment change

* Make cmake functions to get scenario and api source

* PRcomments about hresult

* Build errors fixed

* Fix cmake variable

* Make winml_google_test_lib to build main.cpp once

* PRcomments

* Don't generate files outside the build root (#2914)

* Don't generate files outside the build root

* Add onnxruntime_EXTERNAL_DEPENDENCIES to WinML

* Add DML dependency on RESTORE_PACKAGES

* User/orilevari/fix yaml merge bugs (#2918)

* Add winml test source parameter into cmake function (#2919)

* Add option to build winml_test_common without googletest specifics

* remove test macros from squeezenet

* comment change

* Make cmake functions to get scenario and api source

* PRcomments about hresult

* Build errors fixed

* Fix cmake variable

* Make winml_google_test_lib to build main.cpp once

* PRcomments

* Add arguments to unittest cmake functions

* remove comment

* Revert "Revert "Merge remote-tracking branch 'origin/master' into windowsai""

This reverts commit ade5abe72a4234fdbc3623093c61c02c6b0bdc26.

* Fix breaks from merge with ORT master

* Brianma/linux (#2917)

* don't include windows.h in cross-plat header

* add default case for switch statement

* signed/unsigned mismatch fix

Co-authored-by: Brian Martin <42186431+martinb35@users.noreply.github.com>

* User/sheilk/winml adapter c api (#2891)

* Create winml adapter c api

* fix build

* make it build

* move adapter into onnxruntime core/session

* entry point not exported

* minor changes

* make model metadata work

* make tests pass

* implement all the model reflection apis on the adapter c abi

* update the new ort interface to create a lotus environment with a logging sink

* start adding ort env

* move all winml code into adapter folder/lib to isolate it

* ensure a single logging manager at a time

* start refactoring session

* refactor session creation interface

* add cpu and dml session option methods to adapter

* finish session init

* stub out interfaces in ort lib to perform similar mechanics of iinference session

* enable profiling, and enable schema override

* update session register graph transformers

* turn back on custom registry for custom ops

* Add sync api

* add last c api stubs

* should build... but all feature values are broken while the move of all implementation details into ivalue is in flight

* remove ep adapter header

* Implement DML execution provider functions from adapter (#2846)

* Implement DML execution provider functions from adapter

* Use functions in OnnxruntimeEngine.cpp

* make map/sequence type_infos freeable, and start implementing ivalue

* make it build again

* implement value methods

* implement remaining methods

* remove com adapter abi

* check dml session

* cache the allocator on ivalue

* check if resource is cpu/gpu when access its mutable data

* update tensor

* mismatched parentheses

* fix tensor base and binding obj

* it evaluates tensors! sometimes...

* minor fixes

* enable gpu evals

* wrap all existing winml adapter apis with API_IMPL to try/catch (#2854)

* update winml... tensor strings are broken, need to template tensorbase to do different things for strings

* make tensor strings work with 2 copies in/2 copies out

* Fix tensor string and allocator bug

* make maps work again... needs some fixes still

* Make it build!

* enable map inputs

* map outputs

* unbound outputs for sequences and maps

* User/xianz/merge windowsai (#2883)

* Packaging pipeline changes for VS 2019 (#2711)

* Tiny fix to codegen

* Simplify cache implementation and avoid static variables that may carry over between models

* Extend DML kernels (#2641)

* Additional DML operators

* Check unsupported attributes and inputs

* Address PR comments

* Add kernel capability function used for partitioning, and re-enable stride-based int64 support based on value range

* Fix test failures

* Build fix

* PR comments

* Update Nuphar tutorial notebook (#2721)

1. Reflect int8 GEMV improvements for multi-threading from #2696
2. Add notes on multi-threading control using OpenMP
3. Add samples of running multi-isa AOT, and show int8 GEMM differences between AVX and AVX2
4. Add rnn_benchmark example to resolve #1993

* Add schema for new Qops (#2611)

* Add schema for new Qops

* adding shape inference + qlinearaveragepool

* plus review comments

* plus review comments

* updates per review comments

* plus review comments

* [server] Add support for model_name and model_version as CLI parameters (#2708)

* remove 64bit warning message from python validation. (#2727)

* MLAS: ARM64 build fix (#2734)

fix bad usage of vreinterpret to cast vector element types

* Fix broken python docs links (#2740)

* Fix build on Mac OS (#2731)

macOS ld doesn't support --whole-archive; the correct option is -all_load

* fix ngraph wheel (#2737)

* fix ngraph wheel

1.1.0 onnxruntime_ngraph wheel doesn't work

* remove libdnnl.so in nGraph Libs

* make it easy to compare

* Split onnxruntime server into a separate folder (#2744)

* Fix build for Python 3.8 (#2747)

* Fix build for Python 3.8

* Update protobuf to 3.11.2 (#1928)

Update protobuf to 3.11.2 (#1928)

* Change default optimization level to All (from Basic) (#2745)

* change default optimization level to All (from Basic)

* fix test

* fix c# test

* Update numpy to 1.18 (#2758)

* Update numpy to 1.18

* Pipeline changes for python 3.8 (#2753)

1. Pipeline changes for python 3.8
2. Fix a regression in setup.py which was just introduced in the previous commit.

Please note that we still haven't made Python 3.8 + Windows + CUDA work.

* Add basic stacktrace output for posix debug builds. (#2749)

* [NupharEP] fix a race condition when multiple sessions running different models concurrently (#2772)

* Revert "Change default optimization level to All (from Basic) (#2745)"

This reverts commit 56bb503c2f.

* Fix typo in error message (#2736)

* Rename MKL-DNN to DNNL to fix broken link (#2730)

* Fix nightly build version number issue

* Pass BUILD_BUILDNUMBER to linux docker

* Disable featurizers in python packages

* Import more featurizers (#2781)

Make kernels non-template. Add input constraint for learnt data.
Add min_max_scalar_transformer, robust_scalar_transformer,
inputation_marker_transfomer, label_encoder_transformer, and
missing_dummies_transformer along with tests.
Advance Featurizers library commit.

* Implement a more stable softmax (#2715)

* Implement a more stable SoftMax

e^x is represented as infinity if x is large enough (e.g., 100.f), and infinity divided by infinity is a NaN. Thus, softmax produces a NaN if one or more items are large enough.
The math transform below is leveraged to get a stable softmax:
e^xi/(e^x1 + ... + e^xn) = e^(xi - max) / (e^(x1 - max) + ... + e^(xn - max))

And for convenience, max is forced to 0.f if all xi are negative.

* Contributing: Fix a typo (#2784)

* ACL EP GEMM improvements (#2780)

When possible we use a fully connected layer instead of the GEMM implementation.
This will let the library use the best implementation based on the input data.

* ACL EP convolution improvements (#2774)

Added the optimized implementation of depthwise convolution for both ACL 19.02 and ACL 19.05.
Also, pointwise convolution seems to be more efficient in the CPU implementation, so we opted for that instead.

* Add script for release Nuget validation (#2719)

* Initial commit

* Nits

* Disable a test temporarily

* Change working directory

* Test

* Add download python step

* Test update

* More changes

* Fix space issue

* Fix

* Verify nuget signing

* Fix

* Spaces

* PR feedback

* Nit

* Fix

* Fix

* Remove temporary changes

* add uint8 support to where op (#2792)

* Improve bert optimization script: (#2712)

(1) Move input int64=>int32 conversion to embed layer fusion.
(2) Output epsilon attribute for LayerNormalization fusion.

* add session creation time cost. (#2798)

* ML.NET team needs featurizers within a package (#2789)

Add AutoML featurizers to Windows and macOS, as well as to GPU packaging pipelines.

* Initialize max of softmax with lowest of float (#2786)

* MLAS: update SGEMM threading parameters (#2808)

* add interface to copy batch tensors. (#2807)

* add interface to copy batch tensors.

* onnxruntime

* speed up Windows TRT CI (#2811)

* don't run cuda tests if building with tensorrt

* remove unnecessary build options for win trt ci

* refactor win gpu tensorrt ci yml

* --numpy_version=1.17

* update

* update

* azcopy and cuda path

* Update test data (#2356)

* Add timeseries imputer transformer featurizer kernel (#2813)

Make kernels non-template. Add input constraint for learnt data.
Fix up tests.
Add two more featurizers along with tests (tests fail):
min_max_scalar_transformer and robust_scalar_transformer.
Fix tests' serialized stream by prepending version bytes.
Add inputation_marker_transfomer and its test.
Fix up float/double type designations.
Add label_encoder_transformer along with a test;
the string_throw case is broken at the moment.
Fix labelencodertransfomer_test.cc string_throw case.
Rename maxabsscalertransformer_test.cc.
Add MissingDummiesTransformer along with its test.
Update manifest.
Add TimeSeriesImputerTransformer definition, implementation and tests.

* Fix memory leak in TRT (#2815)

* fix memory leak issue

* revert EP_FAIL on enueueV2

* Add manifest missing comma

* Run static code analyzer on most of our code (#2817)

* Scenario Test: Build Google Test and Taef Test based on preprocessor definition (#2809)

* Add winml macro wrappers on top of google test macros

* change test methods to disabled

* Add custom winml macros for both taef and google tests

* PR comments

* update quantization doc (#2783)

* update documentation for quantization script

* plus some spell corrections

* Filter CPU case for IsFloat16Supported (#2802)

* update default optimization level + fix gemm_activation fusion (#2791)

* update default optimization level + fix gemm_activation fusion

* fix typo

* add unit test and incorporate review comments

* fix test comment

* Fix dnnl wheel package name (#2823)

* Append '-dnnl' to whl package name when --use_dnnl

* Update build.py

* Update Ubuntu & TensorRT version  in README (#2820)

Dockerfile.tensorrt uses nvcr.io/nvidia/tensorrt:19.09-py3 as the base image; update the Ubuntu and TensorRT versions according to
https://docs.nvidia.com/deeplearning/sdk/tensorrt-container-release-notes/rel_19-09.html#rel_19-09

* Merge fixes

* Add OneHotEncoder and HashOneHotEncoder kernels. (#2830)

Add defs and implementation for OneHotEncoders; adjust date_time_transformer kernel and test.
Add OneHotEncoder kernel test.
Add HashOneHotVectorizerTransformer unit test.
This does not link due to multiple definitions of functions
that are included into the header from a CPP file.

* Upgrade gtest to the latest version (#2827)

WinML would like to update the googletest submodule. They want some newer features (namely GTEST_SKIP, to skip tests programmatically and to skip entire fixtures easily) and need to update the submodule version.

However, the new version of the code hits a bug in gcc. The bug is already fixed in the latest gcc, but we're using gcc 4.8.x, which won't be patched, so as a compromise we change our code a little bit to make it work.

The gcc bug:  https://gcc.gnu.org/bugzilla/show_bug.cgi?id=51213

* Add support for int64_t for topk CPU. Fixes github issue #2806. (#2833)

* Ignore allocator type in ExecutionProviders allocator map. Make default initialization of OrtMemoryInfo more clearly invalid. (#2768)

* Remove allocator type from the key comparison in ExecutionProviders.
Remove usage of DummyArena as it's no longer necessary.

* Fix x86 tests where arena allocator is disabled.
Make initialization of OrtMemoryInfo clearer by adding Invalid enum value.

* Make OrtValueNameIdxMap::MaxIdx more intuitive.

* Convert ExternalProject Featurizers into git submodule (#2834)

Add git submodule for Featurizer library.
Update cmake to build the git submodule.

* add domain check for nodes + update documentation (#2831)

* Fix cgmanifest.json generating script (#2770)

* Fix protobuf submodule name

* Workaround pygit2 bug

* User/orilevari/32bit comparison warning (#2800)

* use correct type for for loop

* explicitly specify void for the parameters of OrtGetApiBase because the function is defined in C; when the parameter list is just (), it is interpreted as taking an unknown number of parameters, which was causing compiler warning C4276.

* CMake cross-generator fixes (#2790)

* Fix compilation w/ non-VS CMake generators

* Fix custom WINMD target in Ninja

* Remove usage of msbuild .targets file

* Fix linking using DML in Ninja

* Automate SDK kit version choice

* Cleanup DML package install

* Fix SDK version detection

* Fix comment

* Revert unittest linkage changes

* Fix latest SDK detection

* Don't link to non-uapcore libraries

* Remove MessageBoxA reference and unused link libs

* Fix Linux CUDA nuget packaging pipeline break

* Refactor WinMLAPI Tests to build both google and taef test based on preprocessor definition (#2829)

* Add winml macro wrappers on top of google test macros

* change test methods to disabled

* Add custom winml macros for both taef and google tests

* PR comments

* Refactor winml api tests

* Move additional gtest specific macro definition into googleTestMacros.h

* Fix test build break: winml_lib_api needs to be statically linked into the tests since winmlp::learningmodeldevice::iscpu() is used in devicehelpers.cpp (#2837)

* Enforce WINML_TEST_CLASS_BEGIN_* matches w/ a WINML_TEST_CLASS_END (#2841)

* update optimization doc for BERT related fusions  (#2819)

* Add bert related transformers to doc
* Add execution provider and comment for bert optimizations
* Add comment about accuracy impact of approximation

* Fix warnings that cause build to fail

* MLAS: enable threading for quantized GEMMs (#2844)

* Fix test warnings and delayload linking (#2843)

* Ortmemoryinfo struct changed

* mark the camera scenario test as edgecore because it uses d3d11 (#2852)

* User/orilevari/pipeline fi breaks (#2853)

* remove conflicting artifact names. Decided to stop using drop-nuget-cuda since this may have implications on other dependent pipelines.

* change job name in gpu.yml back to Windows_CI_GPU_CUDA_Dev

* Remove internal libs from tests (#2864)

* Support custom DML in onnxruntime_providers.cmake (#2867)

* remove old winmladapter cpp

Co-authored-by: Changming Sun <chasun@microsoft.com>
Co-authored-by: KeDengMS <kedeng@microsoft.com>
Co-authored-by: Jeff <38966965+jeffbloo@users.noreply.github.com>
Co-authored-by: Ashwini Khade <askhade@microsoft.com>
Co-authored-by: Andrey <andrey.lompart@gmail.com>
Co-authored-by: George Wu <jywu@microsoft.com>
Co-authored-by: Tracy Sharpe <42477615+tracysh@users.noreply.github.com>
Co-authored-by: Faith Xu <txsafx@gmail.com>
Co-authored-by: zhanyi-ms <zhanyi@microsoft.com>
Co-authored-by: Changyoung Koh <gkcy1019@gmail.com>
Co-authored-by: Scott McKay <Scott.McKay@microsoft.com>
Co-authored-by: Takeshi Watanabe <take-cheeze@users.noreply.github.com>
Co-authored-by: Dmitri Smirnov <yuslepukhin@users.noreply.github.com>
Co-authored-by: Yufeng Li <liyufeng1987@gmail.com>
Co-authored-by: Maher Jendoubi <maher.jendoubi@gmail.com>
Co-authored-by: Andrews548 <32704142+Andrews548@users.noreply.github.com>
Co-authored-by: Hariharan Seshadri <shariharan91@gmail.com>
Co-authored-by: Nathan <7902510+ybrnathan@users.noreply.github.com>
Co-authored-by: Tianlei Wu <tlwu@microsoft.com>
Co-authored-by: Ke Zhang <kezhan@microsoft.com>
Co-authored-by: stevenlix <38092805+stevenlix@users.noreply.github.com>
Co-authored-by: Ryan Lai <ryalai96@gmail.com>
Co-authored-by: Ori Levari <ori.levari@microsoft.com>
Co-authored-by: Yingge WAN <y-wan@users.noreply.github.com>
Co-authored-by: Qing <cwq1913@gmail.com>
Co-authored-by: Pranav Sharma <emailpranav@gmail.com>
Co-authored-by: Tiago Koji Castro Shibata <tiago.shibata@gmail.com>

* move sequence implementation into ort lib... still commented out... need to turn back on...

* begin sequence implementation

* make maps and sequences work

* fix broken tests

* remove dead code

* misc cleanup

* CR feedback

* User/xianz/winml adapter c api (#2869)

* wrap all existing winml adapter apis with API_IMPL to try/catch

* Return HR or Throw for WinML adapter APIs if failed

* undo macro wrapper for two places

* Wrap error macros around ort apis, too.

* address CR feedback #2

* add more api throw/return macros

* Revert changes no longer needed

* revert changes to cxx api

* format winml lib.ort and winml adapter

* remove static pheonix singleton

Co-authored-by: Ryan Lai <ryalai96@gmail.com>
Co-authored-by: Xiang Zhang <xianz@microsoft.com>
Co-authored-by: Changming Sun <chasun@microsoft.com>
Co-authored-by: KeDengMS <kedeng@microsoft.com>
Co-authored-by: Jeff <38966965+jeffbloo@users.noreply.github.com>
Co-authored-by: Ashwini Khade <askhade@microsoft.com>
Co-authored-by: Andrey <andrey.lompart@gmail.com>
Co-authored-by: George Wu <jywu@microsoft.com>
Co-authored-by: Tracy Sharpe <42477615+tracysh@users.noreply.github.com>
Co-authored-by: Faith Xu <txsafx@gmail.com>
Co-authored-by: zhanyi-ms <zhanyi@microsoft.com>
Co-authored-by: Changyoung Koh <gkcy1019@gmail.com>
Co-authored-by: Scott McKay <Scott.McKay@microsoft.com>
Co-authored-by: Takeshi Watanabe <take-cheeze@users.noreply.github.com>
Co-authored-by: Dmitri Smirnov <yuslepukhin@users.noreply.github.com>
Co-authored-by: Yufeng Li <liyufeng1987@gmail.com>
Co-authored-by: Maher Jendoubi <maher.jendoubi@gmail.com>
Co-authored-by: Andrews548 <32704142+Andrews548@users.noreply.github.com>
Co-authored-by: Hariharan Seshadri <shariharan91@gmail.com>
Co-authored-by: Nathan <7902510+ybrnathan@users.noreply.github.com>
Co-authored-by: Tianlei Wu <tlwu@microsoft.com>
Co-authored-by: Ke Zhang <kezhan@microsoft.com>
Co-authored-by: stevenlix <38092805+stevenlix@users.noreply.github.com>
Co-authored-by: Ori Levari <ori.levari@microsoft.com>
Co-authored-by: Yingge WAN <y-wan@users.noreply.github.com>
Co-authored-by: Qing <cwq1913@gmail.com>
Co-authored-by: Pranav Sharma <emailpranav@gmail.com>
Co-authored-by: Tiago Koji Castro Shibata <tiago.shibata@gmail.com>

* missing use_dml check in winml_adapter_session (#2930)

* --use_dnnl flag was mangled in merge (#2931)

* use dml macro not wrapping custom registry code (#2934)

* Disable LNK4199 winml_dll to enable cuda builds (#2936)

* Disable LNK4199 in winml_dll

* linkler->linker

* LearningModelSessionAPITestGpu.CreateSessionWithCastToFloat16InModel should return DXGI_ERROR_UNSUPPORTED when FP16 not supported (#2937)

* Disable LNK4199 in winml_dll

* linkler->linker

* Need to return DXGI_ERROR_UNSUPPORTED when Model does not support fp16

* Publish build symbols (#2939)

* Publish build symbols

* Don't upload PDBs for .exe files

* Make x86 build (#2943)

* fix last remaining size_t/int64_t warnings->errors (#2948)

* TensorString, Sequences and Maps use the first allocator, but should use the cpu default allocator. (#2952)

* fix tensor string allocator

* clean up default allocator usage for strings in winml lib/api.ort

Co-authored-by: Ryan Lai <ryalai96@gmail.com>

* Handle tensor shape of zero (#2954)

Co-authored-by: Ryan Lai <ryalai96@gmail.com>

* CR feedback (#2970)

* CR feedback

* fix weird formatting on privacy readme

* Add 'All rights reserved.' everywhere

* readd all rights reserved to winml_provider_factory.h

* remove extra space in comment

* remove extra whitespace

* fixes post master merge

* remove winml from nuget gpu pipeline

* set IR VERSION on generated_model in rnn_benchmark (#2972)

* Fix slice conformance failures (#2908)

Co-authored-by: Adrian Tsai <adtsai@microsoft.com>
Co-authored-by: Brian Martin <42186431+martinb35@users.noreply.github.com>
Co-authored-by: Ryan Lai <ryalai96@gmail.com>
Co-authored-by: Paul McDaniel <paul_mcdaniel@hotmail.com>
Co-authored-by: Xiang Zhang <xianz@microsoft.com>
Co-authored-by: Dwayne Robinson <fdwr@hotmail.com>
Co-authored-by: Tiago Koji Castro Shibata <tiago.shibata@gmail.com>
Co-authored-by: Ori Levari <ori.levari@microsoft.com>
Co-authored-by: Jeff <38966965+jeffbloo@users.noreply.github.com>
Co-authored-by: Changming Sun <chasun@microsoft.com>
Co-authored-by: KeDengMS <kedeng@microsoft.com>
Co-authored-by: Ashwini Khade <askhade@microsoft.com>
Co-authored-by: Andrey <andrey.lompart@gmail.com>
Co-authored-by: George Wu <jywu@microsoft.com>
Co-authored-by: Tracy Sharpe <42477615+tracysh@users.noreply.github.com>
Co-authored-by: Faith Xu <txsafx@gmail.com>
Co-authored-by: zhanyi-ms <zhanyi@microsoft.com>
Co-authored-by: Changyoung Koh <gkcy1019@gmail.com>
Co-authored-by: Scott McKay <Scott.McKay@microsoft.com>
Co-authored-by: Takeshi Watanabe <take-cheeze@users.noreply.github.com>
Co-authored-by: Dmitri Smirnov <yuslepukhin@users.noreply.github.com>
Co-authored-by: Yufeng Li <liyufeng1987@gmail.com>
Co-authored-by: Maher Jendoubi <maher.jendoubi@gmail.com>
Co-authored-by: Andrews548 <32704142+Andrews548@users.noreply.github.com>
Co-authored-by: Hariharan Seshadri <shariharan91@gmail.com>
Co-authored-by: Nathan <7902510+ybrnathan@users.noreply.github.com>
Co-authored-by: Tianlei Wu <tlwu@microsoft.com>
Co-authored-by: Ke Zhang <kezhan@microsoft.com>
Co-authored-by: stevenlix <38092805+stevenlix@users.noreply.github.com>
Co-authored-by: Yingge WAN <y-wan@users.noreply.github.com>
Co-authored-by: Qing <cwq1913@gmail.com>
Co-authored-by: Pranav Sharma <emailpranav@gmail.com>
2020-02-04 17:12:19 -08:00
Maher Jendoubi
546d8f71ab Contributing: fix typos (#2905) 2020-01-27 13:39:08 -08:00
Ashwini Khade
7c6242b024
update default optimization level + fix gemm_activation fusion (#2791)
* update default optimization level + fix gemm_activation fusion

* fix typo

* add unit test and incorporate review comments

* fix test comment
2020-01-13 14:05:38 -08:00
Changming Sun
48e042868f
Update test data (#2356) 2020-01-10 10:52:23 -08:00
Maher Jendoubi
f22bffe0f6 Contributing: Fix a typo (#2784) 2020-01-07 06:32:13 -10:00
Changming Sun
013642ed37 Revert "Change default optimization level to All (from Basic) (#2745)"
This reverts commit 56bb503c2f.
2020-01-03 15:28:23 -08:00
Ashwini Khade
56bb503c2f
Change default optimization level to All (from Basic) (#2745)
* change default optimization level to All (from Basic)

* fix test

* fix c# test
2019-12-30 12:31:44 -08:00
Changming Sun
90b708f8a9
Update protobuf to 3.11.2 (#1928)
Update protobuf to 3.11.2 (#1928)
2019-12-27 18:28:18 -08:00
Changming Sun
b42cb61904
Packaging pipeline changes for VS 2019 (#2711) 2019-12-20 19:53:51 -08:00
jignparm
64112db346
Fix C# handling of unicode strings (#2697)
* Fix C# handling of unicode strings

* more tests

* check for handle before freeing

* variable reuse efficiency

* refactor and clean up utf8 to utf16 conversion block
2019-12-19 21:02:54 -08:00
Ryan Hill
cbc398bb75
Ryanunderhill/packagename test (#2582) 2019-12-07 12:08:46 -08:00
Ashwini Khade
281933fa1c
Fix C API tests for centos and mac (#2544)
* change c++14 to c++11

* add ld lib path for centos

* enable csharp tests on macos

* fix C API test on MacOS + fix manylinux dotnet install

* fix manylinux dotnet install

* fix lib link
2019-12-04 18:01:35 -08:00
Ashwini Khade
e32eff826c
enable nuget package testing on centos7 (#2527)
* add centos tests to linux cpu ci pipeline

* Disable failing test

* use centos6 instead of centos7

* change back to centos7

* add dotnet runtime dependency

* fix dotnet runtime dependencies

* install dotnet sdk instead of runtimes

* add more dotnet dependencies

* temporary skip failing test

* fix lib path

* reenable failing test
2019-12-03 10:16:45 -08:00
Sreekanth Yalachigere
31ea11a696 Renaming MKL-DNN as DNNL (#2515)
* DNNL: Moving Files to rename file names

* DNNL name change

* azure pipeline updated

* disable ceil/dilation and enable Opset10

* disable ceil/dilation tests in Python

* mlperf_ssd_resnet34_1200 disabled
2019-12-03 07:34:23 -08:00
shahasad
882f28a74b
Fix NuGet end to end tests for custom op dll (#2472) 2019-11-25 15:26:09 -08:00
shahasad
ca0ed96621
CSharp api and test for loading custom op shared library (#2420)
- Added C-API test for loading custom op shared lib.
- Made some changes in C++ api header and C-api implementation to get it working.
- Added C# API and corresponding test for loading custom op shared library.
2019-11-21 15:45:49 -08:00
jignparm
fa30b1e758
Set ElementType to String type of node metadata, instead of byte[] (#2348)
* Set ElementType to String type of node metadata, instead of byte[]

* Fix spacing
2019-11-08 14:52:56 -08:00
Changming Sun
080a0a3186
Nuget pipeline changes (#2305)
1. refactor the pipeline, remove some duplicated code
2. Move Windows_py_GPU_Wheels job to Win-GPU-CUDA10. We'll deprecate the "Win-GPU" pool
3. Delete cpu-nocontribops-esrp-pipeline.yml and cpu-nocontribops-pipeline.yml
4. In Linux nuget jobs, run "make install" before creating the package, so that extra RPATH info will be removed
2019-11-08 09:45:52 -08:00
Changming Sun
2172a9e5ed Fix an issue in the nuget run tests scripts 2019-10-30 08:13:09 -07:00
Ashwini Khade
8d231a32f2 Remove the libc version check in C# code (#2282) 2019-10-29 21:31:38 -07:00
pulkittomar
1fa956fb3f Undo integration test skip (#1917) 2019-10-27 09:47:31 -07:00
shahasad
6a0ee7eff6
Fix model path marshalling in csharp, and re-enable the pretrained model tests (#2236) 2019-10-24 20:39:16 -07:00
Ryan Hill
7494500221 Fix csharp CXX sample (#2251) 2019-10-24 15:47:51 -07:00
Ryan Hill
77d8d6f767
Remove the OrtApiBase base_ member from OrtApi (#2242)
* Remove the OrtApiBase base_ member from OrtApi

* Forgot about C#
2019-10-24 11:36:23 -07:00
Ashwini Khade
81d901cb60 remove nuphar scripts (#2233) 2019-10-23 13:47:26 -07:00
Pranav Sharma
69970d1f2a
Include the new Privacy.md file in all release packages. (#2200) 2019-10-20 07:58:36 -07:00
Paul McDaniel
d1159b7008 Adding platform telemetry (#2109) 2019-10-19 18:25:57 -07:00
shahasad
35dae992f1
Fix nuget gpu ci test error (#2164)
* fix nuget version extraction script for Gpu packages
* fix cuda version in gpu end-to-end test
2019-10-18 23:01:26 -07:00
Ashwini Khade
ecf5ae8b76 Askhade/disable csharptests (#2172)
Disable flaky C# test for agility.
2019-10-18 11:00:50 -07:00
Ashwini Khade
5eb4e81f80
move some optimizers to level1 (#1566)
* move some optimizers to level1

* move matmul add fusion to level 1

* bug fix in the test code

* fix make_uniques + add test exceptions

* add exception for tests in c# too
2019-10-18 09:29:31 -07:00
shahasad
7ef02f14d2
Add missing test model file for symbolic dimensions (#2123) 2019-10-15 06:55:51 -07:00
Hariharan Seshadri
95ab5ad39f
Support non-spatial mode in BatchNormalization (#2092)
* Initial commit

* Update

* Update

* Fix build break

* Update

* More changes

* Update type

* Exclude Nuphar for non-spatial tests

* Update

* Resolve PR comments
2019-10-14 18:14:14 -07:00
Hariharan Seshadri
80d09f0c59 Allow creation of empty tensors in c# (#1976)
* Allow creation of empty tensors in c#

* Keep test with updated behavior

* Add more empty tensor tests

* Nits
2019-10-14 14:47:02 -07:00
Pranav Sharma
91db840b6b
Introduce execution mode enum for clarity and extensibility; Change Python, C and C# APIs accordingly; Removed EnableSequentialExecution, DisableSequentialExecution in favor of the more general SetExecutionModeAPI. (#2098)
* Introduce execution mode for clarity and extensibility; Change Python APIs accordingly; Replace DisableSequentialExecution API with EnableParallelExecution for clarity.

* Fix cuda build

* Modify the test slightly

* Make C and C# APIs consistent with Python.
2019-10-14 09:48:19 -07:00
Scott McKay
eb24617d2e Add ability to get symbolic dimension info for graph inputs and outputs. (#2051)
* Add ability to get symbolic dimension info for graph inputs and outputs.
WIP to get initial feedback.

* Fix Linux build error.
Update C# API and add unit test

* Clarify the two different ways Tensor shape and type info is created. One is from concrete values and one is from a type proto where symbolic dimensions may exist. Doing so allows a change to default to empty strings for the symbolic dimensions if not provided.
2019-10-12 15:46:28 -07:00
Ryan Hill
e8e33977da
Ryanunderhill/customop dll (#2002)
* Add OrtApiBase
* Add RegisterCustomOpsLibrary API
2019-10-11 11:12:51 -07:00
shahasad
8803f6fff4
C# end to end test fix, and make end to end tests mandatory (#2079) 2019-10-10 19:23:43 -07:00
shahasad
b70fc34fae
Fix C# end to end tests in NuGet pipeline, failing for missing test data file 2019-10-07 20:14:20 -07:00
shahasad
b0feaef9de
Update the C# pretrained model test to include opset9 and 10 models (#2003) 2019-10-07 19:14:34 -07:00
shahasad
b322e072b9
added the overridableinitializers api (#1977) 2019-10-04 16:38:00 -07:00
shahasad
b355193841
Add Date-time stamp in NuGet package versioning for appropriate ordering of the packages (#1951) 2019-09-30 16:24:16 -07:00
Ryan Hill
7e22ed41b9
Fix sample tests (#1926) 2019-09-26 10:31:48 -07:00
shahasad
30c7c76552
fix the output size param's location in the csharp OrtRun interop call (#1903) 2019-09-25 09:34:25 -07:00
yeohan
034aa80167 InferenceSession ctor with byte array in C# (#1883)
* add ctor overloads that accept model byte array

* doxygen. mark Init method as private.

* doxygen

* rename test method for clarity

* PR feedback - add two overloads that accept either model path or model byte array

* update native signature to align with latest codebase

* fix native call
2019-09-24 11:59:04 -07:00
Pranav Sharma
1a3ded6a7b
Add C API for free dim override, fix missing API mention in InferenceTest.cs, fix confusing print statement in perf_test. (#1884)
* Mention OrtCreateSessionFromArray in C API doc

* Add C API for free dim override

* Add C API for free dim override, fix missing API mention in InferenceTest.cs, fix confusing print statement in perf_test.

* Remaining C# files

* fix c# build

* Run the tests in blame mode. This option is helpful in isolating a problematic test causing the test host to crash.

* fix order
2019-09-23 17:58:20 -07:00
Hariharan Seshadri
aacfa2af65
Bump up ONNX to the latest commit (#1868)
* Initial commit

* Delete unnecessary files

* Update generated proto files

* Update server proto file

* Update submodule onnx

* Update OnnxMl.cs

* update OnnxMl.cs

* Update OnnxMl.cs

* Comment one test

* Update disabled test list

* Update backend tests

* Formatting fix

* Formatting

* Disable a test

* More tests updated

* commit id update

* Update to a newer commit

* More updates

* More test updates

* Update

* Update

* Updates

* Update
2019-09-20 18:15:16 -07:00
Ryan Hill
5781222456
Ryanunderhill/api interface (#1855)
* Convert ABI to a versioned interface.
* Convert ORT_THROW_ON_ERROR to inline function to fix link errors.
2019-09-20 13:39:11 -07:00
Pranav Sharma
a9ce941579
Refine threading control options and move inter op thread pool to session state. (#1841)
Description: Refine threading control options and move inter op thread pool to session state.
Added thread_utils.h/cc to centralize the decision around the thread pool size under various conditions.

Motivation and Context
Currently the thread pool size of the parallel executor is hardcoded to 32 for some reason. This PR makes the options to configure the thread pool sizes clearer.
2019-09-18 22:36:23 -07:00
shahasad
6e4e764146
upgraded CSharp test and sample projects to netcoreapp2.1 (#1869) 2019-09-17 21:35:04 -07:00
KeDengMS
a1eecd8087
Fix C# build (#1834) 2019-09-13 23:50:06 -07:00
Pranav Sharma
f8c3442880
Part 2 of renaming AllocatorInfo to MemoryInfo. (#1804)
* Mention OrtCreateSessionFromArray in C API doc

* Part 2 of renaming AllocatorInfo to MemoryInfo.

* pr comments

* fix comment
2019-09-12 08:19:29 -07:00
Scott McKay
2e242a4089
Clarify naming of the API involving the RunOptions terminate flag. (#1768)
* Clarify naming of the RunOptions terminate flag.

* Update C# code to use new names.
2019-09-10 08:32:33 +10:00
shahasad
6a5b11756b
Conditionally export execution provider APIs in csharp (#1724) 2019-09-09 11:17:44 -07:00
Pranav Sharma
52fe574fed
Rename OrtAllocatorInfo to OrtMemoryInfo to make it more obvious. (#1758)
* Mention OrtCreateSessionFromArray in C API doc

* Rename OrtAllocatorInfo to OrtMemoryInfo to avoid confusion
2019-09-05 14:20:37 -07:00
Pranav Sharma
ad7ab3d880
Enforce shape validation. (#1716)
* Mention OrtCreateSessionFromArray in C API doc

* Enforce shape validation.

* Update broken models
2019-09-02 20:00:37 -07:00
KeDengMS
c9240f4e93
Implementation of Nuphar execution provider (#881)
* Implement Nuphar execution provider

Nuphar execution provider is a TVM-based compilation provider. It has shown great speedups for RNN models using Scan.
This PR is mainly for a preview of the shared codegen library for other TVM-based providers.

* Fix submodules

* Fix TVM submodule

* Update Nuphar to latest and resolve conflicts

* Remove stale files caused by merge -X theirs

* Revert heap buffer change to not introduce onnxruntime_framework into onnxruntime_perf_test

* Fix bad merge

* Merge from Nuphar

* Fix warning treated as error, revert some unnecessary changes

* Revert some more test changes

* Some more test revert or comments to make review easier
New tests could be added later

* One more revert of unnecessary changes

* More change revert. Test could be added back later.
2019-09-01 23:01:47 -07:00
shahasad
833e18345d
Publish perf tool with nightly build (#1728) 2019-08-30 11:25:55 -07:00
shahasad
f25847bccd
More fixes on the NuGet CPU CI pipeline (#1688)
- Fix the Windows end-to-end test in NuGet CI
- Skip the TestModelSerialization, because it is failing on Linux. Must be fixed before API is released for use. Owner is notified.
2019-08-23 18:13:13 -07:00
Pranav Sharma
4035fe842e
Don't create the default allocator every single time. Rename API accordingly. Expose Session/Run log severity levels. (#1615)
* Mention OrtCreateSessionFromArray in C API doc

* Don't create the default allocator every single time. Rename API accordingly.

* Don't create the default allocator every single time. Rename API accordingly.

* updates...

* updates...

* PR comments

* fix typo in license header

* fix build
2019-08-23 10:33:20 -07:00
shahasad
a818740d91
Support Tensor<bool> and Tensor<Int8> in C# API. Support Tensor<string> as input. Fix a bug in the InferenceSession Run() with RunOptions (#1671)
- Support Tensor&lt;bool&gt; and Tensor&lt;Int8&gt; as input/output in the C# API
- Support Tensor&lt;string&gt; as input in the C# API
- Fix a bug in InferenceSession.Run() -- RunOptions was not passed into the native call
2019-08-22 10:14:50 -07:00
Changming Sun
224dde7ef1
Allow user disable multiple threading (#1647) 2019-08-19 18:12:39 -07:00
shahasad
c9eb13a638
Copy System.Numerics.Tensors sources from dotnet/corefx into onnxruntime (#1605)
Copy System.Numerics.Tensors sources from dotnet/corefx into onnxruntime
2019-08-15 17:28:47 -07:00
Pranav Sharma
8d12ce45cf
Use a friendly enum for graph optimization level. (#1586)
* Mention OrtCreateSessionFromArray in C API doc

* review changes

* use enum for graph optimization level

* Use explicit values for enums

* updates...

* Add friendly enum for graph optimization levels in C, C# and Python APIs.

* Fix linux build

* Fix build breakage due to master merge

* PR comments
2019-08-14 17:12:08 -07:00
shahasad
a6a5acedda
Cleanup csharp API SessionOptions and RunOptions to be consistent with other APIs (#1570)
- Updated SessionOptions API to use properties instead of setter/getter methods. 
- Added missing APIs. 
- Added RunOptions.
2019-08-14 12:02:02 -07:00
pulkittomar
a50a63aa9e Serialize optimized onnx model (#1470)
* Model serialization

* Removed duplicate symbol

* Minor update

* Review comments

* add tests

* Model serialization

* Removed duplicate symbol

* Minor update

* Merged PR 1106437: Model Serialization in onnxruntime

* Review comments

* Merged PR 1107226: Review comments

Review comments

* add tests

* Fixed merge conflict

* Correct python tests

* InferenceSession Refeed Test

* Replace use of widechar const literal-L

* Fixed failing tests

* Updated comment

* Removed unnecessary session options

* Spell check on comments

* Do not serialize when level 3 optimization specified

* Updated error logs

* Changed log severity to WARN
2019-08-12 18:43:40 -07:00
Pranav Sharma
44ab301586
More C API changes. (#1519)
* Mention OrtCreateSessionFromArray in C API doc

* Cleanup a few inconsistencies in the C API.

* updates

* More updates
2019-07-29 18:35:28 -07:00
Hariharan Seshadri
6df4bc2ebe
Update scripts to access pipeline variables correctly (#1499)
* Update scripts to access IsReleaseBuild pipeline variable correctly

* Correct access of PACKAGENAME pipeline variable

* Fix Linux CUDA 10 package tests

* Enable C# GPU test

* Update
2019-07-25 15:30:32 -07:00
shahasad
768ced703c
Expose provider factory C API, especially for CUDA users (#1461)
Exposed provider factory C API, for cpu and cuda providers, into the published packages.
2019-07-22 19:03:06 -07:00
Pranav Sharma
4cbc6e1cf5
Validate input shapes. (#1352)
* Validate input shapes.

* Cache some input def metadata

* Make some methods const and check for negative values of dims instead of just -1.

* Fix shape inferencing test.

* Fix testLabelEncoder test

* Fix more tests

* Fix more tests

* Use size_t for loop variable
2019-07-19 13:42:34 -07:00
jignparm
57225cd4ee
Add C++ API test for NuGet package (#1364) 2019-07-09 13:51:51 -07:00
xkszltl
98ea675e40 Fix typo: op[s]iops -> op[t]ions. (#1329)
Resolve https://github.com/microsoft/onnxruntime/issues/1322
2019-07-01 21:25:38 -07:00
Ryan Hill
c8db2d507e
Actually add the C++ headers to the NuGet packages (#1267) 2019-06-21 14:36:50 -07:00
jignparm
d3e5474c1d
Refactor CI pipelines - add GPU NuGet pipelines and ESRP code signing steps (#1247)
* Simplify linux gpu pipeline

* Refactor win-gpu-ci-pipeline.yml

* Set cuda environment variables for testing and version

* Remove variables from starter script

* minor fix

* Add GPU Nuget pipeline

* Set DisableContribOps environment variable for Linux package tests

* Add ESRP tasks

* Add ESRP signing templates

* Test out hardcoded value of ESRP

* Test out hardcoded value of ESRP

* Test out hardcoded value of ESRP

* Test out hardcoded value of ESRP

* test variable expansion

* test variable expansion

* test variable expansion

* test variable expansion

* test variable expansion

* test variable expansion

* test out variable expansion

* test variable expansion

* test variable expansion

* test variable expansion

* test variable expansion

* test variable expansion

* test variable expansion

* test variable expansion

* test variable expansion

* update cpu pipeline to conditionally esrp sign

* Set C# GPU tests to run only if env var is set

* Refactor for easy parameter passing

* refactored esrp templates

* remove variables from template

* Add packaging variables back to pipelines

* update C# for cuda 10

* Merge vars ana parameters for gpu pipeline

* remove vars from mklml pipeline

* display envvars on terminal

* Clean up C# cuda tests, and upgrade to Cuda10

* Introduce CUDNN_PATH pipeline variable

* YAML variables are always uppercased (not true with classic)

* Update C# GPU test to be more meaningful

* remove macos from gpu tests

* remove debugging info for DisableContribOps option

* Remove DisableContrib ops parameters -- use variables only

* Fix typo from = to -

* remove debug steps

* fix typo

* remove unused variable TESTONGPU from some templates

* clean up CUDA env setup scripts

* Remove CUDNN_PATH from setup_env_cuda.bat
2019-06-20 19:41:30 -07:00
Zhang Lei
23838d9c2a Add enable/disable mem pattern api for python and csharp. (#1227) 2019-06-19 11:17:21 -07:00
jignparm
08731589c9
Refactor CI pipelines, and add YAML NuGet package generation pipelines ( for CPU, MKLML, NoContribOps) (#1223)
* Initial check in

* Add win x86

* minor update to x86

* update win-ci

* update win-ci

* update win-x86ci

* add linux and mac templates

* add nuget pipelines and test templates

* remove buildConfig

* add compliance template

* fix minor typos

* update pool for macos

* update mac agent pool

* update macos pool

* update agent pools for tests

* turn off debug build for testing

* some modifications to packaging scripts

* change ordering of compliance tasks

* Add mklml pipeline

* Add packagename variable to mklml pipeline

* remove unrequired dependent jobs from mklml pipeline

* Update build command for macOS legs in mklml and cpu pipeline

* Set vcvars to true

* Add no contrib ops pipeline

* Add no-contrib-ops pipeline

* set vcvars to true for package tests

* remove repetition in nuget templates

* get buildarch correct

* get name of test template correct

* remove steps from test_all_os.yml

* add parameters to test_all_os.yml

* Need jobs, not steps

* set envars for disablecontrib ops

* add cleanup tasks and CG to package tests

* fix path to cleanup script for macos

* remove buildDirectory -- not needed

* remove fp16tiny_yolov2 model from nocontribops tests

* remove debugging info

* fix individual linux pipelines to use correct template

* remove unneeded bak_latest2

* increase timeout to 120 to allow for variance

* turn off code coverage report
2019-06-14 14:51:03 -07:00
Ryan Hill
3c3186c761
Convert more C APIs to return OrtStatus (#1194)
* Change SessionOptions APIs to always return a status, for consistency and ease of use (a couple returned 0 or -1 for success/failure)
2019-06-10 18:36:04 -07:00
Ryan Hill
b68bb51dd0
Change SessionOptions APIs to always return a status (#1171)
* Change SessionOptions APIs to always return a status, for consistency and ease of use (a couple returned 0 or -1 for success/failure)
2019-06-06 13:24:24 -07:00
Torkel
10ea77a3d1 add details aboud adding execution providers in the C api to comments and docs (i.e. need OrtSessionOptionsAppendExecutionProvider_CUDA to get CUDA) 2019-06-02 17:38:36 -07:00
jignparm
2cf56639ed
Minor update to NuGet package tests -- allow model download in separate step (#1115)
* Update docker scripts to not fetch model data

* Update related files
2019-05-28 03:01:10 -07:00
Ryan Hill
9129a652c5
Ryanunderhill/cxx api2 (#1091)
More C++ API improvements and cleanup
Add templates to tensor creation
Add run method that allows preallocated outputs
Simplify CreateTensor<T> to multiply by sizeof(T)
Convert io_types code
Optimize away vector copies in Session::Run
2019-05-24 11:15:51 -07:00
jignparm
9673f3d494
Jignparm/minor update linux test (#1074)
* Minor update for pipeline tests

* uncomment data download
2019-05-21 19:32:29 -07:00
jignparm
32da12491d
x86 support for C# API (#962)
* Refactor C# to handle x86

* update run script

* Add Native win x86 tests

* Add native x86 tests for Linux

* Update linux tests scripts to control which tests are run

* update linux image name for x86 to prevent using cached image

* update to not run python unit tests unless pybind is specified

* remove --build_wheel as a core common arg. Python cannot run on x86 build

* update OrtGetNumOfDimensions to OrtGetDimensionsCount in rest of C#
2019-05-20 15:48:14 -07:00
Ryan Hill
3a32b0eb99
Change function kernels to use CustomOp APIs (#1020)
* Change function signature
* Convert compute to use custom op style APIs
* Remove dead CustomOp function
* Use CustomOp API in TensorRT EP
* Switch to new API in ngraph
2019-05-20 14:57:43 -07:00
jignparm
11069765dc
Fix C-API sample (causing internal build failure) (#1047) 2019-05-16 15:49:39 -07:00
Ryan Hill
3408494407
More C++ API improvements and conversions (#998)
* More C++ API improvements and conversions
* Mark more constructors as explicit
* Fix CSharp function name changes
* Change more test cases to use C++ API
2019-05-13 13:56:54 -07:00
Ryan Hill
f73ce305e9
C++ wrapper for ABI (#958) 2019-05-03 19:32:46 -07:00
jignparm
861b9fda45
Add link to build within Nuget package (#926)
* Add link to build within Nuget package

* Update buildID to build uri

* add url prefix to build id
2019-04-27 13:41:20 -07:00
Hariharan Seshadri
06e0f7e3e7
Minor changes to support inclusion of x86 bits in the Nuget packaging pipeline (#916)
* initial commit

* More changes

* More changes

* Adding stuff back to the targets xml

* More changes v3

* More changes v4

* More changes v5

* More changes v6

* More changes v7

* More changes v9

* Disable CSharp tests for now

* More changes

* Revert file to same status

* Update props file for x86

* Change to usage of TargetArchitecture instead of PlatformTarget

* Update targets.xml

* Minor formatting nit fix

* Update based on PR comments
2019-04-27 00:41:26 -07:00
Hariharan Seshadri
9d89b23d81
BatchNorm CPU does not support non-spatial cases - explicitly handle such cases (#890)
* BatchNorm CPU does not support non-spatial cases

* skip test in c#

* Update comments
2019-04-23 21:37:21 -07:00
Changming Sun
11806529d0
Update test data (#864)
Add:

1. mxnet_arcface
2. tf_mobilenet_v1_1.0_224
3. tf_mobilenet_v2_1.0_224
4. tf_mobilenet_v2_1.4_224
5. tf_inception_v2
2019-04-23 13:24:24 -07:00
jignparm
b2268a6378
removing specific target framework for c-api test (#860) 2019-04-18 23:58:18 -07:00
Pranav Sharma
07a4ecbddb
Disable tests for certain models (Cherry pick from 0.3.1) (#842)
* Disable tests for certain models (Cherry pick from 0.3.1)

* Disable more tests

* More tests

* even more tests

* Fix gpu builds

* Disable L2 transformers

* Env variable to disable contrib ops for csharp tests
2019-04-18 23:57:52 -07:00
Ke Zhang
951c428ee1
Simplify the validation in Run call (#850)
* Simplify Run()

* remove the lock

* remove a file added wrongly.

* fix tests

* fix c# test
2019-04-18 08:38:17 +08:00
jignparm
7775551a6f
Refactor C# and native packaging tests (#825)
* Refactor C# and native packaging tests

* Pass package name into docker

* add libiomp5ml.dll required by mklml.dll
2019-04-16 00:00:07 -07:00
Ashwini Khade
10b113f144
update onnx to bring in quantized ops (#808)
* update onnx + move quantized ops kernels and test to onnx + remove exp ops

* update onnx

* Revert "update onnx"

This reverts commit 533abfc297e75473a74505fb89921ffc05c46a1c.

* add generated csharp test file
2019-04-10 17:20:35 -07:00
jignparm
4e3391ef60
Refactor NuGet to allow arbitrary PackageId names (e.g. Microsoft.ML.OnnxRuntime.MKLML) (#797)
* Refactor NuGet to allow arbitrary namespaces

* Move csharp build to end of cmake
2019-04-09 22:48:00 -07:00
Yufeng Li
cea2a40bf1
Clean up ExecutionProvider in CSharp (#783) 2019-04-05 22:29:54 -07:00
Ryan Hill
fda1d0dce9
Ryanunderhill/ocr custom op (#744)
* Adding a custom op interface to the C API to remove shared library dependency.
* Remove old custom op test
* Rework how custom ops handle inputs/outputs to enable custom op output shape calculation in the compute method
* Add a nicer C++ API for custom ops and switch the tests to use it.
2019-04-05 18:53:20 -07:00
Hariharan Seshadri
ffd9071168
expose graph node name returning non-zero status code (#714)
* Initial commit
2019-04-05 12:50:58 -07:00
Yufeng Li
ef9a4d98cb
Expose parallel execution option in C# API (#767)
* Expose parallel execution option

* delete unnecessary file

* add doc

* update nuget restore to 4.3.0

* resolve comments

* remove unnecessary file

* make git ignore csharp/Directory.Build.props

* fix yaml config for nuget 4.3
2019-04-05 12:05:56 -07:00
Ashwini Khade
2dbce4ebcf
csharp api for graph transformers (#741)
* add graph optimization level to csharp api

* format documentation

* changes per review comments
2019-04-02 17:23:14 -07:00
jignparm
acc8ac58d2
Fix C-API sample. Update Issue template. (#750)
* Fix C-API sample. Update Issue template.

* switch back to signed int

* update from int to size_t
2019-04-02 13:37:50 -07:00
jignparm
73fc91dc59 Fix preFast native rules warnings (#682)
* Address preFast native rules warnings
2019-03-29 00:26:33 -07:00
jignparm
600dc9ecc5
Remove licenseurl and add licensefile, to fix issue 664 (#669)
* Remove licenseurl and add licensefile, to fix issue 664

* Added back LICENSE file, instead of LICENSE.txt
2019-03-21 20:27:57 -07:00
Ashwini Khade
2f1c3028b7
add capi to set graph optimization level (#657)
* add capi to set graph optimization level

* remove 1 unnecessary check + review comment

* plus updates
2019-03-20 17:14:46 -07:00
jignparm
819457dd45
Added netframework test (#658) 2019-03-20 10:37:09 -07:00
jignparm
de9f1ff1ff Add new C function OrtOnnxTypeFromTypeInfo (#585) 2019-03-12 10:11:14 -07:00
jignparm
a79c09388f
Fix GPU package testing for CAPI (#569) 2019-03-07 14:51:18 -08:00
jignparm
4635bcc624 Updating C_API end-to-end test and user samples (#564)
* Updating user sample and C_API unit test

* remove debugging info

* remove precompiled headers

* header file location changed in master...updating
2019-03-07 00:28:15 -08:00
shahasad
a4a459477a
Windows packaging build pipeline for C-api packages (CPU and GPU) (#535)
* added packaging pipeline

* Update win-ci-pipeline.yml for Azure Pipelines

* Update win-ci-pipeline.yml for Azure Pipelines

* Update win-ci-pipeline.yml for Azure Pipelines

* Update win-ci-pipeline.yml for Azure Pipelines

* Update win-ci-pipeline.yml for Azure Pipelines

* Update win-ci-pipeline.yml for Azure Pipelines

* Update win-ci-pipeline.yml for Azure Pipelines

* Update win-ci-pipeline.yml for Azure Pipelines

* put the c-api header file at root instead of under core/session

* Update win-ci-pipeline.yml for Azure Pipelines

* Update win-ci-pipeline.yml for Azure Pipelines

* Update win-ci-pipeline.yml for Azure Pipelines

* parameterize the windows build script

* Update win-package-pipeline.yml for Azure Pipelines

* fixed indenting

* fixed indenting

* fix parameter reference syntax

* try using arch = amd64 for the vcvarsall

* remove duplicate tasks

* use vcvarsall

* some more refactor

* fix typo

* fix typo

* factored out the packaging step into a template

* add x86 build to package pipeline

* use amd64 for vcvars arg

* added gpu pipeline. added msbuild platform param

* fix the msbuild platform

* use amd64 host for x86 build

* use buildarch=x86 for vcvarsall

* remove vcvars from setup steps

* add some logging for PNG lib, and disable fns_candy demo for win32

* set allocator alignment to 32 bit for win32 compiler

* disable parallel execution test for x86

* use 64 bit toolchain for x86 build

* add missing -T flag for toolset

* fix string delimiter in workingdirectory name for package build test step

* fix gpu pipeline

* make io_types test conditional

* use cuda 10 instead of cuda 9.1, similar to the ci build

* try some workaround on the io test

* undo inadvertent local change in build.py, also reenable the io test

* make all test run single threaded

* blacklist few failing tests for x86

* added some log in build.py

* edit build.py to disable parallel test

* add the failed tests into the blacklist for win32

* add tf_pasnet_large to blacklist

* change control flow for build.py onnx tests

* add README, license and TPN to the package

* updated build.py test sequence for parallel executor

* updated onnx test flow as per review comment

* add type checking log in the compare_mlvalue

* fix type cast

* blacklist some failed test as of now

* one more blacklisted test
2019-03-05 18:12:02 -08:00
jignparm
94bd74190a
Revert to cuda 9.1 for package release (#546) 2019-03-05 17:20:51 -08:00
Ke Zhang
47a9abd212
minor change for validating input types (#532)
* minor changes for ValidateInput in the InferenceSession-level API.

* update

* validate real graph inputs and don't validate initializers

* refine the validation logic.

* remove the unnecessary validatation test.

* ensure that the exact input feeds provided from caller.

* fix tests.

* fix c# test failure.

* fix test case

* don't verify the error message which is hard to maintain.

* fix c# test case

* c# test

* c# test

* fix test cases.

* test update
2019-03-05 16:29:39 -08:00
jignparm
1288a8caed
Initial check-in to support non-tensor (sequence/map) types (#527)
* Initial check-in to support non-tensor (sequence/map) types

* Added support for String tensors

* address PR comments
2019-03-05 16:00:40 -08:00
Scott McKay
dfa21af302
Update C API to allow user to enable caching of feeds and fetches info across calls to Run (#522)
* Add ability to enable caching to the C API, and update the internals to pass the feed names and MLValue instances in vectors so the order is deterministic (so cache entry matching works as expected).

* Address PR comment and don't use 'bool'

* Remove meaningless C# test around duplicate input.

We _could_ check input names for duplicates (previously we did this via the usage of unordered_map), but the system will gracefully handle the duplicate anyway (it will just use the last value provided for the input name).

Based on that, I don't think the cost of checking for duplicates is worth it.

* Fix c-style cast in test_run_options.
2019-02-27 13:41:17 -08:00
shahasad
f9bae489bd
cleanup extra header from c api and sanitize C api test (#517)
* cleaned up the additional header in C-api

* ensure test failure surfaces in the build pipeline

* sanitized runtest.bat

* cleanup unneeded headers

* formatting and typos
2019-02-24 21:06:54 -08:00
jignparm
668fcf22d8
Update InferenceTestCapi.cpp (#516)
* Update InferenceTestCapi.cpp

* switch cwd to folder containing model

* Update

* minor logging
2019-02-24 17:30:33 -08:00
jignparm
9d14cbdb1a
Throw friendly error message when Linux distribution has libc version < 2.23 (#493)
* Add check for linux version supporting glibc 2.23 or higher

* Refactor the libc check to SessionOptions

* removed whitespace

* Update SessionOptions.cs
2019-02-21 11:34:44 -08:00
shahasad
ee702bd288
patched the logic of removing the ._*.onnx file, in case it comes in position other than the first in listdir (#484) 2019-02-15 16:08:20 -08:00
jignparm
1f1dcc352f
Add Native C API test from NuGet (#481)
* Initial check-in of Native Capi tests

* Minor update

* Updated with OrtCreateCpuAllocatorInfo working after including cpu_provider_factory.h

* Minor edits

* Minor update
2019-02-15 13:42:24 -08:00
jignparm
f6ffa1280a
Updated endtoendtests to not copy model files (#479) 2019-02-13 17:43:43 -08:00
jignparm
0c4fef9ac2
Jignparm/removemodelcopies (#471)
* Adding initial props file updates to support native projects

* remove unnecessary header files

* removed double backslashes

* only include c api header, drop cxx api

* Remove copying of test models
2019-02-12 13:04:51 -08:00
shahasad
88949485ff
removed MklDnn dependency from C# (#455) 2019-02-11 14:23:09 -08:00