* link to folder instead of READMEs inside folder (#3938)
otherwise it is hard to find the source code
* [Node.js binding] fix linux build (#3927)
* [Node.js binding] add build flag for node.js binding (#3948)
* [Nodejs binding] create a new pipeline to generate signed binaries (#4104)
* add yml files
* update pipeline
* fix yaml syntax
* yaml pop BuildCSharp
* update yaml
* do not stage codesign summary
* fix build: set pipeline Node.js version to 12.16.3 (#4145)
* [Node.js binding] upgrade node-addon-api to 3.0 (#4148)
* [Node.js binding] add linux and mac package (#4157)
* try mac pipeline
* fix path separator
* copy prebuilds folder
* split esrp yaml for win/mac
* disable mac signing temporarily
* add linux
* fix indent
* add nodetool in linux
* add nodetool in win-ci-2019
* replace linux build by custom docker scripts
* use manylinux as Node 12.16 does not work on CentOS 6
* try ubuntu
* loosen timeout for test case with multiple run calls
* add script to support update nodejs binding version (#4164)
* [java] Adds a CUDA test (#3956)
* [java] - adding a CUDA-enabled test.
* Adding --build_java to the windows gpu ci pipeline.
* Removing a stray line from the unit tests that always enabled CUDA for Java.
* Update OnnxRuntime.java for OS X environment. (#3985)
Fixes an onnxruntime init failure caused by a wrong path when loading native libraries. On a 64-bit OS X system, the arch name is detected as x86, which generates an invalid path for the native libraries.
Exception java.lang.UnsatisfiedLinkError: no onnxruntime in java.library.path
at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1867)
at java.lang.Runtime.loadLibrary0(Runtime.java:870)
at java.lang.System.loadLibrary(System.java:1122)
at ai.onnxruntime.OnnxRuntime.load(OnnxRuntime.java:174)
at ai.onnxruntime.OnnxRuntime.init(OnnxRuntime.java:81)
at ai.onnxruntime.OrtEnvironment.<clinit>(OrtEnvironment.java:24)
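The fix above hinges on how the JVM reports the CPU architecture. A minimal sketch of normalizing `os.arch` before building a native-library path; the class name and mapping here are hypothetical illustrations, not onnxruntime's actual loader code:

```java
// Hypothetical sketch: normalize the JVM-reported arch before building
// the native library path. On 64-bit JVMs "os.arch" is usually "amd64"
// or "x86_64"; mis-mapping it to "x86" produces an invalid load path.
public class ArchProbe {
    static String normalizeArch(String osArch) {
        String arch = osArch.toLowerCase();
        if (arch.equals("amd64") || arch.equals("x86_64")) {
            return "x86_64";
        }
        if (arch.equals("x86") || arch.equals("i386") || arch.equals("i686")) {
            return "x86";
        }
        return arch; // e.g. "aarch64" passes through unchanged
    }

    public static void main(String[] args) {
        // Prints the normalized arch for the current JVM.
        System.out.println(normalizeArch(System.getProperty("os.arch")));
    }
}
```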
* Create Java publishing pipeline (#3944)
Create CPU and GPU Java publishing pipelines. Final jars are tested on all platforms; however, signing and publishing to Maven are manual steps.
* Change group id to com.microsoft.onnxruntime per requirements.
* Java GPU artifact naming (#4179)
Modify the Gradle build so the artifactId has a _gpu suffix for GPU builds.
Pass the USE_CUDA flag on CUDA builds.
Adjust the publishing pipelines to extract the POM from the correct path.
Co-Authored-By: @Craigacp
* bump up ORT version to 1.3.1 (#4181)
* move back to toolset 14.16 to possibly work around nvcc bug (#4180)
* Symbolic shape inference exit on models without onnx opset used (#4090)
* Symbolic shape inference exit on models without onnx opset used
* Temporary fix for ConvTranspose with symbolic input dims
Co-authored-by: Changming Sun <me@sunchangming.com>
* Fix Nuphar test failure
* Enlarge the read buffer size in C#/Java test code (#4150)
1. Enlarge the read buffer size further so that our code can run even faster. TODO: apply similar changes to the Python and other language bindings.
2. Add coreml_VGG16_ImageNet to the test exclusion set of x86_32. It is not a new model, but previously we didn't run the test against x86_32.
* Temporarily disable windows static analysis CI job
* skip model coreml_Imputer-LogisticRegression_sklearn_load_breast_cancer
* Delete unused variable
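The buffer-size change in item 1 above can be sketched as follows; the `ModelReader` class and the 1 MB buffer size are illustrative assumptions, not the actual values used in the C#/Java test code:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class ModelReader {
    // Read an entire stream using a large buffer (1 MB here) so fewer
    // read() calls are needed; the size is an illustrative assumption.
    static byte[] readAll(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buffer = new byte[1024 * 1024];
        int n;
        while ((n = in.read(buffer)) != -1) {
            out.write(buffer, 0, n);
        }
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] data = readAll(new ByteArrayInputStream("hello".getBytes()));
        System.out.println(data.length); // prints 5
    }
}
```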
Co-authored-by: Prasanth Pulavarthi <prasantp@microsoft.com>
Co-authored-by: Yulong Wang <yulongw@microsoft.com>
Co-authored-by: Adam Pocock <adam.pocock@oracle.com>
Co-authored-by: jji2019 <49252772+jji2019@users.noreply.github.com>
Co-authored-by: Dmitri Smirnov <yuslepukhin@users.noreply.github.com>
Co-authored-by: Dmitri Smirnov <dmitrism@microsoft.com>
Co-authored-by: George Wu <jywu@microsoft.com>
Co-authored-by: KeDengMS <kedeng@microsoft.com>
Co-authored-by: Changming Sun <me@sunchangming.com>
Co-authored-by: Changming Sun <chasun@microsoft.com>
1. Reflect int8 GEMV improvements for multi-threading from #2696
2. Add notes on multi-threading control using OpenMP
3. Add samples of running multi-isa AOT, and show int8 GEMM differences between AVX and AVX2
4. Add rnn_benchmark example to resolve #1993
* [NupharEP] Add parallel schedule to JIT function name
Update Nuphar docker to use Python 3.6 and Ubuntu 18.04
* Update notebook
* Avoid JIT cache file name conflict
* Fixed a bug of missing tvm in python wheel
* Put Nuphar Python scripts into wheel
* Add notebook tutorial
* Some improvements in symbolic shape inference for quantized models
* Update version number to 0.5.0 in preparation for release
* Update README.md to direct to Versioning doc
* Resolve PR comment
* Remove incorrect line generation
* Minor updates to update version script
* Minor comment update
* Additional TPN updates (#403)
* Updated TPN
* Update batch_norm_op_test.cc
* Update ThirdPartyNotices.txt
* Update ThirdPartyNotices.txt
* Update readme with package links
* Update README.md
* Update README.md
* Update README.md
* Merged Ryan and TPN changes into single PR
* minor fix
* added mkldnn to the GPU pipeline; required by the C# library, as it is the default execution provider
* Bump up version number for 0.2.1 release (#420)
* updated nuget package metadata for MS compliance (#66)
* fixed metadata element -- use PackageProjectUrl instead of ProjectUrl (#67)
* Change version to 0.1.5