Commit graph

34 commits

Author SHA1 Message Date
stevenlix
814638cdff
Cherry pick PRs to Rel-1.3.1 (#4198)
* link to folder instead of READMEs inside folder (#3938)

otherwise hard to find the source code

* [Node.js binding] fix linux build (#3927)

* [Node.js binding] add build flag for node.js binding (#3948)

* [Nodejs binding] create a new pipeline to generate signed binaries (#4104)

* add yml files

* update pipeline

* fix yaml syntax

* yaml pop BuildCSharp

* update yaml

* do not stage codesign summary

* fix build: pipeline Node.js version to 12.16.3 (#4145)

* [Node.js binding] upgrade node-addon-api to 3.0 (#4148)

* [Node.js binding] add linux and mac package (#4157)

* try mac pipeline

* fix path separator

* copy prebuilds folder

* split esrp yaml for win/mac

* disable mac signing temporarily

* add linux

* fix indent

* add nodetool in linux

* add nodetool in win-ci-2019

* replace linux build by custom docker scripts

* use manylinux as node 12.16 not working on centos6

* try ubuntu

* loosen timeout for test case - multiple run() calls

* add script to support update nodejs binding version (#4164)

* [java] Adds a CUDA test (#3956)

* [java] - adding a cuda enabled test.

* Adding --build_java to the windows gpu ci pipeline.

* Removing a stray line from the unit tests that always enabled CUDA for Java.

* Update OnnxRuntime.java for OS X environment. (#3985)

Fixes an onnxruntime init failure caused by a wrong path when loading native libraries. On 64-bit OS X systems the arch name is detected as x86, which generates an invalid path for reading the native libraries.

Exception java.lang.UnsatisfiedLinkError: no onnxruntime in java.library.path
	at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1867)
	at java.lang.Runtime.loadLibrary0(Runtime.java:870)
	at java.lang.System.loadLibrary(System.java:1122)
	at ai.onnxruntime.OnnxRuntime.load(OnnxRuntime.java:174)
	at ai.onnxruntime.OnnxRuntime.init(OnnxRuntime.java:81)
	at ai.onnxruntime.OrtEnvironment.<clinit>(OrtEnvironment.java:24)
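The fix above hinges on normalizing the JVM's `os.arch` value before building the native-library path. A minimal sketch of that idea (hypothetical helper, not the actual `OnnxRuntime.java` code; the real loader's naming scheme may differ):

```java
// Hypothetical sketch: map the JVM's os.arch value to the directory name a
// native-library loader would expect, so a 64-bit macOS JVM reporting
// "x86_64" is not misclassified as 32-bit "x86".
public class ArchDetect {
    static String normalizeArch(String osArch) {
        String arch = osArch.toLowerCase();
        if (arch.equals("x86_64") || arch.equals("amd64")) {
            return "x64"; // 64-bit: must not fall through to the "x86" prefix check
        }
        if (arch.startsWith("x86")) {
            return "x86"; // genuine 32-bit x86
        }
        return arch; // other architectures pass through unchanged
    }

    public static void main(String[] args) {
        // A naive startsWith("x86") check would return "x86" here and
        // produce an invalid native-library path on 64-bit OS X.
        System.out.println(normalizeArch("x86_64"));
    }
}
```

The order of the checks is the point: the exact 64-bit matches must run before any `startsWith("x86")` prefix test.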

* Create Java publishing pipeline (#3944)

Create CPU and GPU Java publishing pipelines. Final jars are tested on all platforms. However, signing and publishing to Maven are manual steps.

* Change group id to com.microsoft.onnxruntime per requirements.

* Java GPU artifact naming (#4179)

Modify Gradle build so the artifactId has _gpu for GPU builds.
  Pass the USE_CUDA flag on CUDA builds.
  Adjust publishing pipelines to extract the POM from the correct path.

Co-Authored-By: @Craigacp

* bump up ORT version to 1.3.1 (#4181)

* move back to toolset 14.16 to possibly work around nvcc bug (#4180)

* Symbolic shape inference exit on models without onnx opset used (#4090)

* Symbolic shape inference exit on models without onnx opset used

* Temporary fix for ConvTranspose with symbolic input dims

Co-authored-by: Changming Sun <me@sunchangming.com>

* Fix Nuphar test failure

* Enlarge the read buffer size in C#/Java test code (#4150)

1. Enlarge the read buffer size further, so that our code can run even faster. TODO: need to apply similar changes to the Python and other language bindings.
2. Add coreml_VGG16_ImageNet to the test exclusion set of x86_32. It is not a new model, but previously we didn't run the test on x86_32.

* Temporarily disable windows static analysis CI job

* skip model coreml_Imputer-LogisticRegression_sklearn_load_breast_cancer

* Delete unused variable

Co-authored-by: Prasanth Pulavarthi <prasantp@microsoft.com>
Co-authored-by: Yulong Wang <yulongw@microsoft.com>
Co-authored-by: Adam Pocock <adam.pocock@oracle.com>
Co-authored-by: jji2019 <49252772+jji2019@users.noreply.github.com>
Co-authored-by: Dmitri Smirnov <yuslepukhin@users.noreply.github.com>
Co-authored-by: Dmitri Smirnov <dmitrism@microsoft.com>
Co-authored-by: George Wu <jywu@microsoft.com>
Co-authored-by: KeDengMS <kedeng@microsoft.com>
Co-authored-by: Changming Sun <me@sunchangming.com>
Co-authored-by: Changming Sun <chasun@microsoft.com>
2020-06-12 11:27:02 -07:00
stevenlix
4ea10c9202
bump up ORT version and extend time limit for windows cpu packaging pipelines (#3852) 2020-05-07 14:22:20 -07:00
Xavier Dupré
edec8043d4
Fix python examples in documentation (#3379) 2020-04-01 22:48:32 +02:00
Faith Xu
2e875f4e67
Delete outdated page (#3320) 2020-03-26 18:24:02 -07:00
Yufeng Li
ca2ed17ba7
Bump up version number to 1.2 (#3097) 2020-02-26 17:25:16 -08:00
KeDengMS
71940c0915
Update Nuphar tutorial notebook (#2721)
1. Reflect int8 GEMV improvements for multi-threading from #2696
2. Add notes on multi-threading control using OpenMP
3. Add samples of running multi-isa AOT, and show int8 GEMM differences between AVX and AVX2
4. Add rnn_benchmark example to resolve #1993
2019-12-22 22:42:03 -08:00
Xavier Dupré
7c0235c15a
Propagate documentation modification from rel-1.0.0 (#2713) 2019-12-21 00:25:45 +01:00
KeDengMS
c767e264c5
[NupharEP] update tutorial with GPT-2 (#2677) 2019-12-16 17:57:34 -08:00
Ryan Hill
36eb1771ba
Update version (#2584) 2019-12-08 18:00:12 -08:00
KeDengMS
0f12346d76
[Nuphar EP] fixes for some object detection models (#2581)
Update notebook tutorial with multi-threaded int8 GEMM from #2517
2019-12-07 13:37:00 -08:00
KeDengMS
c1be615c45
[NupharEP] refine parallel schedule control (#2514)
* [NupharEP] Add parallel schedule to JIT function name
Update Nuphar docker to use Python 3.6 and ubuntu 18.04

* Update notebook

* Avoid JIT cache file name conflict
2019-12-02 17:40:51 -08:00
KeDengMS
aa7c79eac9 [NupharEP] Update notebook and docker image (#2416)
Add BERT squad in Nuphar tutorial
Enhance speed comparison readability
2019-11-18 10:38:14 -08:00
Changming Sun
7b11f05a97 Update version number 2019-10-30 08:13:09 -07:00
Faith Xu
303a78c301 Update Python documentation (#2210) 2019-10-21 16:56:31 -07:00
Xavier Dupré
836d22cd4c Update readme.rst for pypi, change documentation style (#1663) 2019-10-19 18:26:34 -07:00
KeDengMS
e361174f78
Add nuphar python scripts to wheel, and notebook tutorial (#1952)
* Fixed a bug of missing tvm in python wheel
* Put Nuphar Python scripts into wheel
* Add notebook tutorial
* Some improvements in symbolic shape inference for quantized models
2019-09-30 10:39:02 -07:00
Xavier Dupré
2ecac41614 update python examples (#1935) 2019-09-26 11:25:59 -07:00
manashgoswami
3d44c55092 Updated docs related to base images (#1753)
* Update README.md

* Update onnx-inference-byoc-gpu-cpu-aks.ipynb

* Update README.md
2019-09-04 10:33:41 -07:00
Hariharan Seshadri
c5f2f0f15b
Upgrade version number for ORT in preparation for release (#1468)
* Update version number to 0.5.0 in preparation for release

* Update README.md to direct to Versioning doc

* Resolve PR comment

* Remove incorrect line generation

* Minor updates to update version script

* Minor comment update
2019-07-23 16:33:06 -07:00
Scott McKay
e3919d3fce
Cleanup naming of test input to use .onnx for models. (#1337)
* Cleanup naming of test input to use .onnx for models.

* Remove file deleted on master
2019-07-04 13:10:29 +10:00
Xavier Dupré
d33dbb23b2
replace onnxmltools by keras-onnx in one example (#1151) 2019-06-07 12:03:46 +02:00
Vinitra Swamy
c7cb0c052d
Add the onnx inference on AKS (Azure ML) notebook from //build (#1071) 2019-05-21 17:39:20 -07:00
Ashwini Khade
90544ed766
bump version number for release (#911)
* bump version number for release

* + review comments
2019-04-26 16:28:16 -07:00
jignparm
9467c5f967
Update version to 0.3.1 (patch release) (#798)
* bump up version number (#752)

* bump up version number

* Minor change to kick off build

* update version to 0.3.1
2019-04-09 14:48:56 -07:00
Pranav Sharma
714d4100bd
Update documentation to include openmp dependency. (#545)
* Update documentation to include openmp dependency.

* Update python docs as well
2019-03-05 22:38:40 -08:00
Randy
4c684a133a
bump up version to 0.3.0 (#536)
* bump up version to 0.3.0

* change to op9 and cuda9.1
2019-03-04 13:41:53 -08:00
Raymond Yang
011a784eaa
Merge back from rel-0.2.1 (#422)
* Addl TPN updates (#403)

* Updated TPN

* Update batch_norm_op_test.cc

* Update ThirdPartyNotices.txt

* Update ThirdPartyNotices.txt

* Update readme with package links

* Update README.md

* Update README.md

* Update README.md

* Merged Ryan and TPN changes into single PR

* minor fix

* added mkldnn to GPU pipeline. Required by C# library as it is the default execution provider

* Bump up version number for 0.2.1 release (#420)
2019-01-31 19:04:33 -08:00
Xavier Dupré
439dbbada9
Adds OnnxTransformer to plug onnxruntime into scikit-learn's pipeline (#389)
Useful for transfer learning
2019-01-29 18:51:24 +01:00
jignparm
571e1e9a6c
Jignparm/updateversion 2.0 (#394)
* Update version to 2.0

* added __init__.py
2019-01-28 21:22:45 -08:00
Xavier Dupré
8c40313e28
Update documentation to reflect the latest changes (#311)
- remove markdown output
- rename intro to index
- use skl2onnx wherever possible instead of onnxmltools
2019-01-11 12:41:42 +01:00
Xavier Dupré
0573952499 Update the documentation, run all examples during the generation of the documentation (replace #89) (#103)
* Minor update in the documentation

* Run examples during the generation of the documentation.
2018-12-05 10:12:25 -08:00
Pranav Sharma
6624dd2778
Rel 0.1.5 (#70)
* updated nuget package metadata for MS compliance (#66)

* fixed metadata element -- use PackageProjectUrl instead of ProjectUrl (#67)

* Change version to 0.1.5
2018-11-30 16:23:47 -08:00
Pranav Sharma
39ebccbc8b
Fix sample example documentation for python pkg (#61) 2018-11-29 17:50:12 -08:00
Pranav Sharma
89618e8f1e Initial bootstrap commit. 2018-11-19 16:48:22 -08:00