* test
* [gwang] make cmake compile work
* [gwang] enable building APKs
* some build update
* add simple sigmoid test android project and cmake
* add build.py
* refine and remove unused import lib
* address CR comments
* remove unnecessary files
* add README.md
* minor update
* remove
* minor change
* fix ci failure and minor update
* fix typo in project folder
* remove
* remove and minor update
* refine
* minor fix
* fix
* fix typo
* add gradle spotlessApply task to fix CI failure
* fix
* enable spotlessApply in build gradle
* revert some changes
* minor fix
* run spotless apply for format
* address CR comments and fix CI version and format
* refine
* Refine
* address comments
* refine
* refine
* modify
* reformat
* resolve version conflicts
* minor update
* minor update
* address comments
* minor update
Co-authored-by: Guoyu Wang <wanggy@outlook.com>
Add providers for CoreML, ROCm, NNAPI, ArmNN
Adding the structs for OrtCUDAProviderOptions and OrtOpenVINOProviderOptions
Updating NNAPI flags.
Adding the new CoreML flag.
Adding hooks to the build system to tell Java about the new providers.
* Updates for Gradle 7.
* Adding support for OrtThreadingOptions into the Java API.
* Fixing a typo in the JNI code.
* Adding a test for the environment's thread pool.
* Fix cuda test, add comment to failure.
* Updating build.gradle
* Adding Java support for getAvailableProviders, addFreeDimensionOverrideByName, disablePerSessionThreads and getProfilingStartTimeNs.
* Fixing copyright years, running spotless and adding javadoc and an accessor to OrtProvider.
* Renaming OrtSession.getProfilingStartTimeInNs.
* Removing ngraph as it's been deprecated.
* Rearranging checks in onnxruntime_mlas.cmake to pick up Apple Silicon.
On an M1 Macbook Pro clang reports:
$ clang -dumpmachine
arm64-apple-darwin20.1.0
So the regex check needs to look for "arm64" first, as otherwise it
matches 32-bit ARM and you get NEON compilation failures.
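The ordering problem above can be sketched outside CMake. This is an illustrative Java sketch, not the actual onnxruntime_mlas.cmake logic: matching "arm64" after a bare "arm" check misclassifies Apple Silicon, because "arm64-apple-darwin20.1.0" also starts with "arm".

```java
// Illustrative sketch of the ordering bug described above, shown as
// string matching on the clang target triple. "arm64" must be tested
// before "arm", otherwise an Apple Silicon triple falls into the
// 32-bit ARM branch (where the NEON compile flags differ).
public class TripleArch {
    public static String classify(String triple) {
        if (triple.startsWith("arm64") || triple.startsWith("aarch64")) {
            return "aarch64";   // Apple Silicon and other 64-bit ARM
        } else if (triple.startsWith("arm")) {
            return "arm32";     // 32-bit ARM
        }
        return "other";
    }

    public static void main(String[] args) {
        System.out.println(classify("arm64-apple-darwin20.1.0"));
    }
}
```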
* Adding Java side library loading support for Apple Silicon (and other aarch64 architectures).
* Adding Qgemm fix from @tracysh
* Fixes the java packaging on Windows.
* Missed a check in the java platform detector.
* Remove nGraph Execution Provider
Pursuant to nGraph deprecation notice: https://github.com/microsoft/onnxruntime/blob/master/docs/execution_providers/nGraph-ExecutionProvider.md#deprecation-notice
**Deprecation Notice**
| Milestone | Date |
| --- | --- |
| Deprecation Begins | June 1, 2020 |
| Removal Date | December 1, 2020 |
Starting with the OpenVINO™ toolkit 2020.2 release, all of the features
previously available through nGraph have been merged into the OpenVINO™
toolkit. As a result, all the features previously available through
ONNX RT Execution Provider for nGraph have been merged with ONNX RT
Execution Provider for OpenVINO™ toolkit.
Therefore, ONNX RT Execution Provider for **nGraph** will be deprecated
starting June 1, 2020 and will be completely removed on December 1,
2020. Users are recommended to migrate to the ONNX RT Execution Provider
for OpenVINO™ toolkit as the unified solution for all AI inferencing on
Intel® hardware.
* Remove nGraph License info from ThirdPartyNotices.txt
* Use simple Test.Run() for tests without EP exclusions
To be consistent with rest of test code.
* Remove nGraph EP functions from Java code
* Add options for nnapi ep
* Add nnapi flags test
* add comments
* Add flag comments
* Make the flags bitset const
* Fix build break
* Add stub changes to java and c# api
* Fix java related build break
* Fix java build break
* Switch to bit flags instead of bitset
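The switch from a bitset to bit flags can be sketched as follows. This is a hypothetical illustration of the pattern, not the real NNAPI EP constants: each option is a distinct power of two, so options combine with `|` and are queried with `&`, and the combined value crosses the JNI boundary as a plain integer.

```java
// Hypothetical sketch of the bit-flag pattern: each flag is a distinct
// power of two. Names are illustrative, not the real NNAPI EP flags.
public class NnapiFlags {
    public static final int FLAG_USE_NONE     = 0x000;
    public static final int FLAG_USE_FP16     = 0x001;
    public static final int FLAG_USE_NCHW     = 0x002;
    public static final int FLAG_CPU_DISABLED = 0x004;

    // Combined flags are queried with a bitwise AND.
    public static boolean isSet(int flags, int flag) {
        return (flags & flag) != 0;
    }

    public static void main(String[] args) {
        int flags = FLAG_USE_FP16 | FLAG_USE_NCHW;
        System.out.println(isSet(flags, FLAG_USE_FP16));
        System.out.println(isSet(flags, FLAG_CPU_DISABLED));
    }
}
```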
Fixed static IS_ANDROID detection
The final static IS_ANDROID field was causing an "Unsupport arch:aarch64" error, so IS_ANDROID was removed and replaced with an isAndroid() method.
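One plausible reading of the fix above: a `static final` field is computed during class initialization, so any exception thrown by the detection code breaks the whole class; a method defers the detection until it is actually called. A minimal sketch, with an illustrative detection heuristic rather than the real one:

```java
// Sketch: platform detection in a static final initializer runs at class
// load time, and an exception there fails the entire class. Moving it
// into a method means it only runs (and can only fail) when called.
public class PlatformCheck {
    // Before (problematic): static final boolean IS_ANDROID = detect();

    // After: lazy detection. The "java.vendor" heuristic is illustrative.
    public static boolean isAndroid() {
        return System.getProperty("java.vendor", "").contains("Android");
    }

    public static void main(String[] args) {
        System.out.println(isAndroid());
    }
}
```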
* [java] Fixing the buffer semantics.
* Renaming bufferCapacity to bufferRemaining.
* Adding a cast to char* so the pointer arithmetic works on Windows.
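The bufferCapacity-to-bufferRemaining rename above matches standard `java.nio` vocabulary. A minimal sketch of the distinction: `capacity()` is the total size of the buffer, while `remaining()` is what is actually readable or writable between the current position and the limit.

```java
import java.nio.ByteBuffer;

// Sketch of why "remaining" is the right word for buffer arguments:
// once the position advances, capacity() overcounts the usable bytes.
public class BufferWords {
    public static int[] demo() {
        ByteBuffer buf = ByteBuffer.allocateDirect(16);
        buf.position(4);  // pretend 4 bytes were already consumed
        return new int[] { buf.capacity(), buf.remaining() };
    }

    public static void main(String[] args) {
        int[] r = demo();
        System.out.println(r[0] + " " + r[1]);  // 16 12
    }
}
```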
* Add SetLanguageProjection C API and use it in four projections
* static cast enum languageprojection to uint32_t
* resolve comments
* fix typo and remove a line added unintentionally
* revert unnecessary change
* reorder c# api
* add TensorAt and CreateAndRegisterAllocator in C# to keep the same order as the C APIs
* update java API docs
* fix link
* rearrange
* update platforms, use table
* use javadoc.io
* craigacp tested it in java 14
* update link
* fix broken link
* fix testdata link
Modify gradle build so artifactID has _gpu for GPU builds.
Pass USE_CUDA flag on CUDA build
Adjust publishing pipelines to extract POM from a correct path.
Co-Authored-By: @Craigacp
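The artifact naming rule above can be sketched as a one-line decision. This is an illustrative sketch in Java; the real logic lives in the Gradle build, and the helper name here is made up.

```java
// Illustrative sketch: GPU (CUDA) builds publish under an artifactId
// with a "_gpu" suffix; CPU builds keep the plain name.
public class ArtifactName {
    public static String artifactId(boolean useCuda) {
        return useCuda ? "onnxruntime_gpu" : "onnxruntime";
    }

    public static void main(String[] args) {
        System.out.println(artifactId(true));
    }
}
```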
1. Enlarge the read buffer size further, so that our code can run even faster. TODO: apply similar changes to the Python and other language bindings.
2. Add coreml_VGG16_ImageNet to the test exclusion set of x86_32. It is not a new model but previously we didn't run the test against x86_32.
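The effect of enlarging the read buffer in item 1 can be sketched as follows. This is a generic illustration, not the onnxruntime code: a larger buffer means fewer, larger `read()` calls over the same input, and the sizes used here are arbitrary.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Sketch: counting how many read() calls it takes to drain a stream
// with a small vs. a large buffer. Fewer calls means less per-call
// overhead, which is the point of enlarging the read buffer.
public class BigBufferRead {
    public static int countReads(InputStream in, int bufferSize) {
        byte[] buf = new byte[bufferSize];
        int reads = 0;
        try {
            while (in.read(buf) != -1) {
                reads++;
            }
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        return reads;
    }

    public static void main(String[] args) {
        byte[] data = new byte[1 << 20];  // 1 MiB of input
        int small = countReads(new ByteArrayInputStream(data), 4 * 1024);
        int large = countReads(new ByteArrayInputStream(data), 256 * 1024);
        System.out.println(small + " " + large);  // 256 4
    }
}
```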
* Add amd migraphx execution provider to onnx runtime
* rename MiGraphX to MIGraphX
* remove unnecessary changes in migraphx_execution_provider.cc
* add migraphx EP to tests
* add input requirements for the BatchNorm operator
* add support for the ONNX operator PRelu
* update migraphx dockerfile and remove one unused line
* sync submodules with master branch
* fixed a small bug
* fix various bugs to run msft real models correctly
* some code cleanup
* fix python file format
* fixed a code style issue
* add default provider for migraphx execution provider
Co-authored-by: Shucai Xiao <Shucai.Xiao@amd.com>
Fix onnxruntime init failure caused by a wrong path when loading native libraries. On 64-bit OS X systems the arch name was detected as x86, which generated an invalid path for reading the native libraries.
Exception java.lang.UnsatisfiedLinkError: no onnxruntime in java.library.path
at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1867)
at java.lang.Runtime.loadLibrary0(Runtime.java:870)
at java.lang.System.loadLibrary(System.java:1122)
at ai.onnxruntime.OnnxRuntime.load(OnnxRuntime.java:174)
at ai.onnxruntime.OnnxRuntime.init(OnnxRuntime.java:81)
at ai.onnxruntime.OrtEnvironment.<clinit>(OrtEnvironment.java:24)
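The bug behind the stack trace above can be sketched as an ordering problem in arch detection. This is an illustrative sketch, not the actual loader code, and the helper and path names are made up: if the check for the "x86" prefix runs before the check for "x86_64", a 64-bit OS X JVM (where `os.arch` is "x86_64") is classified as 32-bit and the native-library path comes out wrong.

```java
// Sketch: check the longer, more specific arch name first, otherwise
// "x86_64" matches the bare "x86" prefix and the wrong directory is
// used when building the native library path.
public class LibPath {
    public static String archDir(String osArch) {
        if (osArch.equals("x86_64") || osArch.equals("amd64")) {
            return "x64";
        } else if (osArch.startsWith("x86")) {
            return "x86";
        }
        return osArch;
    }

    public static void main(String[] args) {
        // Hypothetical path layout, for illustration only.
        System.out.println("/native/osx/" + archDir("x86_64") + "/libonnxruntime.dylib");
    }
}
```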
* Initial update of readme
* Readme updates
* Review of consolidated README (#3930)
* Proposed updates for readme (#3953)
I found some of the information was duplicated within the doc, so I attempted to streamline it
* Fix links
* More updates
- fix build instructions
- nodejs doc reorganization
- roadmap update
- version fixes
* Update ORT Server build instructions
* More doc cleanup
* fix python dev notes name
* Update nodejs and some links
* sync eigen version back to master
* Minor fixes
* add nodejs to sample table of contents
* Update README.md
* Update README.md
* Update README.md
* Update README.md
* Update README.md
* Update README.md
* address PR feedback
* address PR feedback
* nodejs build instruction
* Update Java instructions to include gradle
* Roadmap refresh
Reformat some data, fix link, minor rewording
* Clarify Visual C++ runtime req
Co-authored-by: Nat Kershaw (MSFT) <nakersha@microsoft.com>
Co-authored-by: Prasanth Pulavarthi <prasantp@microsoft.com>
Co-authored-by: manashgoswami <magoswam@microsoft.com>
* [java] - adding a cuda enabled test.
* Adding --build_java to the windows gpu ci pipeline.
* Removing a stray line from the unit tests that always enabled CUDA for Java.
Detect os and arch and move the artifacts to a new folder.
Remove unnecessary jars so we can focus on those we publish.
Add signing
Make signature simpler.
Fix indent.
Halt on 32-bit arch.
Credits: @Craigacp
* java - adding support for custom op libraries.
* Adding support for RunOptions and additional methods for SessionOptions and OrtSession.
As a result OrtEnvironment.LoggingLevel moved to be a top level enum
called OrtLoggingLevel.
* java - adding unit tests for RunOptions and SessionOptions.
* java - removing unused releaseNamesHandle method
* java - add test for custom op library.
* java - adding log verbosity methods, and tests for the same.
* java - fixes for custom op loading test on Windows.
* Cleanup after rebase on master.