## ACL Execution Provider
[Arm Compute Library](https://github.com/ARM-software/ComputeLibrary) is an open source library of optimized compute functions for Arm CPUs and GPUs, maintained by Arm and Linaro. Integrating ACL as an execution provider (EP) into ONNX Runtime accelerates ONNX model workloads on Armv8 cores.
### Build ACL execution provider
For build instructions, please see the [BUILD page](../../BUILD.md#ARM-Compute-Library).
### Using the ACL execution provider
#### C/C++
To use ACL as an execution provider for inferencing, register it with the inference session as shown below.
```
// "so" is a previously configured onnxruntime::SessionOptions instance
InferenceSession session_object{so};
// Register the ACL execution provider before loading the model
session_object.RegisterExecutionProvider(std::make_unique<::onnxruntime::ACLExecutionProvider>());
status = session_object.Load(model_file_name);
```
The C API details are [here](../C_API.md#c-api).
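The C++ registration above also has a C API counterpart. The sketch below is illustrative only and not verified against a specific release: the factory function `OrtSessionOptionsAppendExecutionProvider_ACL`, its `use_arena` parameter, and the header name `acl_provider_factory.h` are assumptions modeled on the other provider factories, and error handling is omitted for brevity. Check the installed headers for the exact signatures.

```c
/* Minimal sketch. Assumptions: OrtSessionOptionsAppendExecutionProvider_ACL,
 * its use_arena parameter, and the acl_provider_factory.h header name. */
#include <onnxruntime_c_api.h>
#include "acl_provider_factory.h"

int main(void) {
  const OrtApi* ort = OrtGetApiBase()->GetApi(ORT_API_VERSION);

  OrtEnv* env = NULL;
  ort->CreateEnv(ORT_LOGGING_LEVEL_WARNING, "acl_example", &env);

  OrtSessionOptions* options = NULL;
  ort->CreateSessionOptions(&options);

  /* Append the ACL execution provider before creating the session;
   * the second argument (1) enables the arena allocator. */
  OrtSessionOptionsAppendExecutionProvider_ACL(options, 1);

  OrtSession* session = NULL;
  ort->CreateSession(env, "model.onnx", options, &session);

  /* ... run inference, then release in reverse order of creation ... */
  ort->ReleaseSession(session);
  ort->ReleaseSessionOptions(options);
  ort->ReleaseEnv(env);
  return 0;
}
```

As with the C++ path, the provider must be appended to the session options before the session is created; providers registered afterward are ignored.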
### Performance Tuning
For performance tuning guidance, see [ONNX Runtime Perf Tuning](../ONNX_Runtime_Perf_Tuning.md).

When using [onnxruntime_perf_test](../../onnxruntime/test/perftest), pass the flag `-e acl` to run with the ACL execution provider.