## ACL Execution Provider
[Arm Compute Library](https://github.com/ARM-software/ComputeLibrary) is an open source inference engine maintained by Arm and Linaro. Integrating ACL as an execution provider (EP) into ONNX Runtime accelerates ONNX model workloads on Armv8 cores.
### Build ACL execution provider
For build instructions, please see the [BUILD page](../../BUILD.md#ARM-Compute-Library).
### Using the ACL execution provider
#### C/C++
To use ACL as an execution provider for inferencing, register it as shown below.
```c++
string log_id = "Foo";
auto logging_manager = std::make_unique<LoggingManager>(
    std::unique_ptr<ISink>{new CLogSink{}},
    static_cast<Severity>(lm_info.default_warning_level),
    false,
    LoggingManager::InstanceType::Default,
    &log_id);
Environment::Create(std::move(logging_manager), env);
InferenceSession session_object{so, env};
session_object.RegisterExecutionProvider(std::make_unique<::onnxruntime::ACLExecutionProvider>());
status = session_object.Load(model_file_name);
```
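Continuing the snippet above, inference can then be run through `InferenceSession::Run`. This is a sketch only: the feed and output names (`"input"`, `"output"`) and the tensor contents are hypothetical and depend on the loaded model.

```c++
// Sketch: input/output names and tensor values depend on the model.
RunOptions run_options;
std::vector<std::string> feed_names{"input"};        // hypothetical input name
std::vector<OrtValue> feeds{/* populate with input tensors */};
std::vector<std::string> output_names{"output"};     // hypothetical output name
std::vector<OrtValue> fetches;

// Executes the model; nodes assigned to the ACL EP run on it,
// with the remainder falling back to the default CPU EP.
status = session_object.Run(run_options, feed_names, feeds, output_names, &fetches);
```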
The C API details are [here](../C_API.md#c-api).
### Performance Tuning
For performance tuning, please see guidance on this page: [ONNX Runtime Perf Tuning](../ONNX_Runtime_Perf_Tuning.md)
When using [onnxruntime_perf_test](../../onnxruntime/test/perftest), pass the `-e acl` flag to select the ACL execution provider.
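A sketch of such an invocation, assuming a built `onnxruntime_perf_test` binary; the model path, result file, and repetition count are placeholders:

```shell
# Benchmark model.onnx on the ACL execution provider,
# repeating inference 100 times and writing timings to perf_results.txt.
./onnxruntime_perf_test -e acl -r 100 model.onnx perf_results.txt
```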