## ArmNN Execution Provider
ArmNN is an open-source inference engine maintained by Arm and Linaro. Integrating ArmNN as an execution provider (EP) into ONNX Runtime accelerates ONNX model workloads on Armv8 cores.
### Build ArmNN execution provider
For build instructions, please see the BUILD page.
### Using the ArmNN execution provider
#### C/C++
To use ArmNN as an execution provider for inferencing, register it with the inference session as shown below.
```c++
// Requires the internal onnxruntime C++ headers (logging.h, clog_sink.h,
// environment.h, inference_session.h, armnn_execution_provider.h).
std::string log_id = "Foo";
auto logging_manager = std::make_unique<LoggingManager>(
    std::unique_ptr<ISink>{new CLogSink{}},  // log to std::clog
    Severity::kWARNING,                      // default warning level
    false,
    LoggingManager::InstanceType::Default,
    &log_id);

std::unique_ptr<Environment> env;
Environment::Create(std::move(logging_manager), env);

// `so` is a previously configured SessionOptions instance.
InferenceSession session_object{so, *env};
session_object.RegisterExecutionProvider(
    std::make_unique<::onnxruntime::ArmNNExecutionProvider>());
auto status = session_object.Load(model_file_name);  // path to the ONNX model
```
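Note that the execution provider is registered before the model is loaded. Operators that ArmNN does not support are assigned to the default CPU execution provider during graph partitioning, so mixed models still run end to end.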
The C API details are here.
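If you consume the library through the C API instead, the provider can be appended to the session options before the session is created. The sketch below is illustrative only: the factory header path and the second argument of `OrtSessionOptionsAppendExecutionProvider_ArmNN` are assumptions modeled on the other execution provider factories, error handling is elided, and `model.onnx` is a placeholder.

```c++
#include "onnxruntime_c_api.h"
#include "core/providers/armnn/armnn_provider_factory.h"  // assumed header path

int main() {
  const OrtApi* ort = OrtGetApiBase()->GetApi(ORT_API_VERSION);

  OrtEnv* env = nullptr;
  ort->CreateEnv(ORT_LOGGING_LEVEL_WARNING, "Foo", &env);

  OrtSessionOptions* session_options = nullptr;
  ort->CreateSessionOptions(&session_options);

  // Append ArmNN before creating the session. The second argument
  // (use_arena = 1) is assumed, following the other EP factory signatures.
  OrtSessionOptionsAppendExecutionProvider_ArmNN(session_options, 1);

  OrtSession* session = nullptr;
  ort->CreateSession(env, "model.onnx", session_options, &session);  // placeholder path

  // ... bind inputs and run inference ...

  ort->ReleaseSession(session);
  ort->ReleaseSessionOptions(session_options);
  ort->ReleaseEnv(env);
  return 0;
}
```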
### Performance Tuning
For performance tuning, please see the guidance on this page: ONNX Runtime Perf Tuning

When using `onnxruntime_perf_test`, use the flag `-e armnn`.
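For example, an invocation along the lines of `./onnxruntime_perf_test -e armnn <path to ONNX model>` (the model path is a placeholder) benchmarks the model with the ArmNN provider selected.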