Mirror of https://github.com/saymrwulf/onnxruntime.git
Synced 2026-05-14 20:48:00 +00:00
### Description

For now, the CoreML execution provider only supports running MLModels on the CPU/ALL compute units. However, CPU_GPU can sometimes be much faster. This PR adds an option to select which hardware CoreML runs on, to boost performance.

### Motivation and Context

Co-authored-by: Edward Chen <18449977+edgchen1@users.noreply.github.com>
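As a rough illustration of how such a hardware-selection option is typically surfaced, here is a hedged Python sketch of passing a compute-units choice to the CoreML execution provider via provider options. The option key and value (`"MLComputeUnits"`, `"CPUAndGPU"`) are assumptions for illustration; check the onnxruntime documentation for the exact names in your version.

```python
# Hypothetical provider-options sketch: the key "MLComputeUnits" and the
# value "CPUAndGPU" are assumed names, not confirmed by this PR's text.
providers = [
    ("CoreMLExecutionProvider", {"MLComputeUnits": "CPUAndGPU"}),
    "CPUExecutionProvider",  # fallback when CoreML cannot take a node
]

# On macOS with onnxruntime installed, the session would then be built as:
# import onnxruntime as ort
# session = ort.InferenceSession("model.onnx", providers=providers)
print(providers[0])
```

Listing the CPU execution provider last keeps a fallback for operators the CoreML provider cannot handle.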
Headers in this directory:

- onnxruntime.h
- onnxruntime_training.h
- ort_checkpoint.h
- ort_coreml_execution_provider.h
- ort_custom_op_registration.h
- ort_enums.h
- ort_env.h
- ort_session.h
- ort_training_session.h
- ort_value.h
- ort_xnnpack_execution_provider.h