mirror of
https://github.com/saymrwulf/onnxruntime.git
synced 2026-05-14 20:48:00 +00:00
### Description

Currently, the CoreML execution provider only supports running models with the CPU-only/ALL compute units. However, CPU_AND_GPU can be much faster in some cases. This PR adds an option to select which hardware CoreML runs on, to boost performance.

### Motivation and Context

---------

Co-authored-by: Edward Chen <18449977+edgchen1@users.noreply.github.com>
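As a hedged sketch of how compute-unit selection might be exposed, the helper below builds a CoreML provider-options entry for an ONNX Runtime session. The `MLComputeUnits` option name and its values (`CPUOnly`, `CPUAndGPU`, `CPUAndNeuralEngine`, `ALL`) follow current ONNX Runtime CoreML EP documentation; the exact option names available at the time of this PR may differ.

```python
def coreml_provider_entry(compute_units: str = "ALL"):
    """Build a (provider_name, options) pair for InferenceSession.

    Assumption: the CoreML EP accepts an "MLComputeUnits" provider option,
    as in current ONNX Runtime docs; names may differ in older releases.
    """
    allowed = {"CPUOnly", "CPUAndGPU", "CPUAndNeuralEngine", "ALL"}
    if compute_units not in allowed:
        raise ValueError(f"unsupported compute units: {compute_units}")
    return ("CoreMLExecutionProvider", {"MLComputeUnits": compute_units})


# Usage (requires onnxruntime on macOS/iOS; shown for illustration only):
# import onnxruntime as ort
# sess = ort.InferenceSession(
#     "model.onnx",
#     providers=[coreml_provider_entry("CPUAndGPU")],
# )
```

Passing the pair through `providers` is the standard way to attach per-provider options to a session; invalid compute-unit names fail fast instead of being silently ignored.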
Files in this directory:

- OnnxruntimeModule.xcodeproj
- OnnxruntimeModule.xcworkspace
- OnnxruntimeModuleTest
- OnnxruntimeJSIHelper.h
- OnnxruntimeJSIHelper.mm
- OnnxruntimeModule.h
- OnnxruntimeModule.mm
- Podfile
- TensorHelper.h
- TensorHelper.mm