# MIGraphX Execution Provider
ONNX Runtime's [MIGraphX](https://github.com/ROCmSoftwarePlatform/AMDMIGraphX/) execution provider uses AMD's deep-learning graph optimization engine to accelerate ONNX models on AMD GPUs.
## Build
For build instructions, please see the [BUILD page](../../BUILD.md#AMD-MIGraphX).
## Using the MIGraphX execution provider
### C/C++
The MIGraphX execution provider must be registered with ONNX Runtime to enable it in the inference session.
```cpp
string log_id = "Foo";
auto logging_manager = std::make_unique<LoggingManager>(
    std::unique_ptr<ISink>{new CLogSink{}},
    static_cast<Severity>(lm_info.default_warning_level),
    false,
    LoggingManager::InstanceType::Default,
    &log_id);
Environment::Create(std::move(logging_manager), env);
InferenceSession session_object{so, env};
session_object.RegisterExecutionProvider(std::make_unique<::onnxruntime::MIGraphXExecutionProvider>());
status = session_object.Load(model_file_name);
```
You can check [here](https://github.com/scxiao/ort_test/tree/master/char_rnn) for a complete C/C++ sample program.
The C API details are [here](../C_API.md#c-api).
### Python
When using the Python wheel from an ONNX Runtime build with the MIGraphX execution provider, it is automatically prioritized over the default GPU or CPU execution providers, so there is no need to register the execution provider separately. Python API details are [here](../python/api_summary.rst#api-summary).
You can check [here](https://github.com/scxiao/ort_test/tree/master/python/run_onnx) for a Python script that runs a model on either the CPU or the MIGraphX execution provider.
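The automatic prioritization described above can be sketched as follows. The `pick_provider` helper is purely illustrative (it is not part of the ONNX Runtime API), and the MIGraphX provider only appears in `get_available_providers()` when the wheel was built with MIGraphX support:

```python
def pick_provider(available):
    """Illustrative helper: mirror the preference order described above
    (MIGraphX first, then the default GPU/CPU providers)."""
    for name in ("MIGraphXExecutionProvider", "CUDAExecutionProvider",
                 "CPUExecutionProvider"):
        if name in available:
            return name
    raise RuntimeError("no usable execution provider")

try:
    import onnxruntime as ort
    # With a MIGraphX-enabled wheel, no explicit registration is needed;
    # the session picks MIGraphX up automatically.
    print(pick_provider(ort.get_available_providers()))
except ImportError:
    # onnxruntime is not installed here; demonstrate the ordering only.
    print(pick_provider(["CPUExecutionProvider", "MIGraphXExecutionProvider"]))
```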
## Performance Tuning
For performance tuning, please see the guidance on this page: [ONNX Runtime Perf Tuning](../ONNX_Runtime_Perf_Tuning.md).
When using [onnxruntime_perf_test](../../onnxruntime/test/perftest#onnxruntime-performance-test), use the flag `-e migraphx` to select the MIGraphX execution provider.
## Configuring environment variables
MIGraphX provides the environment variable `ORT_MIGRAPHX_FP16_ENABLE` to enable FP16 mode.
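A minimal shell sketch of enabling FP16 mode. The document does not state the accepted values, so setting the variable to `1` is an assumption:

```shell
# Enable MIGraphX FP16 mode for subsequent ONNX Runtime runs.
# Assumption: "1" enables it; unset the variable to restore the default.
export ORT_MIGRAPHX_FP16_ENABLE=1
echo "$ORT_MIGRAPHX_FP16_ENABLE"
```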