# RKNPU Execution Provider (preview)

The RKNPU DDK is an advanced interface for accessing the Rockchip NPU. The RKNPU Execution Provider enables deep learning inference on the Rockchip NPU via the RKNPU DDK.

## Supported platforms

* RK1808 Linux

*Note: RK3399Pro platform is not supported.*

## Build

For build instructions, please see the [BUILD page](../../BUILD.md#RKNPU).

## Usage

### C/C++

To use RKNPU as an execution provider for inferencing, please register it as shown below.

```c++
// Create a logging manager and the ONNX Runtime environment.
string log_id = "Foo";
auto logging_manager = std::make_unique<LoggingManager>(
    std::unique_ptr<ISink>{new CLogSink{}},
    static_cast<Severity>(lm_info.default_warning_level),
    false,
    LoggingManager::InstanceType::Default,
    &log_id);
Environment::Create(std::move(logging_manager), env);

// Create the session, register the RKNPU execution provider, then load the model.
InferenceSession session_object{so, env};
session_object.RegisterExecutionProvider(std::make_unique<::onnxruntime::RknpuExecutionProvider>());
status = session_object.Load(model_file_name);
```
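
`Load` only reads the model; before inference can run, the session also has to be initialized, which is when the graph is partitioned onto the registered execution providers. The sketch below is a minimal, non-authoritative continuation of the snippet above: it assumes the same internal C++ API, the `Run` overload shown is one of several, the output name is hypothetical, and error handling is omitted.

```c++
// Hedged sketch continuing from the snippet above.
// Initialize() finalizes the session and assigns supported nodes to the RKNPU EP.
status = session_object.Initialize();

// Prepare inputs and outputs; "output" is a hypothetical tensor name used for illustration.
RunOptions run_options;
NameMLValMap feeds;                               // input name -> OrtValue, assumed already populated
std::vector<std::string> output_names{"output"};
std::vector<OrtValue> fetches;

// One of several Run overloads; check inference_session.h for the exact signatures.
status = session_object.Run(run_options, feeds, output_names, &fetches);
```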

The C API details are [here](../C_API.md#c-api).

## Supported Operators

The table below shows the ONNX Ops supported by the RKNPU Execution Provider and the mapping between ONNX Ops and RKNPU Ops.

| **ONNX Ops** | **RKNPU Ops** |
| --- | --- |
| Add | ADD |
| Mul | MULTIPLY |
| Conv | CONV2D |
| QLinearConv | CONV2D |
| Gemm | FULLCONNECT |
| Softmax | SOFTMAX |
| AveragePool | POOL |
| GlobalAveragePool | POOL |
| MaxPool | POOL |
| GlobalMaxPool | POOL |
| LeakyRelu | LEAKY_RELU |
| Concat | CONCAT |
| BatchNormalization | BATCH_NORM |
| Reshape | RESHAPE |
| Flatten | RESHAPE |
| Squeeze | RESHAPE |
| Unsqueeze | RESHAPE |
| Transpose | PERMUTE |
| Relu | RELU |
| Sub | SUBTRACT |
| Clip (0~6) | RELU6 |
| DequantizeLinear | DATACONVERT |
| Clip | CLIP |

## Supported Models

The following models from the ONNX open model zoo are supported using the RKNPU Execution Provider:

### Image Classification

- squeezenet
- mobilenetv2-1.0
- resnet50v1
- resnet50v2
- inception_v2

### Object Detection

- ssd
- yolov3