# C API
# Q: Why have a C API?
Q: Why not just live in the C++ world? Why C?
A: We want to distribute onnxruntime as a DLL that can be called from .NET languages through [P/Invoke](https://docs.microsoft.com/en-us/cpp/dotnet/how-to-call-native-dlls-from-managed-code-using-pinvoke).
A C interface is the only option for that.
Q: Is it only for .NET?
A: No. It is designed for:
1. Creating language bindings for onnxruntime, e.g. C#, Python, Java, ...
2. Getting the benefits of dynamic linking, e.g. solving the diamond dependency problem.
Q: Can I export C++ types and functions across DLL or shared object library (.so) boundaries?
A: You can, but it's not good practice, and we won't do it in this project. The usual alternative is sketched below.
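
For illustration, here is a generic sketch of that alternative: the opaque-handle ("hourglass") pattern. The names are hypothetical and are not part of onnxruntime; the point is that the C++ implementation stays inside the library and only plain C functions cross the boundary.

```c
/* Hypothetical example (not onnxruntime code): a public C header that
 * hides a C++ class behind an opaque handle.                          */
typedef struct Engine Engine;   /* opaque: the layout is never exposed */

#ifdef __cplusplus
extern "C" {          /* suppress C++ name mangling for the exports */
#endif

Engine* Engine_Create(const char* config); /* internally: new EngineImpl(config) */
int     Engine_Run(Engine* e);             /* forwards to a C++ member function  */
void    Engine_Destroy(Engine* e);         /* internally: delete, so the DLL
                                              frees memory it allocated itself   */

#ifdef __cplusplus
}
#endif
```

Because only C functions and an opaque pointer are exported, callers built with a different compiler or C++ runtime (or calling in from .NET via P/Invoke) are insulated from name mangling and C++ ABI differences.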
## What's inside
## How to use it
1. Include [onnxruntime_c_api.h](include/onnxruntime/core/session/onnxruntime_c_api.h).
2. Call ONNXRuntimeInitialize.
3. Create a session: ONNXRuntimeCreateInferenceSession(env, model_uri, nullptr, ...).
4. Create a tensor:
   1) ONNXRuntimeCreateAllocatorInfo
   2) ONNXRuntimeCreateTensorWithDataAsONNXValue
5. Call ONNXRuntimeRunInference. A sketch putting these steps together follows.
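
Below is a minimal end-to-end sketch of the steps above. The type names (ONNXEnv, ONNXSession, ONNXValue, ...), enum constants, and argument lists are assumptions made for readability, and error handling is omitted; the authoritative signatures are in [onnxruntime_c_api.h](include/onnxruntime/core/session/onnxruntime_c_api.h).

```c
/* Sketch only: type names, enum constants, and argument lists below are
 * assumptions for illustration; see onnxruntime_c_api.h for the real API.
 * Each call returns a status that real code must check and release.     */
#include "onnxruntime_c_api.h"

int main(void) {
  /* Step 2: initialize the runtime once per process. */
  ONNXEnv* env = NULL;
  ONNXRuntimeInitialize(ONNXRUNTIME_LOGGING_LEVEL_WARNING, "demo", &env);

  /* Step 3: load a model; NULL means default session options. */
  ONNXSession* session = NULL;
  ONNXRuntimeCreateInferenceSession(env, "model.onnx", NULL, &session);

  /* Step 4a: describe the memory the input buffer lives in (CPU here). */
  ONNXRuntimeAllocatorInfo* allocator_info = NULL;
  ONNXRuntimeCreateAllocatorInfo("Cpu", ONNXRuntimeArenaAllocator, 0,
                                 ONNXRuntimeMemTypeDefault, &allocator_info);

  /* Step 4b: wrap an existing float buffer as a 1x4 tensor (no copy). */
  float data[4] = {1.f, 2.f, 3.f, 4.f};
  size_t shape[2] = {1, 4};
  ONNXValue* input = NULL;
  ONNXRuntimeCreateTensorWithDataAsONNXValue(
      allocator_info, data, sizeof(data), shape, 2,
      ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT, &input);

  /* Step 5: run; the names must match the model's graph inputs/outputs. */
  const char* input_names[]  = {"x"};
  const char* output_names[] = {"y"};
  ONNXValue* output = NULL;
  ONNXRuntimeRunInference(session, NULL,
                          input_names, (const ONNXValue* const*)&input, 1,
                          output_names, 1, &output);

  /* Real code would read the output tensor here and release every handle
   * created above with the corresponding release functions from the header. */
  return 0;
}
```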