onnxruntime/csharp/sample/InferenceSample
Dmitri Smirnov bd4d011142
[C#] Rename unreleased API, add utilities (#16806)
### Description
1. Rename `OrtValue.FillStringTensorElement` to `StringTensorSetElementAt`.
To the API user, this conceptually sets the string at an offset in the
tensor, which is roughly equivalent to `List<string> list ... list[index] = "value"`.
2. While working on new inference examples, I noticed that I am still
inclined to use `DenseTensor` for N-D indexing. Added `GetStrides()` and
`GetIndex()` from strides for long dims, so the user can obtain strides
and translate N-D indices into a flat index to operate directly on the
native `OrtValue` buffers. Expose these functions to the user.
3. Make sure we generate docs for C# public static functions.
2023-08-02 10:06:42 -07:00
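The stride utilities described in item 2 can be sketched as below. This is a standalone illustration of the row-major stride computation and N-D-to-flat index translation those helpers perform; the class name `StrideSketch` and the exact signatures here are illustrative assumptions, not the shipped Microsoft.ML.OnnxRuntime API.

```csharp
using System;

// Hypothetical standalone sketch (not the shipped API) of the stride helpers
// added in #16806 for operating directly on native OrtValue buffers.
static class StrideSketch
{
    // Row-major strides: strides[i] is the product of dims[i+1..].
    public static long[] GetStrides(long[] dims)
    {
        var strides = new long[dims.Length];
        long running = 1;
        for (int i = dims.Length - 1; i >= 0; i--)
        {
            strides[i] = running;
            running *= dims[i];
        }
        return strides;
    }

    // Translate an N-D index into a flat offset into the underlying buffer.
    public static long GetIndex(long[] strides, long[] indices)
    {
        long flat = 0;
        for (int i = 0; i < strides.Length; i++)
            flat += strides[i] * indices[i];
        return flat;
    }

    public static void Main()
    {
        var dims = new long[] { 2, 3, 4 };
        var strides = GetStrides(dims);                        // { 12, 4, 1 }
        long flat = GetIndex(strides, new long[] { 1, 2, 3 }); // 1*12 + 2*4 + 3*1 = 23
        Console.WriteLine(flat);
    }
}
```

With strides in hand, element `[i, j, k]` of a tensor maps to offset `i*strides[0] + j*strides[1] + k*strides[2]`, so the native buffer can be addressed without wrapping it in a `DenseTensor`.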

To test the iOS or Android samples, a native build of ONNX Runtime is required and must be in a specific location.

Only the native build for the platform you are testing on is required. For example, if you're testing on an arm64 Android device, you only need the libonnxruntime.so for arm64-v8a. The version of the native build should match the checked-out version of the ONNX Runtime repository you're currently using as closely as possible; otherwise, mismatches with the native entry points are possible and could cause crashes.

To acquire the native build you can:

  • build it yourself
    • Android build instructions
    • iOS build instructions
  • extract it from the Microsoft.ML.OnnxRuntime NuGet package using NuGetPackageExplorer
    • release version is here
    • integration test version is here
      • this is frequently updated and should work if you're currently using the master branch of ONNX Runtime
  • or if you have access to the internal packaging pipelines
    • the Zip-Nuget-Java-Nodejs Packaging Pipeline produces the native package as an artifact under drop-signed-nuget-CPU
      • run a build for your current branch in the pipeline to ensure the native build matches exactly

For iOS the native build should be at one or more of:

  • \build\iOS\iphoneos\Release\Release-iphoneos\onnxruntime.framework for an iOS device
  • \build\iOS\iphonesimulator\Release\Release-iphonesimulator\onnxruntime.framework for an iOS simulator

For Android the native build should be at one or more of:

  • \build\Android\arm64-v8a\Release\libonnxruntime.so for a 64-bit ARM device
  • \build\Android\armeabi-v7a\Release\libonnxruntime.so for a 32-bit ARM device
  • \build\Android\x86\Release\libonnxruntime.so for an x86 Android emulator
  • \build\Android\x86_64\Release\libonnxruntime.so for an x86_64 Android emulator