Mirror of https://github.com/saymrwulf/onnxruntime.git (synced 2026-05-15 20:50:42 +00:00)
Implement CloudEP for hybrid inferencing. The PR introduces no new APIs; customers can configure session and run options to run inference against an Azure [Triton endpoint](https://learn.microsoft.com/en-us/azure/machine-learning/how-to-deploy-with-triton?tabs=azure-cli%2Cendpoint). A sample configuration in Python:

```python
sess_opt.add_session_config_entry('cloud.endpoint_type', 'triton')
sess_opt.add_session_config_entry('cloud.uri', 'https://cloud.com')
sess_opt.add_session_config_entry('cloud.model_name', 'detection2')
sess_opt.add_session_config_entry('cloud.model_version', '7')  # optional, default '1'
sess_opt.add_session_config_entry('cloud.verbose', '1')  # optional, default '0' (no verbose output)
...
run_opt.add_run_config_entry('use_cloud', '1')  # '0' for local inferencing, '1' for the cloud endpoint
run_opt.add_run_config_entry('cloud.auth_key', '...')
...
sess.run(None, {'input': input_}, run_opt)
```

Co-authored-by: Randy Shuai <rashuai@microsoft.com>
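Since the configuration is passed entirely as string key/value entries, it can be assembled separately and applied in a loop. The helper below is an illustrative sketch, not part of the onnxruntime API; only the `cloud.*` entry names and their defaults come from the commit message above.

```python
def cloud_session_entries(endpoint_type, uri, model_name,
                          model_version="1", verbose="0"):
    """Assemble the CloudEP session-config entries as a dict of string pairs.

    All values must be strings, matching what add_session_config_entry expects.
    Defaults mirror the commit message: model_version '1', verbose '0'.
    """
    return {
        "cloud.endpoint_type": endpoint_type,   # e.g. 'triton'
        "cloud.uri": uri,                       # Azure endpoint URL
        "cloud.model_name": model_name,
        "cloud.model_version": model_version,   # optional, default '1'
        "cloud.verbose": verbose,               # optional, default '0'
    }

entries = cloud_session_entries("triton", "https://cloud.com", "detection2",
                                model_version="7", verbose="1")

# With onnxruntime installed (assumption, not exercised here), apply via:
#   sess_opt = onnxruntime.SessionOptions()
#   for key, value in entries.items():
#       sess_opt.add_session_config_entry(key, value)
```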
Directory listing:

- docker/
- ort_minimal/
- tvm/
- build_cuda_c_api_package.sh
- build_linux_arm64_python_package.sh
- build_yocto.sh
- copy_strip_binary.sh
- create_package.sh
- extract_and_bundle_gpu_package.sh
- java_copy_strip_binary.sh
- java_linux_final_test.sh
- run_build.sh
- run_dockerbuild.sh
- run_python_dockerbuild.sh
- run_python_tests.sh
- test_custom_ops_pytorch_export.sh
- upload_code_coverage_data.sh
- upload_ortsrv_binaries.sh
- yocto_build_toolchain.sh