Mirror of https://github.com/saymrwulf/onnxruntime.git, synced 2026-05-15 20:50:42 +00:00
### Description

New CI pipelines: [Linux_TRT_Minimal_CUDA_Test_CI](https://dev.azure.com/onnxruntime/onnxruntime/_build?definitionId=230&_a=summary) and [Win_TRT_Minimal_CUDA_Test_CI](https://dev.azure.com/onnxruntime/onnxruntime/_build?definitionId=231).

These pipelines are configured to monitor whether there are any issues building ORT-TRTEP against a minimal CUDA EP:

* The YAML content follows the existing Linux TRT CI YAML, with different build args and cache names.
* The build args follow [[TensorRT EP] Enable a minimal CUDA EP compilation without kernels](https://github.com/microsoft/onnxruntime/pull/19052#issuecomment-1888066851).

### Motivation and Context

Monitor whether users can build ORT-TRTEP with a minimal CUDA EP without any blockers (the build takes ~30 min).
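As a rough sketch, a minimal-CUDA variant of the Linux TRT CI job might look like the fragment below. This is illustrative only: the job name, pool name, variable names, and the exact minimal-CUDA build switch are assumptions, not taken from the actual pipeline YAML — see the linked PR #19052 discussion for the real build args.

```yaml
# Hypothetical sketch of a Linux_TRT_Minimal_CUDA_Test_CI job.
# Pool, variables, and the minimal-CUDA switch are assumed, not verified.
jobs:
- job: Linux_TRT_Minimal_CUDA_Test
  pool: onnxruntime-tensorrt-linux-gpu-pool   # assumed pool name
  timeoutInMinutes: 90                        # the build itself takes ~30 min
  steps:
  - checkout: self
  - script: |
      ./build.sh --config Release \
        --use_tensorrt --tensorrt_home "$TENSORRT_HOME" \
        --use_cuda --cuda_home "$CUDA_HOME" --cudnn_home "$CUDNN_HOME" \
        --cmake_extra_defines onnxruntime_CUDA_MINIMAL=ON   # assumed switch from PR #19052
    displayName: Build ORT-TRTEP with minimal CUDA EP
```

The point of a separate definition (rather than a matrix entry on the existing TRT CI) is that the build args and cache name differ, so cached artifacts from the full-CUDA build are never reused here.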