Mirror of https://github.com/saymrwulf/onnxruntime.git (synced 2026-05-16 21:00:14 +00:00)
Integrate TensorRT 8.5

- Update the TensorRT EP to support TensorRT 8.5.
- Update the relevant CI pipelines.
- Disable known unsupported ops for TensorRT.
- Make the test timeout configurable. We observed more than [20 hours](https://aiinfra.visualstudio.com/Lotus/_build/results?buildId=256729&view=logs&j=71ce39d8-054f-502a-dcd0-e89fa9931f40) of unit-test runtime with TensorRT 8.5 in the package pipelines. Because we can't use a placeholder to significantly reduce testing time in the package pipelines (the C API application test would deadlock), we only run the subsets of model tests and unit tests that are related to TRT (a new build flag, --test_all_timeout, is added and set to 72000 seconds by the package pipelines). Note that we still run all tests in the TensorRT CI pipelines to keep full test coverage.
- Include https://github.com/microsoft/onnxruntime/pull/13918 to fix an onnx-tensorrt compile error.

Co-authored-by: George Wu <jywu@microsoft.com>
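The new timeout flag described above would be passed to onnxruntime's build script roughly as follows. This is a sketch: only `--test_all_timeout 72000` comes from the commit message; the other flags and the build directory are illustrative assumptions, not taken from this change.

```shell
# Hedged sketch of a package-pipeline build invocation with the new flag.
# --test_all_timeout 72000 (seconds) is the value the commit says the
# package pipelines use; the remaining flags are assumed for illustration.
python tools/ci_build/build.py \
  --build_dir build \
  --config Release \
  --use_tensorrt \
  --test_all_timeout 72000
```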
6 lines
472 B
Batchfile
REM Copyright (c) Microsoft Corporation. All rights reserved.
REM Licensed under the MIT License.
REM Put the TensorRT 8.5, CUDA, CUPTI, and MSBuild directories on PATH for the build.
set PATH=C:\local\TensorRT-8.5.1.7.Windows10.x86_64.cuda-11.8.cudnn8.6\lib;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\extras\CUPTI\lib64;C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\MSBuild\Current\Bin;%PATH%
REM Disable the Gradle daemon so CI builds do not leave background processes behind.
set GRADLE_OPTS=-Dorg.gradle.daemon=false
REM Defer loading of CUDA kernel modules until first use, reducing startup time and memory.
set CUDA_MODULE_LOADING=LAZY