Mirror of https://github.com/saymrwulf/onnxruntime.git (synced 2026-05-15 20:50:42 +00:00)
### Description

1. Add Valgrind to the existing EP Perf CI MemTest and parse the ORT-TRT memory-leak details.
   1. The general Valgrind logs and the ORT-TRT-related logs are published as [CI artifacts](https://aiinfra.visualstudio.com/Lotus/_build/results?buildId=334122&view=artifacts&pathAsName=false&type=publishedArtifacts).
   2. Logic:
      1. Run Valgrind with `onnxruntime_perf_test -e tensorrt` and export the log to `valgrind.log`.
      2. Identify whether any `definitely lost` memory leak occurred:
         1. For log paragraphs that report `definitely lost`, check whether they contain the keyword `TensorrtExecutionProvider`.
         2. If so, extract those details to `ort_trt_memleak_detail.log` and return `build failure` to the EP Perf CI.
2. Fix the existing AddressSanitizer setup and sync the squeezenet test case with the latest update from [ort-inference-example](https://github.com/microsoft/onnxruntime-inference-examples/blob/main/c_cxx/squeezenet/main.cpp).
   1. In short: upgrade main.cpp to use `OrtTensorRTProviderOptionsV2`.
3. Reorder the 7-minute MemTest to run ahead of the 9-hour model tests, and enable MemTest by default.
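The leak-parsing logic described above could be sketched roughly as follows. This is a hypothetical illustration, not the CI's actual script: the blank-line paragraph-splitting heuristic and the `check_log` helper (with its default paths) are assumptions.

```python
import sys

def find_ort_trt_leaks(log_text):
    """Return the Valgrind leak records that are both 'definitely lost'
    and attributable to TensorrtExecutionProvider."""
    # Treat blank-line-separated chunks as individual leak records; this
    # paragraph heuristic is an assumption about the log layout.
    records = log_text.split("\n\n")
    return [r for r in records
            if "definitely lost" in r and "TensorrtExecutionProvider" in r]

def check_log(log_path="valgrind.log",
              detail_path="ort_trt_memleak_detail.log"):
    """Write ORT-TRT leak details and exit non-zero when any
    qualifying leak is found, so the CI step reports a build failure."""
    with open(log_path) as f:
        leaks = find_ort_trt_leaks(f.read())
    if leaks:
        with open(detail_path, "w") as out:
            out.write("\n\n".join(leaks))
        sys.exit(1)  # a non-zero exit code fails the CI step
```

A CI step would run `check_log()` after the Valgrind pass; paragraphs that are `definitely lost` but unrelated to `TensorrtExecutionProvider` are ignored, matching the filtering described above.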