Implement CloudEP for hybrid inferencing.
The PR introduces no new API; customers can configure session and
run options to run inference against an Azure [Triton
endpoint](https://learn.microsoft.com/en-us/azure/machine-learning/how-to-deploy-with-triton?tabs=azure-cli%2Cendpoint).
A sample configuration in Python looks like:
```python
sess_opt.add_session_config_entry('cloud.endpoint_type', 'triton')
sess_opt.add_session_config_entry('cloud.uri', 'https://cloud.com')
sess_opt.add_session_config_entry('cloud.model_name', 'detection2')
sess_opt.add_session_config_entry('cloud.model_version', '7')  # optional, default '1'
sess_opt.add_session_config_entry('cloud.verbose', '1')  # optional, default '0' (no verbose output)
...
run_opt.add_run_config_entry('use_cloud', '1')  # '0' for local inference, '1' for the cloud endpoint
run_opt.add_run_config_entry('cloud.auth_key', '...')
...
sess.run(None, {'input': input_}, run_opt)
```
Co-authored-by: Randy Shuai <rashuai@microsoft.com>
Integrate TensorRT 8.5
- Update TensorRT EP to support TensorRT 8.5
- Update relevant CI pipelines
- Disable known non-supported ops for TensorRT
- Make timeout configurable.
We observed more than [20
hours](https://aiinfra.visualstudio.com/Lotus/_build/results?buildId=256729&view=logs&j=71ce39d8-054f-502a-dcd0-e89fa9931f40)
of unit-test run time with TensorRT 8.5 in the package pipelines. Because we
can't use the placeholder to significantly reduce testing time in the
package pipelines (the C API application test would deadlock), we only run
the subsets of model tests and unit tests that are related to TRT (a new
build flag, `--test_all_timeout`, was added, and the package pipelines set
it to 72000 seconds). Note that we still run all the tests in the TensorRT
CI pipelines to keep full test coverage.
- Include https://github.com/microsoft/onnxruntime/pull/13918 to fix an
onnx-tensorrt compile error.
Co-authored-by: George Wu <jywu@microsoft.com>
## Description
1. Convert some git submodules to cmake external projects
2. Update nsync from
[1.23.0](https://github.com/google/nsync/releases/tag/1.23.0) to
[1.25.0](https://github.com/google/nsync/releases/tag/1.25.0)
3. Update re2 from 2021-06-01 to 2022-06-01
4. Update wil from an old commit to the 1.0.220914.1 tag
5. Update gtest to a newer commit so that it can optionally leverage
absl/re2 for parsing command line flags.
The following git submodules are deleted:
1. FP16
2. safeint
3. XNNPACK
4. cxxopts
5. dlpack
6. flatbuffers
7. googlebenchmark
8. json
9. mimalloc
10. mp11
11. pthreadpool
More will come.
## Motivation and Context
There are 3 ways of integrating 3rd party C/C++ libraries into ONNX
Runtime:
1. Install them to a system location, then use cmake's find_package
module to locate them.
2. Use git submodules
3. Use cmake's external projects (`ExternalProject_Add`).
When this project was first started, we considered both option 2 and
option 3, and we preferred option 2 because:
1. It's easier to handle authentication. At first this project was not
open source, and it had some other non-public dependencies. With git
submodules, ADO handles authentication smoothly. Otherwise we would need
to manually pass tokens around and be very careful not to expose them in
build logs.
2. At that time, cmake fetched dependencies only after it finished
generating vcxproj files/makefiles, so it was very difficult to keep
cflags consistent. Since cmake 3.11 there is a new command, FetchContent,
which fetches dependencies while generating vcxproj files/makefiles, just
before `add_subdirectory`, so the parent project's variables/settings can
easily be passed to the child projects.
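As a minimal sketch of the FetchContent pattern (the dependency name and
the `NSYNC_ENABLE_TESTS` option below are illustrative, not ORT's exact
cmake code):
```cmake
include(FetchContent)

# Declare where the dependency comes from; nothing is downloaded yet.
FetchContent_Declare(
  nsync
  URL https://github.com/google/nsync/archive/refs/tags/1.25.0.zip
)

# Because FetchContent runs at configure time, variables set here are
# visible to the child project, just like with a plain add_subdirectory.
set(NSYNC_ENABLE_TESTS OFF CACHE BOOL "" FORCE)

# Downloads the source and calls add_subdirectory on it.
FetchContent_MakeAvailable(nsync)
```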
As the project went on, we developed some new concerns:
1. As we started to have more and more EPs and build configs, the number
of submodules grew quickly. For most developers, most ORT submodules are
not relevant; they shouldn't need to download all of them.
2. It is impossible to let two different build configs use two different
versions of the same dependency. For example, right now we have protobuf
3.18.3 in the submodules, so every EP must use that same version.
Whenever we need to upgrade protobuf, we have to coordinate across the
whole team and many external developers. I can't manage it anymore.
3. Some projects want to manage the dependencies in a different way,
either because of their preferences or because of compliance
requirements. For example, some Microsoft teams want to use vcpkg, but
we don't want to force every user of onnxruntime to use vcpkg.
4. Some users want to dynamically link to protobuf, but our build script
only supports static linking.
5. It is hard to handle security vulnerabilities. For example, whenever
protobuf has a security patch, we have a lot of things to do. But if we
allowed people to build ORT with a different version of protobuf without
changing ORT's source code, customers who build ORT from source could
act on such things much more quickly; they would not need to wait for
ORT to make a patch release.
6. Every time we make a release, GitHub also publishes a source zip file
and a source tarball for us, but they are not usable because they are
missing the submodules.
### New features
After this change, users will be able to:
1. Build the dependencies the way they want, then install them
somewhere (for example, /usr or a temp folder).
2. Or download the dependencies with cmake commands from the
dependencies' official websites.
3. Similar to the above, but use your private mirrors to mitigate supply
chain risks.
4. Use different versions of the dependencies, as long as our source
code is compatible with them. For example, you can't use protobuf 3.20.x
yet, because it requires code changes in ONNX Runtime. (See the sketch
after this list for how dependency sources can be redirected.)
5. Only download the things the current build needs.
6. Avoid building the external dependencies again and again in every
build.
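Concretely, standard FetchContent cache variables enable most of this; a
hedged illustration (the path and the dependency name are examples, and
the second variable needs CMake 3.24+):
```cmake
# Build against a local checkout (or private mirror) of a dependency
# instead of downloading it; FETCHCONTENT_SOURCE_DIR_<NAME> is a
# standard CMake cache variable.
set(FETCHCONTENT_SOURCE_DIR_RE2 "/home/me/src/re2" CACHE PATH "" FORCE)

# Skip downloads entirely and use a copy installed to a system location
# (e.g. /usr or a temp folder) via find_package:
set(FETCHCONTENT_TRY_FIND_PACKAGE_MODE ALWAYS)
```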
### Breaking change
The onnxruntime_PREFER_SYSTEM_LIB build option is removed; you can think
of it as now being ON by default. If you don't like the new behavior, you
can set FETCHCONTENT_TRY_FIND_PACKAGE_MODE to NEVER.
Also, if you relied on the onnxruntime_PREFER_SYSTEM_LIB build option,
please be aware that this PR changes the find_package calls from Module
mode to Config mode. For example, in the past, if you had installed
protobuf via apt-get from Ubuntu 20.04's official repo, find_package
could find it and use it. After this PR, it won't, because the protobuf
version provided by Ubuntu 20.04 is too old to support Config mode. This
can be resolved by getting a newer version of protobuf from somewhere
else.
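In cmake terms, the two points above look roughly like this (a sketch,
not ORT's exact cmake code):
```cmake
# Restore the old always-download behavior (CMake 3.24+); in practice
# this is usually passed on the command line as
# -DFETCHCONTENT_TRY_FIND_PACKAGE_MODE=NEVER.
set(FETCHCONTENT_TRY_FIND_PACKAGE_MODE NEVER)

# Config mode: find_package now requires the protobuf-config.cmake that
# protobuf itself installs, rather than CMake's bundled FindProtobuf
# module.
find_package(Protobuf CONFIG REQUIRED)
```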
Patch Protobuf and ONNX's cmake files and enforce the BinSkim check.
This PR overlaps with #13523. I would prefer to get this one merged
first so that we can finish the BinSkim work, and I have tried to make
this PR as small as possible.
1. Update CUDA version from 11.4 to 11.6.
2. Update Manylinux version
3. Upgrade GCC version from 10 to 11 for most x86_64 pipelines. CentOS 7 ARM64 doesn't have GCC 11 yet.
4. Refactor python packaging pipeline:
a. Split Linux GPU build job to two parts, build and test, so that the
build part doesn't need to use a GPU machine
b. Make the Linux GPU build job and Linux CPU build job more similar: share the same bash script and yaml file.
5. Temporarily disable Attention_Mask1D_Fp16_B2_FusedNoPadding because it is causing one of our packaging pipelines to fail. I have created an ADO task for this.
* Add tests for all unary aten ops supported in eager mode
* fixing the PR draft
* fixing the merge
* changing eval to be at compile time
* adding requirements for eager
* 1. adding function to {ops}_out; 2. cleaning the code and adding comments
* editing the code according to code review
Co-authored-by: root <root@AHA-LIRONKESE-1>
* update trt 8.4ga
* trt 8.4 linux ci pipeline
* fix cmake
* placeholder_builder
* trt 8.4 windows pipeline
* gpu package pipeline
* trt 8.4.1.5 , packaging pipeline updates
* python packaging
* ctest timeout
* python packaging test
* bump timeout
* python format
* format
* revert
* newline
* enable trt python tests
* typo
* python format
* disable on windows
Description: Format all Python files under onnxruntime with black and isort.
After checking this in, we can use .git-blame-ignore-revs (e.g. via `git config blame.ignoreRevsFile .git-blame-ignore-revs`) so that git blame ignores the formatting PR.
#11315, #11316
* add c-api test for package
* fix bug for running c-api test for package
* refine run application script
* remove redundant code
* include CUDA test
* Remove testing CUDA EP temporarily
* fix bug
* Code refactor
* try to fix YAML bug
* try to fix YAML bug
* try to fix YAML bug
* fix bug for multiple directories in Pipelines
* fix bug
* add comments and fix bug
* Update c-api-noopenmp-packaging-pipelines.yml
* Remove failOnStandardError flag in Pipelines
* Add android package build settings for full build
Co-authored-by: gwang0000 <62914304+gwang0000@users.noreply.github.com>
Co-authored-by: Scott McKay <skottmckay@gmail.com>
Co-authored-by: Edward Chen <18449977+edgchen1@users.noreply.github.com>
* update base image from 11.4.0 to 11.4.2
* update Linux TRT GPU pipeline to TRT 8.2
* update onnx-tensorrt to 8.2-GA
* disable failing TensorRT 8.2 tests.
* update pad test.
* fix
* update win trt ci pipeline to trt 8.2
* test run with cuda 11.4 and cudnn 8.2
* increase timeout
* revert
* revert
* update packaging pipelines to use trt 8.2
* fix typo
* update trt gpu perf pipeline to trt 8.2
* increase timeout
* delete deprecated ci-perf-pipeline.yml
* bump timeout
* adjust timeout packaging
* add ortmodule and eager mode test
* add ortmodule dependency
* convert between aten ort tensor and ortvalue
* register the EP to ortmodule using ort device information
* remove duplicated test
* remove useless dependency
* handle half precision type for ortmodule outputs
* adjust the tensor conversion python code
Co-authored-by: Cheng Tang <chenta@microsoft.com@orttrainingdev9.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
* add ortmodule and eager mode test
* add ortmodule dependency
* fix eager pipeline
* skip the ortmodule test for windows due to win ci issue
* remove useless win ci change
* add torch
Co-authored-by: Abhishek Jindal <abjindal@microsoft.com>
Add Xamarin support to the ORT nuget packages.
- Update C# code to support Xamarin builds for iOS and Android
- refactor some things to split out common code
- include iOS and Android ORT native shared library in native nuget package
* implement cuda provider
* define profiler common
* call start after register
* add memcpy event
* add cuda correlation
* format code
* add cupti to test path
* switch to CUpti_ActivityKernel3
* reset cupti path
* fix test case
* fix trt pipeline
* add namespace
* format code
* exclude training from testing
* remove mutex
* Update to CUDA11.4 and TensorRT-8.0.3.4
* update trt pool, remove cudnn from setup_env_gpu.bat
* revert pool
* test gpu package pipeline on t4
* back out changes
* back out changes
Co-authored-by: George Wu <jywu@microsoft.com>
* modify for test
* modify for test
* modify for test
* modify for test
* modify for test
* modify for test
* prepare for PR
* Rename cuda directory to gpu directory in tarball
* Fix gpu java package
* fix bug
* fix small bug
* Add onnxruntime_providers_shared.dll into gpu nuget package
* Modify for test
* Temporarily remove for test
* Modify for test
* Modify for test
* Test packaging Windows combined GPU
* Test packaging Windows combined GPU
* Test packaging Windows combined GPU
* Test packaging Windows combined GPU
* modify for test
* modify for test
* fix bug
* Modify for test
* Modify for test
* Modify for test
* Modify for test
* Modify for test
* Modify for test
* Modify for test
* Modify for test
* Prepare for PR
* Prepare for PR
* Code refactor
* Rename proper Artifact name
* Rename intermediate Artifact names
* Revert Artifact Names
* Rename Artifact Names
* Modify Artifact name
* Modify Artifact name
* Modify Artifact name
* Update Java package
* Update Java package
* fix bug to change artifact name
* Fix bug for the wrong file path
* Fix no fetching correct artifact and test
* temporarily modify for test
* undo the change for test
* Merge CPU/GPU nuget pipeline
* Include TensorRT EP libraries into existing GPU nuget package pipeline
* modify to use correct YAML
* Modify for test
* modify for test
* Add dependency
* Add dependency (cont.)
* modify for test
* Add create TensorRT nuget package
* modify for test
* modify for test
* fix merge bug
* code refactor
* code refactor
* modify for test
* modify for test
* modify for test
* modify for test
* modify for test
* modify for test
* cleanup
* modify for test
* fix bug
* modify for test
* refactor
* fix bug and test
* Modify for test
* Modify for test
* Modify for test
* Modify for test
* Prepare for PR
* Prepare for PR
* code refactor from review
* Remove naming 'Microsoft.ML.OnnxRuntime.TensorRT' to avoid confusion
* Add linux TensorRT libraries
* Remove redundant variable in YAML
* revert file
* undo revert file
* Modify regular expression so that it can capture the correct file
* Remove newline at end of file
* small fix
* Revert to CUDA11.1 on Windows
* Add unit tests for nuget package on Linux
Co-authored-by: Changming Sun <chasun@microsoft.com>
Merge the CPU/GPU nuget pipelines. The old GPU nuget pipeline will be used only for DML.
TODO: the resulting GPU package contains PDB files for some of the DLLs, but not all. This is due to the refactoring of the CUDA EP into pluggable DLLs; at that time we forgot to copy the PDB files. However, I can't add them in now, because the package is already 220MB. If the missing PDB files were added, it would be oversized, and nuget.org doesn't accept packages larger than 250MB.
* update onnx-tensorrt parser to master
* disable unsupported tests
* add cuda sm 75 for T4
* update tensorrt pipeline
* update trt pipelines
* update trt pipelines
* Update linux-gpu-tensorrt-ci-pipeline.yml
* update trt ci pipeline
* Update linux-gpu-tensorrt-ci-pipeline.yml
* Update Tensorrt Windows build pool and TensorRT/CUDA/CuDNN version
* update to cuda11.4 in trt ci pipeline
* update base image to cuda11.4
* update packaging pipeline to cuda11.4
* clean up
* remove cuda11.1 and cuda11.3 docker file
* disable unsupported tensorrt tests at runtime
* Update linux-multi-gpu-tensorrt-ci-pipeline.yml
1. Update the manylinux build scripts. This adds [PEP 600](https://www.python.org/dev/peps/pep-0600/) (`manylinux_x_y` tags) support. numpy has adopted this new feature, and we should do the same. The old build script files were copied from https://github.com/pypa/manylinux, but they have since been deleted and replaced in the upstream repo, and the manylinux repo doesn't have a manylinux2014 branch anymore. So I'm removing the obsolete code and syncing the files with the latest master.
2. Update the GPU CUDA version from 11.0 to 11.1 (after a discussion with PMs).
3. Delete tools/ci_build/github/linux/docker/Dockerfile.manylinux2014_cuda10_2. (Merged the content into tools/ci_build/github/linux/docker/Dockerfile.manylinux2014_cuda11.)
4. Modernize the cmake code that locates the python devel files. This was suggested in https://github.com/onnx/onnx/pull/1631.
5. Remove the `onnxruntime_MSVC_STATIC_RUNTIME` and `onnxruntime_GCC_STATIC_CPP_RUNTIME` build options; cmake now has built-in support for this. Starting from cmake 3.15, we can use the `CMAKE_MSVC_RUNTIME_LIBRARY` variable to choose which MSVC runtime library we want to use (see the snippet after this list).
6. Update the Ubuntu docker images used in our CI builds from Ubuntu 18.04 to Ubuntu 20.04.
7. Update the GCC version in the CUDA 11.1 pipelines from 8.x to 9.3.1.
8. Split the Linux GPU CI pipeline into two jobs: build the code on a CPU machine, then run the tests on separate GPU machines. In the past we didn't test our python packages, only the pre-packed files, so we didn't catch the rpath issue in the CI build.
9. Add a CentOS machine pool and test our Linux GPU build on real CentOS machines.
10. Rework the ARM64 Linux GPU python packaging pipeline. Previously it used cross-compiling, so we had to statically link the C runtime; but we now have the pluggable EP API, which doesn't support static linking, so I changed it to use qemu emulation instead. The build is now 10x slower than before, but it is more extensible.
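For item 5, a minimal sketch of the built-in replacement (the documented cmake usage, not ORT's exact code):
```cmake
# Must appear before project(); CMAKE_MSVC_RUNTIME_LIBRARY requires
# policy CMP0091, enabled by requiring cmake 3.15+.
cmake_minimum_required(VERSION 3.15)

# Select the static MSVC runtime (/MT for Release, /MTd for Debug),
# replacing what onnxruntime_MSVC_STATIC_RUNTIME used to do.
set(CMAKE_MSVC_RUNTIME_LIBRARY "MultiThreaded$<$<CONFIG:Debug>:Debug>")
```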
Add python 3.8/3.9 support for Windows GPU and Linux ARM64
Delete jemalloc from cgmanifest.json.
Add onnx node test to Nuphar pipeline.
Change $ANDROID_HOME/ndk-bundle to $ANDROID_NDK_HOME. The latter is more accurate.
Delete Java GPU packaging pipeline
Remove the test data download step in the Nuget macOS pipeline. Because these machines are outside our control and our network, it's hard to keep the step reliable and the data secure.
Fix a doc problem in c-api-artifacts-package-and-publish-steps-windows.yml: it shouldn't copy C_API.md, because the file has been moved to a different branch.
Delete the CI build docker files for Ubuntu CUDA 9.x and Ubuntu x86 32-bit.
Also, due to some internal restrictions, I need to rename some of the agent pools.
1. Merge Nuget CPU pipeline, Java CPU pipeline, C-API pipeline into a single one.
2. Enable compile warnings for CUDA files (*.cu) on Windows.
3. Enable static code analysis for the Windows builds in these jobs. For example, this is the first time we've scanned the JNI code.
4. Fix some warnings in the training code.
5. Enable code signing for Java; previously we had forgotten it.
6. Update TPN.txt to remove Jemalloc.
* cancel nightly build on pyop
* setup win cuda11 pipeline
* add debug build
* test base gpu settings
* setup pipelines to test cuda 10.2 and 11
* rename linux docker images
* rename docker image tag and add clean up job
* fix typo in cuda 11 config
* set cuda11 env
* update linux cuda 11 pipeline
* reset docker image name
* disable uninitialized warning from linux build
* change the way to silence uninitialized warning
* add flags to linux gpu pipeline
* switch docker image for linux cuda 10.2
* switch linux cuda 10.2 image
* test cuda11 with devtool8
* try latest built images
Co-authored-by: Randy Shuai <rashuai@microsoft.com>
1. Fix the nuget cpu pipeline and put the code coverage pipeline back.
2. Reduce onnx_test_runner's default logging level from WARNING to ERROR, because there are too many log messages now.
3. Enlarge the protobuf read buffer size for onnx_test_runner. This was missed in PR #4020.
* Enable running PEP8 checks via flake8 as part of the build if flake8 is installed.
Update scripts in \tools and \onnxruntime\python, excluding \onnxruntime\python\tools, which needs a lot more work to be PEP8 compliant; also excluding orttraining\tools for the same reason.
Install flake8 as part of the static_analysis build task in the Win-CPU CI so the checks run in one CI build.
Update coding standards doc.
Discussed with Faith: because the data size is very small and changes are gradual, there is no need to delete the old data. We want to keep all the history.
Previously, we put the "bin" folders of all the CUDA versions in the system PATH, with 10.2 in front. It was a mess.
So I've removed all of them from the system PATH env and instead add the right one back through the build scripts.
(The problem only affects the C# tests, not the C/C++ tests forked from build.py.)
Use CUDA 10.1 for the Linux build
(the Windows change is already in).
Please note, cublas 10.2.1.243 is for CUDA SDK 10.1.243, not CUDA 10.2.x; CUDA 10.2.89 needs cublas 10.2.2.89. They match on the last part of the digits.
libcublas10-10.1.0.105 won't work!
The CUDA docker image by viswamy is already using 10.1, so no change is needed.
* update onnx-tensorrt submodule to trt7 branch
* add fp16 option for TRT7
* switch to master branch of onnx tensorrt
* update submodule
* update to TensorRT7.0.0.11
* update to onnx-tensorrt for TensorRT7.0
* switch to private branch due to issues in master branch
* remove trt_onnxify
* disable warnings c4804 for TensorRT parser
* disable warnings c4702 for TensorRT parser
* add back sanity check of shape tensor input in the parser
* disable some warnings for TensorRT7
* change fp16 threshold for TensorRT
* update onnx-tensorrt parser
* fix cycle issue in faster-rcnn and add cycle detection in GetCapability
* Update TensorRT container to v20.01
* Update TensorRT image name
* Update linux-multi-gpu-tensorrt-ci-pipeline.yml
* Update linux-gpu-tensorrt-ci-pipeline.yml
* disable rnn tests for TensorRT
* disable rnn tests for TensorRT
* disabled some unit test for TensorRT
* update onnx-tensorrt submodule
* update build scripts for TensorRT
* formatting the code
* Update TensorRT-ExecutionProvider.md
* Update BUILD.md
* Update tensorrt_execution_provider.h
* Update tensorrt_execution_provider.cc
* Update win-gpu-tensorrt-ci-pipeline.yml
* use GetEnvironmentVar function to get env variables and switch to Win-GPU-2019 agent pool for win CI build
* change tensorrt path
* change tensorrt path
* fix win ci build issue
* update code based on the reviews
* fix build issue
* roll back to cuda10.0
* add RemoveCycleTest for TensorRT
* fix windows ci build issues
* fix ci build issues
* fix file permission
* fix out of range issue for max_workspace_size_env
1. Refactor the pipeline and remove some duplicated code
2. Move the Windows_py_GPU_Wheels job to Win-GPU-CUDA10. We'll deprecate the "Win-GPU" pool.
3. Delete cpu-nocontribops-esrp-pipeline.yml and cpu-nocontribops-pipeline.yml
4. In the Linux nuget jobs, run "make install" before creating the package, so that the extra RPATH info is removed
* remove memory copy between CUDA and TRT
* add info to RegisterExecutionProvider input
* use new IDeviceAllocator for trt allocator
* remove SetDefaultInputsMemoryType from TRT EP
* remove onnx-tensorrt 5.0
* add submodule onnx-tensorrt branch 5.1
* remove redundancy
* Update transformer_memcpy.cc
* Update tensorrt_execution_provider.cc
* switch to TensorRT 5.1.5.0
* update python binding
* disable failed test case on TensorRT
* Update activation_op_test.cc
* upgrade to TensorRT container 19.06
* update according to feedback
* add comments
* remove tensorrt allocator and use cuda(gpu) allocator
* update onnx-tensorrt submodule
* change ci build cuda directory name
A Python script and the necessary changes in the azure-pipelines yaml file to post the binary size data from the NuGet package build. Currently the data is only posted from the CPU pipeline; GPU and other pipelines may be added as necessary.
* Simplify linux gpu pipeline
* Refactor win-gpu-ci-pipeline.yml
* Set cuda environment variables for testing and version
* Remove variables from starter script
* minor fix
* Add GPU Nuget pipeline
* Set DisableContribOps environment variable for Linux package tests
* Add ESRP tasks
* Add ESRP signing templates
* Test out hardcoded value of ESRP
* Test out hardcoded value of ESRP
* Test out hardcoded value of ESRP
* Test out hardcoded value of ESRP
* test variable expansion
* test variable expansion
* test variable expansion
* test variable expansion
* test variable expansion
* test variable expansion
* test out variable expansion
* test variable expansion
* test variable expansion
* test variable expansion
* test variable expansion
* test variable expansion
* test variable expansion
* test variable expansion
* test variable expansion
* update cpu pipeline to conditionally esrp sign
* Set C# GPU tests to run only if env var is set
* Refactor for easy parameter passing
* refactored esrp templates
* remove variables from template
* Add packaging variables back to pipelines
* update C# for cuda 10
* Merge vars and parameters for gpu pipeline
* remove vars from mklml pipeline
* display envvars on terminal
* Clean up C# cuda tests, and upgrade to Cuda10
* Introduce CUDNN_PATH pipeline variable
* YAML variables are always uppercased (not true with classic)
* Update C# GPU test to be more meaningful
* remove macos from gpu tests
* remove debugging info for DisableContribOps option
* Remove DisableContrib ops parameters -- use variables only
* Fix typo from = to -
* remove debug steps
* fix typo
* remove unused variable TESTONGPU from some templates
* clean up CUDA env setup scripts
* Remove CUDNN_PATH from setup_env_cuda.bat
- Added a Python script to post the code coverage data to the MySQL table used for the dashboard
- Added a build job that runs a Windows CPU debug build on every merge to master and runs the script
- Removed the code coverage step from the CI build
* added the runcoverage powershell script
* updated the run coverage script; added installation to the windows CI for trying it out
* exclude other parts of win ci
* fix in the download script
* fix in the download script
* fix in the download script
* fix in the download script
* fix in the download script
* fix in the download script
* fix in the download script
* fix in the download script
* fix in the download script
* added the runtestcoverage script to the pipeline
* some typo fix
* formatting
* re-commenting previously commented block
* cleaned up the powershell script
* fix path in pipeline
* fix path in pipeline
* fixed model path
* some fixes
* excluded long running tests
* add the publish job
* uncomment other tasks
* fixed excluded tests
* some format correction
* stopped running the test debug
* try placing the test-all at the beginning
* try running the failing test only
* edit run_coverage
* some fix
* skip onnx_model_test
* Added memory size log in powershell script
* try running the onnxruntime_test_all.exe separately from codecov
* enable error reporting, and double memory size in powershell
* corrected the set-item
* remove memory resize, since we are already at max 2 GB
* fixed the tvm.dll issue
* added back the onnx tests in codecov. added back the regular test run
* cleanup
* remove * from the module path
* add junction target resolution for modules dir
* remove junction-resolution
* reduced tests
* added target extraction for the junction paths in build machine
* added the appropriate change in win ci pipeline to call the updated ps script
* fix typo
* added back all the tests that were disabled
* try fixing the source root
* cleanup and enable all tests
* increase timeout for windows CPU CI due to codecoverage
* templatized the code coverage steps; continue on error with any code coverage step
* change quote marks
* Add a build step to remove the CUDA MSBuild customization file after the build; otherwise, the higher CUDA version could impact the lower-version build
* update vs path
* update the path