Add python 3.8/3.9 support for Windows GPU and Linux ARM64
Delete jemalloc from cgmanifest.json.
Add onnx node test to Nuphar pipeline.
Change $ANDROID_HOME/ndk-bundle to $ANDROID_NDK_HOME. The latter is more accurate.
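The path change above can be sketched as a small resolution helper; the function name and fallback behavior here are illustrative, not the actual build-script code:

```python
import os

def resolve_ndk_home(env=os.environ):
    """Pick the Android NDK location (sketch of the change described above).

    Prefer ANDROID_NDK_HOME, which points directly at the installed NDK;
    the old $ANDROID_HOME/ndk-bundle layout only works when the NDK was
    installed as the SDK's bundled package, so it is kept only as a
    legacy fallback here.
    """
    ndk = env.get("ANDROID_NDK_HOME")
    if ndk:
        return ndk
    sdk = env.get("ANDROID_HOME")
    if sdk:
        return os.path.join(sdk, "ndk-bundle")  # legacy fallback
    raise RuntimeError("No Android NDK location configured")
```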
Delete Java GPU packaging pipeline
Remove the test data download step in the NuGet macOS pipeline. Because these machines are outside our control and our network, it's hard to keep the step reliable and the data secure.
Fix a doc problem in c-api-artifacts-package-and-publish-steps-windows.yml. It shouldn't copy C_API.md, because the file has been moved to a different branch.
Delete the CI build Dockerfiles for Ubuntu CUDA 9.x and Ubuntu x86 32-bit.
Also, due to some internal restrictions, I need to rename some of the agent pools.
1. Merge Nuget CPU pipeline, Java CPU pipeline, C-API pipeline into a single one.
2. Enable compile warnings for CUDA files (*.cu) on Windows.
3. Enable static code analysis for the Windows builds in these jobs. For example, this is the first time we've scanned the JNI code.
4. Fix some warnings in the training code.
5. Enable code signing for Java. It was previously missed.
6. Update TPN.txt to remove Jemalloc.
* cancel nightly build on pyop
* setup win cuda11 pipeline
* add debug build
* test base gpu settings
* setup pipelines to test cuda 10.2 and 11
* rename linux docker images
* rename docker image tag and add clean up job
* fix typo in cuda 11 config
* set cuda11 env
* update linux cuda 11 pipeline
* reset docker image name
* disable uninitialized warning from linux build
* change the way to silence uninitialized warning
* add flags to linux gpu pipeline
* switch docker image for linux cuda 10.2
* switch linux cuda 10.2 image
* test cuda11 with devtool8
* try latest built images
Co-authored-by: Randy Shuai <rashuai@microsoft.com>
1. Fix the nuget cpu pipeline and put code coverage pipeline back.
2. Reduce onnx_test_runner's default logging level from WARNING to ERROR, because there are too many log messages now.
3. Enlarge the protobuf read buffer size for onnx_test_runner. It was missed in PR #4020.
* Enable running PEP8 checks via flake8 as part of the build if flake8 is installed.
Update scripts in \tools and \onnxruntime\python, excluding \onnxruntime\python\tools, which needs a lot more work to be PEP8 compliant. Also excluding orttraining\tools for the same reason.
Install flake8 as part of the static_analysis build task in the Win-CPU CI so the checks are run in one CI build.
Update coding standards doc.
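The "run flake8 only if it is installed" behavior could look roughly like this sketch; the function name and exclude handling are hypothetical, not the actual build task:

```python
import shutil
import subprocess

def run_flake8(paths, extra_excludes=()):
    """Run flake8 over the given paths if it is installed; otherwise skip.

    Mirrors the opt-in PEP8 check described above: when flake8 is not on
    PATH, the checks are silently skipped and the build continues.
    """
    exe = shutil.which("flake8")
    if exe is None:
        return None  # flake8 not installed: skip the checks
    cmd = [exe, *paths]
    for pattern in extra_excludes:
        cmd += ["--extend-exclude", pattern]
    return subprocess.run(cmd).returncode
```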
Discussed with Faith: because the data size is very small and changes are gradual, there is no need to delete the old data. We want to keep all the history.
Previously, we put the "bin" folders of all the CUDA versions in the system PATH, with 10.2 in front. It was a mess.
So I've removed all of them from the system PATH environment variable, and instead add the needed one back through the build scripts.
(The problem only affects the C# tests, not the C/C++ tests that are forked from build.py.)
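A minimal sketch of what the build-script fix does, assuming a helper that builds the test-process environment (the helper name and the CUDA path are illustrative):

```python
import os

def cuda_env_for_build(cuda_home, env=None):
    """Build a process environment containing exactly one CUDA 'bin' dir.

    Previously every installed CUDA version's bin directory sat in the
    system PATH (10.2 first), so tests could load the wrong DLLs. This
    sketch prepends only the requested version's bin directory, e.g.
    cuda_home = "/usr/local/cuda-10.1" or the Windows equivalent.
    """
    env = dict(os.environ if env is None else env)
    cuda_bin = os.path.join(cuda_home, "bin")
    env["PATH"] = cuda_bin + os.pathsep + env.get("PATH", "")
    return env
```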
Use CUDA 10.1 for Linux build
(Windows change is already in)
Please note: cuBLAS 10.2.1.243 is for CUDA SDK 10.1.243, not CUDA 10.2.x. CUDA 10.2.89 needs cuBLAS 10.2.2.89. The versions match on the last group of digits.
libcublas10-10.1.0.105 won't work!
The cuda docker image by viswamy is already using 10.1, no need to change.
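The cuBLAS/CUDA pairing rule above can be expressed as a tiny check; this helper is illustrative, not part of the build scripts:

```python
def cublas_matches_cuda(cublas_ver: str, cuda_ver: str) -> bool:
    """Check the version pairing rule described above.

    cuBLAS "10.2.1.243" pairs with CUDA "10.1.243", and "10.2.2.89" with
    CUDA "10.2.89": the final build component must match, not the
    leading "10.x" prefix.
    """
    return cublas_ver.split(".")[-1] == cuda_ver.split(".")[-1]
```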
* update onnx-tensorrt submodule to trt7 branch
* add fp16 option for TRT7
* switch to master branch of onnx tensorrt
* update submodule
* update to TensorRT7.0.0.11
* update to onnx-tensorrt for TensorRT7.0
* switch to private branch due to issues in master branch
* remove trt_onnxify
* disable warnings c4804 for TensorRT parser
* disable warnings c4702 for TensorRT parser
* add back sanity check of shape tensor input in the parser
* disable some warnings for TensorRT7
* change fp16 threshold for TensorRT
* update onnx-tensorrt parser
* fix cycle issue in faster-rcnn and add cycle detection in GetCapability
* Update TensorRT container to v20.01
* Update TensorRT image name
* Update linux-multi-gpu-tensorrt-ci-pipeline.yml
* Update linux-gpu-tensorrt-ci-pipeline.yml
* disable rnn tests for TensorRT
* disable rnn tests for TensorRT
* disabled some unit test for TensorRT
* update onnx-tensorrt submodule
* update build scripts for TensorRT
* formatting the code
* Update TensorRT-ExecutionProvider.md
* Update BUILD.md
* Update tensorrt_execution_provider.h
* Update tensorrt_execution_provider.cc
* Update win-gpu-tensorrt-ci-pipeline.yml
* use the GetEnvironmentVar function to get environment variables and switch to the Win-GPU-2019 agent pool for the Windows CI build
* change tensorrt path
* change tensorrt path
* fix win ci build issue
* update code based on the reviews
* fix build issue
* roll back to cuda10.0
* add RemoveCycleTest for TensorRT
* fix windows ci build issues
* fix ci build issues
* fix file permission
* fix out of range issue for max_workspace_size_env
1. Refactor the pipeline and remove some duplicated code.
2. Move the Windows_py_GPU_Wheels job to Win-GPU-CUDA10. We'll deprecate the "Win-GPU" pool.
3. Delete cpu-nocontribops-esrp-pipeline.yml and cpu-nocontribops-pipeline.yml.
4. In the Linux NuGet jobs, run "make install" before creating the package, so that the extra RPATH info is removed.
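One way to verify that "make install" stripped the build-tree RPATH is to scan `readelf -d` output for the shared libraries before packaging; this checker is a sketch, not the pipeline's actual validation step:

```python
import re

def has_rpath(readelf_dynamic_output: str) -> bool:
    """Return True if `readelf -d <lib>` output contains an RPATH/RUNPATH.

    CMake embeds the build-tree RPATH into the .so during the build and
    removes it at install time, which is why running the install step
    before creating the package drops the extra path info.
    """
    return re.search(r"\((?:RPATH|RUNPATH)\)", readelf_dynamic_output) is not None
```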
* remove memory copy between CUDA and TRT
* add info to RegisterExecutionProvider input
* use new IDeviceAllocator for trt allocator
* remove SetDefaultInputsMemoryType from TRT EP
* remove onnx-tensorrt 5.0
* add submodule onnx-tensorrt branch 5.1
* remove redundancy
* Update transformer_memcpy.cc
* Update tensorrt_execution_provider.cc
* switch to TensorRT 5.1.5.0
* update python binding
* disable failed test case on TensorRT
* Update activation_op_test.cc
* upgrade to TensorRT container 19.06
* update according to feedback
* add comments
* remove tensorrt allocator and use cuda(gpu) allocator
* update onnx-tensorrt submodule
* change ci build cuda directory name
Add a Python script and the necessary changes in the azure-pipelines YAML files to post the binary size data from the NuGet package build. Currently it is only posted from the CPU pipeline; GPU and other pipelines may be added as necessary.
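The kind of data such a script gathers might look like this sketch; the real script's schema and upload target are not shown here:

```python
import os

def collect_binary_sizes(package_dir, extensions=(".dll", ".so", ".dylib")):
    """Walk an extracted NuGet package and record per-binary sizes.

    Returns a sorted list of (relative_path, size_in_bytes) rows, one per
    native binary, suitable for posting to a size-tracking dashboard.
    The extension list is an assumption about what counts as a binary.
    """
    rows = []
    for root, _dirs, files in os.walk(package_dir):
        for name in files:
            if name.endswith(extensions):
                path = os.path.join(root, name)
                rows.append((os.path.relpath(path, package_dir),
                             os.path.getsize(path)))
    return sorted(rows)
```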
* Simplify linux gpu pipeline
* Refactor win-gpu-ci-pipeline.yml
* Set cuda environment variables for testing and version
* Remove variables from starter script
* minor fix
* Add GPU Nuget pipeline
* Set DisableContribOps environment variable for Linux package tests
* Add ESRP tasks
* Add ESRP signing templates
* Test out hardcoded value of ESRP
* test variable expansion
* update cpu pipeline to conditionally esrp sign
* Set C# GPU tests to run only if env var is set
* Refactor for easy parameter passing
* refactored esrp templates
* remove variables from template
* Add packaging variables back to pipelines
* update C# for cuda 10
* Merge vars and parameters for gpu pipeline
* remove vars from mklml pipeline
* display envvars on terminal
* Clean up C# cuda tests, and upgrade to Cuda10
* Introduce CUDNN_PATH pipeline variable
* YAML pipeline variables are always uppercased as environment variables (not true with classic pipelines)
* Update C# GPU test to be more meaningful
* remove macos from gpu tests
* remove debugging info for DisableContribOps option
* Remove DisableContrib ops parameters -- use variables only
* Fix typo from = to -
* remove debug steps
* fix typo
* remove unused variable TESTONGPU from some templates
* clean up CUDA env setup scripts
* Remove CUDNN_PATH from setup_env_cuda.bat
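On the variable-casing point above: Azure Pipelines exposes YAML variables to scripts as upper-cased environment variables, with '.' replaced by '_'. A sketch of the mapping (the helper itself is illustrative):

```python
def pipeline_var_to_env(name: str) -> str:
    """Map an Azure Pipelines YAML variable name to its env-var form.

    In YAML pipelines, a variable like 'cudnn_path' reaches scripts as
    the environment variable CUDNN_PATH (classic pipelines kept the
    original casing), which is why templates must read the upper-cased
    name.
    """
    return name.upper().replace(".", "_")
```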
- Added Python script to post the code coverage data to the MySQL table used for dashboard
- Added a build job to run a Windows CPU debug build on every merge to master, and run the script
- Removed the code coverage step from the CI build
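Posting to the MySQL dashboard table could be sketched as building a parameterized INSERT; the table and column names below are hypothetical, since the real schema isn't shown in this change:

```python
def coverage_insert(table, build_id, lines_covered, lines_valid):
    """Build a parameterized INSERT for a coverage dashboard table (sketch).

    Uses the DB-API 'format' paramstyle (%s placeholders) so the query
    works with typical MySQL drivers; values are passed separately
    rather than interpolated into the SQL string.
    """
    coverage = 100.0 * lines_covered / lines_valid if lines_valid else 0.0
    sql = (f"INSERT INTO {table} (build_id, lines_covered, lines_valid, coverage) "
           "VALUES (%s, %s, %s, %s)")
    return sql, (build_id, lines_covered, lines_valid, round(coverage, 2))
```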
* added the runcoverage powershell script
* updated the run coverage script. added installation to the windows CI for trying
* exclude other parts of win ci
* fix in the download script
* added the runtestcoverage script to the pipeline
* some typo fix
* formatting
* re-commenting previously commented block
* cleaned up the powershell script
* fix path in pipeline
* fix path in pipeline
* fixed model path
* some fixes
* excluded long running tests
* add the publish job
* uncomment other tasks
* fixed excluded tests
* some format correction
* stopped running the test debug
* try placing the test_all at the beginning
* try running the failing test only
* edit run_coverage
* some fix
* skip onnx_model_test
* Added memory size log in powershell script
* try running the onnxruntime_test_all.exe separately from codecov
* enable error reporting, and double memory size in powershell
* corrected the Set-Item
* remove memory resize, since we are already at max 2 GB
* fixed the tvm.dll issue
* added back the onnx tests in codecov. added back the regular test run
* cleanup
* remove * from the module path
* add junction target resolution for modules dir
* remove junction-resolution
* reduced tests
* added target extraction for the junction paths in build machine
* added the appropriate change in win ci pipeline to call the updated ps script
* fix typo
* added back all the tests that were disabled
* try fixing the source root
* cleanup and enable all tests
* increase timeout for windows CPU CI due to codecoverage
* templatized the code coverage steps; continue on error with any code coverage step
* change quote marks
* Add a build step to remove the CUDA MSBuild customization file after the build; otherwise the higher CUDA version could impact lower-version builds
* update vs path
* update the path
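The cleanup step described above could be sketched like this; the BuildCustomizations directory and file patterns are assumptions about the agent's Visual Studio install, not the actual pipeline step:

```python
import glob
import os

def remove_cuda_msbuild_customizations(build_customizations_dir):
    """Delete CUDA MSBuild integration files after the build (sketch).

    build_customizations_dir is the VS 'BuildCustomizations' folder
    (exact path varies by VS version). Removing the CUDA *.props and
    *.targets files keeps a newer CUDA toolkit's customizations from
    leaking into later, older-CUDA builds on the same agent.
    """
    removed = []
    for pattern in ("CUDA*.props", "CUDA*.targets", "CUDA*.xml"):
        for path in glob.glob(os.path.join(build_customizations_dir, pattern)):
            os.remove(path)
            removed.append(os.path.basename(path))
    return sorted(removed)
```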