* update onnx-tensorrt parser to master
* disable unsupported tests
* add cuda sm 75 for T4
* update tensorrt pipeline
* update trt pipelines
* update trt pipelines
* Update linux-gpu-tensorrt-ci-pipeline.yml
* update trt ci pipeline
* Update linux-gpu-tensorrt-ci-pipeline.yml
* Update Tensorrt Windows build pool and TensorRT/CUDA/CuDNN version
* update to cuda11.4 in trt ci pipeline
* update base image to cuda11.4
* update packaging pipeline to cuda11.4
* clean up
* remove cuda11.1 and cuda11.3 docker file
* disable unsupported tensorrt tests at runtime
* Update linux-multi-gpu-tensorrt-ci-pipeline.yml
* Install and use conda on ortmodule CI pipelines
* Update build script to install onnxruntime wheel before running unit tests
* Remove python 3.5 from install_python_deps
* Pinning deepspeed version to 0.3.15
* update onnx-tensorrt submodule to trt7 branch
* add fp16 option for TRT7
* switch to master branch of onnx tensorrt
* update submodule
* update to TensorRT7.0.0.11
* update to onnx-tensorrt for TensorRT7.0
* switch to private branch due to issues in master branch
* remove trt_onnxify
* disable warnings c4804 for TensorRT parser
* disable warnings c4702 for TensorRT parser
* add back sanity check of shape tensor input in the parser
* disable some warnings for TensorRT7
* change fp16 threshold for TensorRT
* update onnx-tensorrt parser
* fix cycle issue in faster-rcnn and add cycle detection in GetCapability
* Update TensorRT container to v20.01
* Update TensorRT image name
* Update linux-multi-gpu-tensorrt-ci-pipeline.yml
* Update linux-gpu-tensorrt-ci-pipeline.yml
* disable rnn tests for TensorRT
* disable rnn tests for TensorRT
* disabled some unit tests for TensorRT
* update onnx-tensorrt submodule
* update build scripts for TensorRT
* formatting the code
* Update TensorRT-ExecutionProvider.md
* Update BUILD.md
* Update tensorrt_execution_provider.h
* Update tensorrt_execution_provider.cc
* Update win-gpu-tensorrt-ci-pipeline.yml
* use GetEnvironmentVar function to get environment variables and switch to Win-GPU-2019 agent pool for win CI build
* change tensorrt path
* change tensorrt path
* fix win ci build issue
* update code based on the reviews
* fix build issue
* roll back to cuda10.0
* add RemoveCycleTest for TensorRT
* fix windows ci build issues
* fix ci build issues
* fix file permission
* fix out of range issue for max_workspace_size_env
* remove memory copy between CUDA and TRT
* add info to RegisterExecutionProvider input
* use new IDeviceAllocator for trt allocator
* remove SetDefaultInputsMemoryType from TRT EP
* remove onnx-tensorrt 5.0
* add submodule onnx-tensorrt branch 5.1
* remove redundancy
* Update transformer_memcpy.cc
* Update tensorrt_execution_provider.cc
* switch to TensorRT 5.1.5.0
* update python binding
* disable failed test case on TensorRT
* Update activation_op_test.cc
* upgrade to TensorRT container 19.06
* update according to feedback
* add comments
* remove tensorrt allocator and use cuda(gpu) allocator
* update onnx-tensorrt submodule
* change ci build cuda directory name
* updated cmake files for trt
* added trt execution provider
* added trt basic test
* removed trt_path action attribute
* Add files via upload
* Update build.py
* Update trt_allocator.h
* fixed issues found by reviewers
* changed cast operator
* added comment for custom kernel implementation
* changed auto to auto&
* changed to function compile APIs for TRT execution provider
* changed to function compile APIs for TRT execution provider
* added new DType DInt64
* adapted to the changes of onnxruntime_c_api
* removed trt kernel (use function compile instead)
* updated onnx-tensorrt submodule
* set default memory type for TRT fused kernel
* resolve merge conflict
* fixed the issue that USE_CUDA conflicts with USE_TRT
* construct graph by adding nodes in topological order
* made changes for Windows
* change buffers type
* bypass HasImplementationOf check for TRT EP because TRT kernel is not registered
* added domain to version info in rebuilt model proto
* added trt to test option list
* added DomainToVersionMap() to GraphViewer
* removed Copy()
* fixed broken code
* format the code to clang format
* used local references to frequently used values
* fixed a couple of issues according to reviewers feedback
* fixed a couple of issues according to reviewers feedback
* added python binding for TRT and enable use_cuda when use_trt is on
* fixed a redefinition issue
* changed shared_ptr to unique_ptr on trt engines, and made a few changes required by reviewers
* enabled trtexecution provider for unit tests
* renamed trt to tensorrt
* added tensorrt to python binding
* update submodule onnx and onnx-tensorrt
* made a couple of minor changes based on reviewer's feedback
* added CUDA_CHECK
* removed test code
* fixed broken code after merge
* updated onnx-tensorrt submodule
* added post processing to align trt inputs/outputs with graph inputs/outputs
* updated onnx submodule
* added CUDA fallback for TensorRT and fixed TensorRT cmake issue
* added ci pipeline for tensorrt and removed some redundant code from trt ep
* fixed syntax issue
* updated onnx-tensorrt submodule
* fix trt build problem by: (#602)
1. Add additional /wd for debug build
2. Add io.h for additional targets
3. Bring back mb version of getopt
* Update install_ubuntu.sh
* Update linux-gpu-tensorrt-ci-pipeline.yml
* Update linux-gpu-tensorrt-ci-pipeline.yml
* Update run_build.sh
* Update run_build.sh
* Update run_build.sh
* Update run_build.sh
* fixed the issue that GetKernelRegistry returns nullptr
* merged master to this branch
* moved some data types to private
* fixed tensorrt CI pipeline issue
* customized test data for TensorRT pipeline
* added onnx-tensorrt in json file and fixed an issue in ci script
* added comments