Mirror of https://github.com/saymrwulf/onnxruntime.git, synced 2026-05-16 21:00:14 +00:00
* remove memory copy between CUDA and TRT
* add info to RegisterExecutionProvider input
* use new IDeviceAllocator for trt allocator
* remove SetDefaultInputsMemoryType from TRT EP
* remove onnx-tensorrt 5.0
* add submodule onnx-tensorrt branch 5.1
* remove redundancy
* Update transformer_memcpy.cc
* Update tensorrt_execution_provider.cc
* switch to TensorRT 5.1.5.0
* update python binding
* disable failed test case on TensorRT
* Update activation_op_test.cc
* upgrade to TensorRT container 19.06
* update according to feedback
* add comments
* remove tensorrt allocator and use cuda(gpu) allocator
* update onnx-tensorrt submodule
* change ci build cuda directory name
Files in this directory:

* clean_up_cuda_prop_files.ps1
* download_cmake.py
* post_binary_sizes_to_dashboard.py
* post_code_coverage_to_dashboard.py
* run_OpenCppCoverage.ps1
* set_cuda_path.ps1
* setup_env.bat
* setup_env_cuda.bat