mirror of
https://github.com/saymrwulf/onnxruntime.git
synced 2026-05-16 21:00:14 +00:00
* remove memory copy between CUDA and TRT
* add info to RegisterExecutionProvider input
* use new IDeviceAllocator for trt allocator
* remove SetDefaultInputsMemoryType from TRT EP
* remove onnx-tensorrt 5.0
* add submodule onnx-tensorrt branch 5.1
* remove redundancy
* Update transformer_memcpy.cc
* Update tensorrt_execution_provider.cc
* switch to TensorRT 5.1.5.0
* update python binding
* disable failed test case on TensorRT
* Update activation_op_test.cc
* upgrade to TensorRT container 19.06
* update according to feedback
* add comments
* remove tensorrt allocator and use cuda(gpu) allocator
* update onnx-tensorrt submodule
* change ci build cuda directory name
1 line
304 B
Batchfile
set PATH=%BUILD_BINARIESDIRECTORY%\packages\python;%BUILD_BINARIESDIRECTORY%\packages\python\DLLs;%BUILD_BINARIESDIRECTORY%\packages\python\Library\bin;%BUILD_BINARIESDIRECTORY%\packages\python\script;C:\local\cudnn-10.0-windows10-x64-v7.3.1.20\cuda\bin;C:\local\cuda_10.0.130_win10_trt515dll\bin;%PATH%
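The batch line above prepends the CI job's embedded Python directories plus the local cuDNN and CUDA/TensorRT `bin` directories to `PATH`, so the build picks up those DLLs first. As a rough sketch of the same construction (not part of the repo; the function name and base-directory argument are made up for illustration), the semicolon-separated prefix could be assembled like this:

```python
# Hypothetical helper mirroring the batch script's PATH prefix.
# build_dir stands in for %BUILD_BINARIESDIRECTORY%.
def build_path_prefix(build_dir: str) -> str:
    python_root = build_dir + r"\packages\python"
    entries = [
        python_root,                       # embedded Python interpreter
        python_root + r"\DLLs",            # Python extension DLLs
        python_root + r"\Library\bin",     # bundled library binaries
        python_root + r"\script",          # helper scripts
        # Fixed local installs referenced by the batch line:
        r"C:\local\cudnn-10.0-windows10-x64-v7.3.1.20\cuda\bin",
        r"C:\local\cuda_10.0.130_win10_trt515dll\bin",
    ]
    # Windows PATH entries are joined with semicolons; the batch script
    # then appends the existing %PATH% after this prefix.
    return ";".join(entries)

print(build_path_prefix(r"C:\b"))
```

Ordering matters here: because these entries come before the inherited `%PATH%`, the CI-local CUDA 10.0 / cuDNN 7.3.1 / TensorRT 5.1.5 DLLs shadow any system-wide installs.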