mirror of
https://github.com/saymrwulf/onnxruntime.git
synced 2026-05-17 21:10:43 +00:00
The test limits the GPU's memory budget to 20MB. That may have been enough in the past, but it is no longer sufficient now that we have upgraded CUDA to a newer version and added more kernels/graph transformers to our code, so we need to raise it. Our test logs show the model sometimes needs 128MB of memory, so I set the limit to 256MB.
Files changed:
- InferenceTest.netcore.cs
- Microsoft.ML.OnnxRuntime.Tests.NetCoreApp.csproj
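For context, a GPU memory cap of this kind is expressed through the CUDA execution provider's `gpu_mem_limit` option (in bytes). Below is a minimal configuration sketch from the C# API showing a 256MB cap; the exact call sites and option plumbing in InferenceTest.netcore.cs may differ, and the model path is a placeholder.

```csharp
using System.Collections.Generic;
using Microsoft.ML.OnnxRuntime;

// Sketch only: cap the CUDA memory arena at 256MB, matching the new limit.
// gpu_mem_limit is specified in bytes.
var cudaOptions = new OrtCUDAProviderOptions();
cudaOptions.UpdateOptions(new Dictionary<string, string>
{
    ["gpu_mem_limit"] = (256UL * 1024 * 1024).ToString()
});

// Build session options that attach the CUDA execution provider
// with the limit configured above.
using var sessionOptions = SessionOptions.MakeSessionOptionsWithCudaProvider(cudaOptions);
// using var session = new InferenceSession("model.onnx", sessionOptions);
```

With a 20MB cap, arena allocations beyond the limit fail at session or inference time; raising the cap to 256MB leaves headroom over the observed 128MB peak.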