Mirror of https://github.com/saymrwulf/onnxruntime.git, synced 2026-05-14 20:48:00 +00:00
The test limits the GPU's running memory to 20 MB. That may have been enough in the past, but it no longer is now that we have upgraded CUDA to a newer version and added more kernels and graph transformers to our code, so we need to increase it. Our test logs show the model sometimes needs 128 MB of memory, so I have raised the limit to 256 MB.
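The change amounts to raising a per-session GPU arena cap. A minimal sketch of what that looks like, assuming ONNX Runtime's Python API and its CUDA execution provider option `gpu_mem_limit` (the actual test lives in the native test suite, so names there may differ):

```python
# Hypothetical sketch: cap the CUDA EP's GPU memory arena for a test session.
# "gpu_mem_limit" is the CUDA execution provider option (in bytes) that bounds
# the arena; the concrete test code in the repo is not shown here.
OLD_LIMIT = 20 * 1024 * 1024    # 20 MB: the old cap, too small for newer CUDA builds
NEW_LIMIT = 256 * 1024 * 1024   # 256 MB: headroom over the ~128 MB seen in test logs

cuda_provider = (
    "CUDAExecutionProvider",
    {"gpu_mem_limit": NEW_LIMIT},
)

# A session created with this provider tuple would be arena-capped at 256 MB:
# import onnxruntime as ort
# sess = ort.InferenceSession("model.onnx", providers=[cuda_provider])
print(cuda_provider[1]["gpu_mem_limit"])
```

Keeping the cap as an explicit constant makes it easy to bump again if future CUDA upgrades or new kernels push the working set higher.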