mirror of
https://github.com/saymrwulf/onnxruntime.git
synced 2026-05-16 21:00:14 +00:00
### Description

Since WebGPU supports only float32 and int32, Gather, Reshape, Shape, Squeeze and Unsqueeze ops with other data types fall back to the CPU. This creates additional MemCpy ops and slows down overall execution, because every other op consuming those tensor types is then also placed on the CPU. Before this patch, the SD UNet had these numbers:

- Node(s) placed on [CPUExecutionProvider]. Number of nodes: 1141
- Node(s) placed on [JsExecutionProvider]. Number of nodes: 4025
- MemCpy tokens: 2001

After the patch:

- Node(s) placed on [CPUExecutionProvider]. Number of nodes: 1735
- Node(s) placed on [JsExecutionProvider]. Number of nodes: 2243
- MemCpy tokens: 813

It also gives more than a 5x performance improvement: one UNet step went from 12 sec to 2.2 sec on an RTX 3090 Ti, so we are getting close to native performance.

UPD: with the latest changes from the main branch and multi-threading, it went down to 1.6 sec. I will try re-exporting my model to ONNX with maximum optimizations, such as using MultiHeadAttention to decrease the node count. Maybe after implementing that it can run in under 1 sec.
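The effect described above can be illustrated with a simplified model (not the actual ONNX Runtime partitioner): when two consecutive nodes in a chain are assigned to different execution providers, the runtime must insert a MemCpy node to move the tensor between CPU and GPU memory, so every CPU fallback inside a GPU subgraph costs two copies. The op names and node chain below are hypothetical examples, not taken from the real UNet graph:

```typescript
// Simplified model of provider partitioning: each boundary between two
// different execution providers in a node chain requires one MemCpy
// (MemcpyToHost or MemcpyFromHost).
type Provider = "CPUExecutionProvider" | "JsExecutionProvider";

interface GraphNode {
  op: string;
  provider: Provider;
}

// Count cross-provider edges in a linear chain of nodes.
function countMemcpys(chain: GraphNode[]): number {
  let copies = 0;
  for (let i = 1; i < chain.length; i++) {
    if (chain[i].provider !== chain[i - 1].provider) copies++;
  }
  return copies;
}

// Hypothetical chain before the patch: an int64 Shape/Gather pair falls
// back to CPU, splitting the GPU subgraph and forcing two copies.
const before: GraphNode[] = [
  { op: "Conv", provider: "JsExecutionProvider" },
  { op: "Shape", provider: "CPUExecutionProvider" }, // int64 -> CPU fallback
  { op: "Gather", provider: "CPUExecutionProvider" },
  { op: "Mul", provider: "JsExecutionProvider" },
];

// After the patch, the same ops stay on the GPU provider: no copies.
const after: GraphNode[] = before.map((n) => ({
  ...n,
  provider: "JsExecutionProvider",
}));

console.log(countMemcpys(before)); // 2
console.log(countMemcpys(after)); // 0
```

This is why supporting these shape-manipulation ops for more data types removes MemCpy tokens roughly in pairs: each eliminated CPU island removes both the copy into host memory and the copy back.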
Directory contents:

- onnxjs/
- wasm/
- backend-onnxjs.ts
- backend-wasm.ts
- build-def.d.ts
- index.ts
- version.ts