onnxruntime/js/common/lib/tensor-utils-impl.ts
Yulong Wang e5ca3f3dcb
[js/api] introducing IO binding for tensor (#16452)
[//]: # (## Work In Progress. Feedback is welcome!)

### Description
This PR adds a few properties, methods, and factories to the `Tensor`
type to support the IO-binding feature. This allows users to create
tensors from GPU- or CPU-bound data without forcing a data transfer
between CPU and GPU.

This change is one way to resolve #15312

### Change Summary
1. Add properties to `Tensor` type:
a. `location`: indicates where the data resides. Valid values are
`cpu`, `cpu-pinned`, `texture`, and `gpu-buffer`.
b. `texture`: a readonly property of type `WebGLTexture`, sitting
alongside `data`. Available only when `location === 'texture'`.
c. `gpuBuffer`: a readonly property of type `GPUBuffer`, sitting
alongside `data`. Available only when `location === 'gpu-buffer'`.

2. Add methods to the `Tensor` type (usually used on inference
outputs):
- async method `getData()` allows users to download data from GPU to
CPU manually.
- method `dispose()` allows users to release GPU resources manually.

3. Add factories for creating `Tensor` instances:
    a. `fromTexture()` to create a tensor bound to a WebGL texture
    b. `fromGpuBuffer()` to create a tensor bound to a WebGPU `GPUBuffer`
    c. `fromPinnedBuffer()` to create a tensor backed by a CPU pinned buffer
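The location-tagged data model described above behaves like a discriminated union. As a minimal sketch (the type and field names below are simplified stand-ins for illustration, not the actual declarations in `tensor-factory.js`), dispatching on `location` narrows the type:

```typescript
// Simplified stand-ins for the location-tagged parameter shapes described above.
type CpuPinnedParams = { location: 'cpu-pinned'; data: Float32Array; dims: number[] };
type TextureParams = { location: 'texture'; texture: object; dims: number[] };
type GpuBufferParams = { location: 'gpu-buffer'; gpuBuffer: object; dims: number[] };
type Params = CpuPinnedParams | TextureParams | GpuBufferParams;

// Switching on the discriminant narrows each branch to one member of the union,
// which is the same pattern tensorReshape uses internally.
const describe = (p: Params): string => {
  switch (p.location) {
    case 'cpu-pinned':
      return `pinned buffer of length ${p.data.length}`;
    case 'texture':
      return 'WebGL texture';
    case 'gpu-buffer':
      return 'WebGPU buffer';
  }
};
```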

### Examples:

Create a tensor from a texture and pass it to an inference session as an input:
```js
// when creating the session, specify that we prefer 'image_output:0' to be stored on GPU as a texture
const session = await InferenceSession.create('./my_model.onnx', {
  executionProviders: [ 'webgl' ],
  preferredOutputLocation: { 'image_output:0': 'texture' }
});

...

const myImageTexture = getTexture(); // user's function to get a texture
const myFeeds = { input0: Tensor.fromTexture(myImageTexture, { width: 224, height: 224 }) }; // shape [1, 224, 224, 4], RGBA format.
const results = await session.run(myFeeds);
const myOutputTexture = results['image_output:0'].texture;
```
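A hedged sketch of the `getData()` / `dispose()` lifecycle on a GPU-located output: only the method names come from this PR; the tensor object below is a mock, since the real methods operate on GPU resources.

```typescript
// Mock of a GPU-located output tensor exposing the lifecycle described above.
// This is NOT the real onnxruntime API, just an illustration of the contract.
interface GpuOutputLike {
  getData(): Promise<Float32Array>; // download data from GPU to CPU manually
  dispose(): void;                  // release the GPU resource manually
}

const makeMockOutput = (values: number[]): GpuOutputLike => {
  let disposed = false;
  return {
    getData: async () => {
      if (disposed) throw new Error('tensor already disposed');
      return new Float32Array(values);
    },
    dispose: () => { disposed = true; },
  };
};

// Typical lifecycle: read the data back, then free the underlying resource.
const consume = async (t: GpuOutputLike): Promise<number> => {
  const data = await t.getData();
  t.dispose();
  return data.length;
};
```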
2023-08-29 12:58:26 -07:00


// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License.
import {CpuPinnedConstructorParameters, GpuBufferConstructorParameters, TextureConstructorParameters} from './tensor-factory.js';
import {Tensor} from './tensor-impl.js';
/**
* calculate size from dims.
*
* @param dims the dims array. May be an illegal input.
*/
export const calculateSize = (dims: readonly unknown[]): number => {
  let size = 1;
  for (let i = 0; i < dims.length; i++) {
    const dim = dims[i];
    if (typeof dim !== 'number' || !Number.isSafeInteger(dim)) {
      throw new TypeError(`dims[${i}] must be an integer, got: ${dim}`);
    }
    if (dim < 0) {
      throw new RangeError(`dims[${i}] must be a non-negative integer, got: ${dim}`);
    }
    size *= dim;
  }
  return size;
};

/**
* implementation of Tensor.reshape()
*/
export const tensorReshape = (tensor: Tensor, dims: readonly number[]): Tensor => {
  switch (tensor.location) {
    case 'cpu':
      return new Tensor(tensor.type, tensor.data, dims);
    case 'cpu-pinned':
      return new Tensor({
        location: 'cpu-pinned',
        data: tensor.data as CpuPinnedConstructorParameters['data'],
        type: tensor.type as CpuPinnedConstructorParameters['type'],
        dims,
      });
    case 'texture':
      return new Tensor({
        location: 'texture',
        texture: tensor.texture,
        type: tensor.type as TextureConstructorParameters['type'],
        dims,
      });
    case 'gpu-buffer':
      return new Tensor({
        location: 'gpu-buffer',
        gpuBuffer: tensor.gpuBuffer,
        type: tensor.type as GpuBufferConstructorParameters['type'],
        dims,
      });
    default:
      throw new Error(`tensorReshape: tensor location ${tensor.location} is not supported`);
  }
};