ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
Yulong Wang · d2a1b7a353

## Introduce custom external data loader (#21634)
### Description

This PR introduces support for custom external data loaders. An EP can
register a custom external data loader to override the default behavior,
making it possible to upload initializers directly to the GPU.



### Motivation and Context

- In ONNX Runtime Web, WebAssembly uses 32-bit pointers
(`sizeof(size_t) == 4`), which imposes a hard 4GB limit on the maximum
addressable memory. As ONNX models get larger, this becomes a blocker
for supporting medium-sized language models.

- ORT runs out of memory because the current code always loads data into
CPU memory, including the .onnx file (protobuf) and external data
file(s). However, when a GPU EP is used, this data does not need to stay
in CPU memory: the only thing ORT does is load the data into memory,
upload it to the GPU, and then release it.

- Some platforms offer developers ways to upload data directly to the
GPU. For example, WebGPU allows uploading to the GPU directly from any
ArrayBuffer (which can be a side buffer that does not count toward the
4GB limit). This significantly reduces CPU memory usage.

### Design

The classes `ExternalDataLoader` and `ExternalDataLoaderManager` are
introduced. They are analogous to `DataTransfer` and
`DataTransferManager`: `InferenceSession` owns the manager object, and
`SessionState` keeps a reference to it.

A new method `GetExternalDataLoader` is added to `IExecutionProvider`.
An EP can override it to register an instance of a custom external data
loader.

The key function in an `ExternalDataLoader` class is the method `LoadTensor`:

```c++
  // the tensor is pre-created using the TensorProto info of the initializer and the MemoryInfo (from allocation plan).
  virtual common::Status LoadTensor(const Env& env,
                                    const std::filesystem::path& data_file_path,
                                    FileOffsetType data_offset,
                                    SafeInt<size_t> data_length,
                                    Tensor& tensor) const;
```
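As a rough illustration of how an EP might implement this interface, here is a minimal, self-contained sketch. The stand-in types (`Env`, `Tensor`, `common::Status`) and the loader name `MyGpuExternalDataLoader` are hypothetical simplifications, not the real ORT declarations:

```c++
#include <cstddef>
#include <cstdint>
#include <filesystem>
#include <fstream>
#include <vector>

// Toy stand-ins for the real onnxruntime types.
namespace common { struct Status { bool ok = true; static Status OK() { return {}; } }; }
using FileOffsetType = int64_t;
struct Env {};                                  // stand-in for onnxruntime::Env
struct Tensor { std::vector<std::byte> data; }; // stand-in: just owns a byte buffer

// Simplified version of the ExternalDataLoader interface shown above.
class ExternalDataLoader {
 public:
  virtual ~ExternalDataLoader() = default;
  virtual common::Status LoadTensor(const Env& env,
                                    const std::filesystem::path& data_file_path,
                                    FileOffsetType data_offset,
                                    size_t data_length,
                                    Tensor& tensor) const = 0;
};

// Hypothetical EP-specific loader: reads the initializer bytes at the given
// offset and would hand them straight to the device instead of keeping a
// long-lived CPU copy.
class MyGpuExternalDataLoader : public ExternalDataLoader {
 public:
  common::Status LoadTensor(const Env& /*env*/,
                            const std::filesystem::path& data_file_path,
                            FileOffsetType data_offset,
                            size_t data_length,
                            Tensor& tensor) const override {
    std::ifstream f(data_file_path, std::ios::binary);
    f.seekg(data_offset);
    tensor.data.resize(data_length);
    f.read(reinterpret_cast<char*>(tensor.data.data()),
           static_cast<std::streamsize>(data_length));
    // A real implementation would upload tensor.data to the GPU here
    // (e.g. a WebGPU buffer write) and release the staging CPU copy.
    return common::Status::OK();
  }
};
```

In the real API the tensor is pre-created from the initializer's TensorProto and the allocation plan's MemoryInfo, so the loader only needs to fill it.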

An EP can register this function; calls go through a few layers and
eventually reach `DeserializeTensorProto()` in the finalizing stage of
session initialization, where initializer tensors are created. The
behavior is changed to first look up a registered external data loader
that can handle the current memory info. If one is available, it is
used; otherwise the old code path is followed.
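The lookup-then-fallback step can be sketched as follows. This is a toy model, not the real ORT manager API: `MemoryInfo`, `CanLoad`, and `GetLoader` are illustrative names standing in for the actual signatures:

```c++
#include <memory>
#include <vector>

// Illustrative stand-ins for ORT types.
enum class DeviceType { CPU, GPU };
struct MemoryInfo { DeviceType device; };

class ExternalDataLoader {
 public:
  virtual ~ExternalDataLoader() = default;
  // Whether this loader can produce tensors for the given memory location.
  virtual bool CanLoad(const MemoryInfo& info) const = 0;
};

// Hypothetical GPU loader: only handles GPU memory.
class GpuLoader : public ExternalDataLoader {
 public:
  bool CanLoad(const MemoryInfo& info) const override {
    return info.device == DeviceType::GPU;
  }
};

// Analogous to ExternalDataLoaderManager: scans the registered loaders and
// returns the first one that handles the target memory, or nullptr so the
// caller falls back to the default CPU load path.
class ExternalDataLoaderManager {
 public:
  void RegisterLoader(std::unique_ptr<ExternalDataLoader> loader) {
    loaders_.push_back(std::move(loader));
  }
  const ExternalDataLoader* GetLoader(const MemoryInfo& info) const {
    for (const auto& l : loaders_) {
      if (l->CanLoad(info)) return l.get();
    }
    return nullptr;  // no custom loader registered: use the old code path
  }
 private:
  std::vector<std::unique_ptr<ExternalDataLoader>> loaders_;
};
```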
2024-08-27 12:18:52 -07:00

ONNX Runtime is a cross-platform inference and training machine-learning accelerator.

ONNX Runtime inference can enable faster customer experiences and lower costs, supporting models from deep learning frameworks such as PyTorch and TensorFlow/Keras, as well as classical machine learning libraries such as scikit-learn, LightGBM, and XGBoost. ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators where applicable, alongside graph optimizations and transforms. Learn more →

ONNX Runtime training can accelerate the model training time on multi-node NVIDIA GPUs for transformer models with a one-line addition for existing PyTorch training scripts. Learn more →

Get Started & Resources

Builtin Pipeline Status

[Build status badges for the inference and training pipelines on Windows, Linux, Mac, Android, iOS, Web, and other platforms]

Third-party Pipeline Status

[Build status badges for third-party Linux inference and training pipelines]

Data/Telemetry

Windows distributions of this project may collect usage data and send it to Microsoft to help improve our products and services. See the privacy statement for more details.

Contributions and Feedback

We welcome contributions! Please see the contribution guidelines.

For feature requests or bug reports, please file a GitHub Issue.

For general discussion or questions, please use GitHub Discussions.

Code of Conduct

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

License

This project is licensed under the MIT License.