ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
Remove useless NodeProto serializations (#18791)
Atanas Dimitrov · 4e2d88b75f · 2024-01-04
## Description
This pull request improves the efficiency of inference session creation by eliminating unnecessary `Node::ToProto` invocations. Each of these calls serializes a full `NodeProto` only to discard it, so removing them also removes the matching `~NodeProto` destructor work.
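
To make the pattern concrete, here is a hypothetical before/after sketch (illustrative only, not the actual diff; `node` stands for any `onnxruntime::Node` the session is inspecting):

```cpp
// Before: serialize the whole node into a temporary NodeProto just to read
// its attributes, paying for the copy and for ~NodeProto on every call.
ONNX_NAMESPACE::NodeProto node_proto;
node.ToProto(node_proto);
for (const auto& attr : node_proto.attribute()) {
  // ... inspect attr ...
}

// After: read the attributes directly from the in-memory Node, with no
// serialization round trip.
for (const auto& [name, attr] : node.GetAttributes()) {
  // ... inspect attr ...
}
```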

## Motivation and Context
This pull request targets low-hanging fruit in the inference session creation path. Removing the unneeded `Node::ToProto` calls streamlines the codebase and improves performance: the flame graphs attached to the PR show the share of time spent in `Node::ToProto` dropping accordingly.

### Code Snippet
```cpp
TEST(InferenceSessionTests, Bench) {
  // Initialize logging manager
  auto logging_manager = std::make_unique<logging::LoggingManager>(
      std::make_unique<CLogSink>(), logging::Severity::kVERBOSE, false,
      logging::LoggingManager::InstanceType::Temporal);

  // Create environment
  std::unique_ptr<Environment> env;
  auto st = Environment::Create(std::move(logging_manager), env);
  ASSERT_TRUE(st.IsOK());

  // Configure session options
  SessionOptions so;
  so.execution_mode = ExecutionMode::ORT_SEQUENTIAL;
  so.graph_optimization_level = TransformerLevel::Level2;
  so.intra_op_param.thread_pool_size = 1;

  // Initialize and load the InferenceSession
  InferenceSessionTestGlobalThreadPools session1{so, *env};
  ASSERT_STATUS_OK(session1.Load("big.onnx"));
  ASSERT_STATUS_OK(session1.Initialize());
}
```
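
Assuming a standard test build, the benchmark can be run in isolation with the usual gtest filter, e.g. `./onnxruntime_test_all --gtest_filter=InferenceSessionTests.Bench` (the test binary name may vary by configuration).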

### `big.onnx` model creation
```python
import onnx
import numpy as np
from spox import argument, build, Tensor
from spox.opset.ai.onnx import v17 as op
from spox.opset.ai.onnx.ml.v3 import label_encoder

# Start from a 1-D int64 input of symbolic length N.
a = argument(Tensor(np.int64, ('N',)))
c = a

# Chain 1000 elementwise Mul nodes.
for _ in range(1000):
    c = op.mul(c, op.const(np.ones(10000, dtype=np.int64)))

# Chain 6000 LabelEncoder nodes: 3000 int64 -> string / string -> int64 round trips.
all_strings = ["random_string" + str(i) for i in range(100)]
all_ints = list(range(len(all_strings)))
for _ in range(3000):
    c = label_encoder(c, keys_int64s=all_ints, values_strings=all_strings)
    c = label_encoder(c, keys_strings=all_strings, values_int64s=all_ints)

model: onnx.ModelProto = build(inputs={'a': a}, outputs={'c': c})
onnx.save(model, "big.onnx")
```
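
The resulting model contains roughly 7,000 nodes (1,000 `Mul` plus 6,000 `LabelEncoder`), so session creation dominates the benchmark and any per-node `Node::ToProto` overhead is easy to observe in a profiler.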

Testing a `Release` build with `perf` yields:
- Before: 3.3% of time spent in `Node::ToProto`
- After: 1.6% of time spent in `Node::ToProto`
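
These figures come from profiling the benchmark above; assuming a Linux environment, they can be reproduced with `perf record -g` on the test run followed by `perf report`.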

---------

Co-authored-by: Atanas Dimitrov <atanasdimitrov@Atanass-MacBook-Pro.local>

ONNX Runtime is a cross-platform inference and training machine-learning accelerator.

ONNX Runtime inference can enable faster customer experiences and lower costs, supporting models from deep learning frameworks such as PyTorch and TensorFlow/Keras as well as classical machine learning libraries such as scikit-learn, LightGBM, XGBoost, etc. ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators where applicable alongside graph optimizations and transforms. Learn more →

ONNX Runtime training can accelerate the model training time on multi-node NVIDIA GPUs for transformer models with a one-line addition for existing PyTorch training scripts. Learn more →

Get Started & Resources

Builtin Pipeline Status

[Build status badge table: rows for Windows, Linux, Mac, Android, iOS, Web, and Other; columns for Inference and Training]

Third-party Pipeline Status

[Build status badge table: Linux]

Data/Telemetry

Windows distributions of this project may collect usage data and send it to Microsoft to help improve our products and services. See the privacy statement for more details.

Contributions and Feedback

We welcome contributions! Please see the contribution guidelines.

For feature requests or bug reports, please file a GitHub Issue.

For general discussion or questions, please use GitHub Discussions.

Code of Conduct

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

License

This project is licensed under the MIT License.