Commit graph

9041 commits

Author SHA1 Message Date
pengwa
a49bb85cfe
Manage ORTModule configurations consistently (#16396)
### Manage ORTModule options

Move all env vars used for turning features ON/OFF into runtime options for
consistent management.


Note: each feature's switch is assigned in two phases: a default value,
then overwritten by an env var (if specified by the user). So env vars take
the highest priority when both phases explicitly set a value for a
feature.
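The two-phase assignment described above can be sketched as follows (a minimal illustration; `resolve_option` and the env var name `ORTMODULE_EXAMPLE_FEATURE` are hypothetical, not ORTModule's actual names):

```python
import os

def resolve_option(default_value, env_var_name):
    """Phase 1: start from the default; Phase 2: let an env var override it."""
    raw = os.environ.get(env_var_name)
    if raw is None:
        return default_value  # user did not set the env var
    # Env var takes the highest priority when both phases set a value.
    return raw == "1"

# Hypothetical feature switch: defaults to off unless the env var enables it.
os.environ["ORTMODULE_EXAMPLE_FEATURE"] = "1"
enabled = resolve_option(False, "ORTMODULE_EXAMPLE_FEATURE")
print(enabled)  # True: the env var overrides the default
```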



2023-06-27 19:19:36 +08:00
pengwa
403bebfb51
Use PadAndUnflatten to replace GatherGrad for restore (#16429)
2023-06-27 15:07:20 +08:00
cloudhan
ae6da03438
ROCm decoding (#16339) 2023-06-27 13:19:39 +08:00
Yi Zhang
6e9541046e
extend react native ci timeout limit (#16469)

### Motivation and Context
Two consecutive runs in the npm pipeline failed due to timeout.
2023-06-27 08:44:03 +08:00
guyang3532
eb4e6d2062
Support Mul and Sub in padding elimination (#16478)
### Description
Support Mul and Sub in padding elimination
2023-06-27 07:43:29 +08:00
Edward Chen
4a331ef667
Rework CPU MeanVarianceNormalization kernel to support arbitrary axes. (#16420) 2023-06-26 15:29:50 -07:00
Yifan Li
e2c214d81f
[TensorRT EP] TRT 8.6 minor version update (#16475)
### Description
* Minor version update: TRT 8.6.0.12->8.6.1.6
  * CI pipeline ymls/dockerfiles are updated
* cgmanifest.json/deps.txt/download-deps.yml are updated; Win trt
binaries uploaded to [win img
307029](https://aiinfra.visualstudio.com/AI%20Infra%20Management/_build/results?buildId=307029&view=results)
* Re-enable unit tests that failed in 8.6.0 and regained support in 8.6.1
2023-06-26 10:44:27 -07:00
Baiju Meswani
1f60414bc2
Load CheckpointState from a buffer (#16457) 2023-06-26 09:18:38 -07:00
Yifan Li
efe0af3720
[TensorRT EP] Fix nullptr check (#16468)
### Description
Fix the nullptr check so that it checks the actual existence of the
engine/context.
(Previously, it checked the address of the unique_ptr, which is never
null. Thanks @jslhcl for pointing that out.)

> A quick recall of struct
[trt_state](https://github.com/microsoft/onnxruntime/blob/main/onnxruntime/core/providers/tensorrt/tensorrt_execution_provider.h#L104):
> ```
> std::unique_ptr<nvinfer1::ICudaEngine>* engine = nullptr;
> std::unique_ptr<nvinfer1::IExecutionContext>* context = nullptr;
> ```


### Motivation and Context
https://github.com/microsoft/onnxruntime/issues/15982
The incorrect check could not stop TRT EP from loading an incompatible
engine cache, which triggered an unhandled exception.
2023-06-26 09:02:59 -07:00
Adam Louly
c55c6255e0
Eliminate safe nodes that are followed by a shape node. (#16065)
### Description
Eliminate Cast operator if Shape is the next one.

### Motivation and Context
#### Cast
When working with onnx opset 15 and above, the shape operator now
accepts all types of variables.
This change is documented in the [onnx
Changelog](https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Shape-15).

As a result, casting variables right before the Shape operation becomes
unnecessary. Removing these unnecessary Cast nodes simplifies the graph and
can improve performance.
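The idea can be sketched on a toy graph representation (plain dicts standing in for ONNX nodes; this is an illustrative sketch of the elimination, not the actual optimizer code):

```python
def eliminate_cast_before_shape(nodes):
    """Remove Cast nodes whose only consumers are Shape nodes (valid for
    opset >= 15, where Shape accepts any input type)."""
    consumers = {}
    for node in nodes:
        for inp in node["inputs"]:
            consumers.setdefault(inp, []).append(node)

    kept = []
    for node in nodes:
        users = consumers.get(node["outputs"][0], [])
        if node["op"] == "Cast" and users and all(u["op"] == "Shape" for u in users):
            # Rewire each Shape to read the Cast's input directly.
            for u in users:
                u["inputs"] = [node["inputs"][0] if i == node["outputs"][0] else i
                               for i in u["inputs"]]
            continue  # drop the now-unused Cast
        kept.append(node)
    return kept

graph = [
    {"op": "Cast", "inputs": ["x"], "outputs": ["x_f32"]},
    {"op": "Shape", "inputs": ["x_f32"], "outputs": ["x_shape"]},
]
optimized = eliminate_cast_before_shape(graph)
print([n["op"] for n in optimized])  # ['Shape']
print(optimized[0]["inputs"])        # ['x']
```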


## Results
Benchmark command:
torchrun examples/onnxruntime/training/language-modeling/run_clm.py
--model_name_or_path gpt2 --do_train --overwrite_output_dir --output_dir
./outputs/ --seed 1337 --fp16 True --per_device_train_batch_size 4
--num_train_epochs 1 --dataset_name wikitext --dataset_config_name
wikitext-2-raw-v1 --learning_rate 2e-5 --report_to none --optim
adamw_ort_fused

without changes:
***** train metrics *****
  epoch                    =        1.0
  train_loss               =     3.2981
  train_runtime            = 0:02:13.29
  train_samples            =       2318
  train_samples_per_second =      17.39
  train_steps_per_second   =      4.351

With my changes:
***** train metrics *****
  epoch                    =        1.0
  train_loss               =     3.2981
  train_runtime            = 0:02:08.98
  train_samples            =       2318
  train_samples_per_second =     17.971
  train_steps_per_second   =      4.497

We see around 3% gain.

---------

Co-authored-by: Adam Louly <adamlouly@microsoft.com@orttrainingdev9.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
2023-06-26 16:35:07 +08:00
Scott McKay
48eff09664
Fix file list for test of build with IO debug (#16474)
### Description
Update file list to adjust for recent changes to test infra. 


2023-06-26 16:36:22 +10:00
PeixuanZuo
7e211f0e03
[ROCm] Move mount data step into docker container (#16471)
Some CI jobs may be interrupted unexpectedly and fail to execute the
unmount-data step. The data left on the host device causes `device or
resource busy` errors and makes subsequent CI jobs fail.

Moving the mount-data step into the docker container means the host machine
is not left occupied when CI jobs exit incorrectly.
2023-06-26 10:25:06 +08:00
Guenther Schmuelling
8971af72af
fix webnn build (#16464)
remove unused variable.
2023-06-25 11:09:58 -07:00
zhijiang
9206b7cdc6
Zhijxu/cast propagation softmax (#16408)
Enhance cast propagation so that Softmax can run in fp16 when the data
flow is cast-to-fp32 > Softmax > cast-to-fp16.

This optimization saves GPU memory and yields a performance gain.
2023-06-25 10:28:54 +08:00
Tianlei Wu
9407c3270c
GPT-2 attention fusion for transformers >= 4.27 (#16461)
### Description
Before transformers 4.27, the causal mask used the uint8 data type, so
there was an extra Cast node to convert it to bool. This adds a pattern
without the Cast node to support attention fusion for GPT-2 models exported
with transformers >= 4.27.

### Motivation and Context

https://github.com/microsoft/onnxruntime/issues/16453
2023-06-23 15:38:35 -07:00
Hector Li
a8c313dec4
[QNN EP] Python script to modify Onnx model to make it aligned with converted QNN model (#16423)
### Description
Onnxruntime QNN EP can support context binary files generated by the QNN tool chain. However, the QNN-generated context binary file uses channel-last layout and 8-bit or 16-bit types for input and output. This script gets the QNN model input & output information from the QNN-converted model_net.json file and inserts Cast and Transpose nodes into the Onnx model if required.
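The alignment decision can be sketched as follows (a simplified illustration; the field names and the `alignment_nodes` helper are made up for the sketch and are not the actual script's API):

```python
def alignment_nodes(onnx_io, qnn_io):
    """Decide which nodes to insert so an ONNX model's I/O matches the
    QNN-converted model's channel-last, quantized I/O."""
    nodes = []
    if qnn_io["layout"] == "NHWC" and onnx_io["layout"] == "NCHW":
        # NCHW -> NHWC on the input side.
        nodes.append(("Transpose", {"perm": [0, 2, 3, 1]}))
    if qnn_io["dtype"] != onnx_io["dtype"]:
        # e.g. float32 -> uint8 to match QNN's quantized I/O.
        nodes.append(("Cast", {"to": qnn_io["dtype"]}))
    return nodes

onnx_input = {"layout": "NCHW", "dtype": "float32"}
qnn_input = {"layout": "NHWC", "dtype": "uint8"}  # e.g. parsed from model_net.json
print(alignment_nodes(onnx_input, qnn_input))
```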
2023-06-23 11:00:51 -07:00
Ryan Hill
d5b606d50d
Remove now duplicated symbol (#16458)
### Description
Change #16161 broke the ROCm build by duplicating this symbol. This removes
the duplicate to unblock the tests.
2023-06-23 09:21:03 -07:00
Chen Fu
5c125b4366
Cfu revertamx (#16455)
### Description

This reverts two PRs that aimed at reducing AMX toolchain
requirements. Unfortunately we still have some pipeline issues.

https://github.com/microsoft/onnxruntime/pull/16390
https://github.com/microsoft/onnxruntime/pull/16086

### Motivation and Context

It looks like gcc link-time optimization does not work very well with
inline assembly in the above PRs.
2023-06-23 09:20:23 -07:00
Rachel Guo
04dbdc96bf
[js/rn] Fix React Native CI pipeline E2E test (#16447)
### Description

Based on this kindly provided quick fix:
https://github.com/microsoft/onnxruntime/pull/16411

See the linked PR above for more details about bumping the AGP version,
etc.

Also fixed import header file path in detox e2e test.

### Motivation and Context

Good build:

https://dev.azure.com/onnxruntime/onnxruntime/_build/results?buildId=1041757&view=logs&j=de302ec2-2305-57e0-e8c6-cd89c569f2a3&t=9894c870-b8ce-548d-51ff-8f44d21a4117&l=18
2023-06-22 14:33:49 -07:00
Baiju Meswani
10ba1e270c
Minimal Build for On-Device Training (#16326)
🛠️ __Changes in this pull request:__

This pull request introduces two significant changes to the project:

- Changing the on-device training checkpoint format: The current
implementation stores the on-device training checkpoint as a sequence of
tensors in multiple files inside a checkpoint folder, which can be
inefficient in terms of storage and performance. In this PR, I have
modified the checkpoint format to use a flatbuffer table to save
the checkpoint to a single file, providing a more compact and efficient
representation. The changes around this are twofold:
- Add the checkpoint flatbuffer schema that will generate the necessary
checkpoint source files.
- Update the checkpoint saving and loading functionality to use the new
format.

- Adding support for onnxruntime minimal build: To support scenarios
where binary size is a constraint, I made changes to ensure that the
training build can work well with the minimal build.

🔍 __Open Issues:__
- In order to extract the optimizer type, the existing implementation
re-loaded the onnx optimizer model and parsed it. This is no longer
possible, since the model format can either be onnx or ort. One idea is
to do the same for ort format optimizer model. This needs some
investigation.
- Changes to the offline tooling to generate ort format training
artifacts.
- End-to-end training example showcasing the use of the minimal training
build.
- Add support for export model for inferencing in a minimal build.
2023-06-22 12:27:23 -07:00
dependabot[bot]
97f4484df9
Bump actions/setup-python from 3 to 4 (#16404) 2023-06-22 18:12:11 +00:00
Yi Zhang
8e8840f1de
Enable Web CI on Linux (#16419)
### Description
1. Enable Web ci on Linux

### Motivation and Context
1. Speed up the web CI: the duration can be reduced from 160 minutes to
130 minutes, a time saving of about 20%. The total computation time is 455
minutes now; moved to Linux, it could be reduced to 336 minutes.
2. It's the first step toward enabling the compilation cache for emscripten.
3. Per Yulong's request, build_web stages still use the Windows pool.


![image](https://github.com/microsoft/onnxruntime/assets/16190118/c9496408-74bd-45ea-b4ae-a4dd2a574d17)


https://dev.azure.com/onnxruntime/onnxruntime/_build/results?buildId=1038382&view=results
2023-06-22 15:42:58 +08:00
Pranav Sharma
a270d8407e
Allow saving of large models after optimization (github issue 12882) (#16440)
### Description
Allow saving of large models after optimization.

### Motivation and Context
Addresses https://github.com/microsoft/onnxruntime/issues/12882
2023-06-21 22:46:26 -07:00
Yufeng Li
89f8f20a61
fix protobuf copyfrom 2G limit (#16422)
### Description
protobuf CopyFrom doesn't work for models > 2GB in version 4.23. This PR
removes the copy in the Calibrator.

Currently the Calibrator copies the ModelProto to avoid changing it. The
reason is that quantize_static passes a ModelProto to the Calibrator to
calibrate quantization parameters and then uses it for quantization; if
the Calibrator changed the ModelProto, quantization wouldn't work.

This PR changes quantize_static to pass a model path to the Calibrator
instead of a ModelProto, and makes the Calibrator take only a model path as
input, which is how it is used in most cases.

This PR also removes the optimization from quantization. Users need to
call pre-processing to optimize the model.
2023-06-21 20:45:11 -07:00
kunal-vaishnavi
4b69226fca
Fix input typo in Whisper export with beam search (#16439)
### Description
This PR fixes a typo with assigning the `repetition_penalty` input in
the Whisper export with beam search model. It is a follow-up to the
[export stabilization
PR](https://github.com/microsoft/onnxruntime/pull/16297).


### Motivation and Context
The `repetition_penalty` input should be set to `repetition_penalty`
instead of `input_features`.
2023-06-21 18:59:11 -07:00
Baiju Meswani
42489a8a24
Add ability to create ort format models from training offline utility (#16360) 2023-06-21 18:51:43 -07:00
yf711
0ad0d6ebbf
Unblock Linux MultiGPU TensorRT CI (#16446)
### Description
Revert docker base image to
nvidia/cuda:11.8.0-cudnn8-devel-ubuntu20.04@sha256:b754c43fe9d62e88862d168c4ab9282618a376dbc54871467870366cacfa456e



### Motivation and Context
The default image environment of nvidia/cuda:11.8.0-cudnn8-devel-ubuntu20.04
had a minor upgrade, which makes the Linux MultiGPU TensorRT CI (NV12
instance with a Maxwell GPU) fail on three
CApiTestGlobalThreadPoolsWithProvider tests (these three tests produce
errors above the tolerance).

That minor upgrade includes cudnn 8.7.0->8.9.0, which might be a factor
that makes the Maxwell GPU produce higher error. CIs with T4 GPUs are not
affected.
2023-06-21 17:15:39 -07:00
Yulong Wang
de476c8075
[js/web] update webgl context creating (#16436)
### Description
Modify the creation of the webgl context.

Previous behavior:
STEP 1 - create a canvas (document.createElement); if it fails, go to step
2, else step 3.
STEP 2 - create an offscreenCanvas; if it fails, abort.
STEP 3 - use the canvas created in step 1 or 2 to create the webgl context;
if successful, return the context, else abort.

New behavior:
STEP 1 - create an offscreenCanvas; if it fails, go to step 3.
STEP 2 - use it to create the webgl context; if successful, return the
context.
STEP 3 - create a canvas (document.createElement); if it fails, abort.
STEP 4 - use it to create the webgl context; if successful, return the
context, else abort.

Motivation:
We found that in some environments, normalCanvas.getContext() returns null
but offscreenCanvas.getContext() returns the context object. When
offscreenCanvas is available, it is a good idea to always prefer it.
2023-06-21 17:10:26 -07:00
pallavides
3c2d52a995
DORT support for custom ops (#16392)
### Description
DORT support for custom ops


### Motivation and Context
Custom ops registered via a custom_ops shared library cannot currently be
run using DORT. This PR enables that by:
1. registering the custom ops supported in DORT
2. plumbing down session_options from OrtBackend when creating the
InferenceSession, which were used to register the custom_ops shared
library via
`session_options.register_custom_ops_library(shared_library)`
2023-06-21 15:58:05 -07:00
Yulong Wang
da532f3f5a
[js/webgpu] fix GPU to GPU memcpy (#16393)
### Description
Fixes a GPU to GPU memory copy bug which causes #16267
2023-06-21 15:50:08 -07:00
Tianlei Wu
52e2bdf541
Add license header to CUDA related files (#16437)
Add license header for files under core/providers/cuda or contrib_ops/cuda/
2023-06-21 13:31:43 -07:00
RandySheriffH
6e29e185f3
Clean AzureEP logics (#16367)
Moving out AzureEP invokers out of core runtime.

---------

Co-authored-by: Randy Shuai <rashuai@microsoft.com>
2023-06-21 09:38:52 -07:00
Chi Lo
4e3cff60fd
CUDA graph support for TRT EP (#16081)
CUDA EP already supports [CUDA
graph](https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#cuda-graphs),
and we observed that some models can benefit from using CUDA graph with
`trtexec`. Therefore, this PR enables CUDA graph support for TRT EP.

The implementation is based on
https://github.com/microsoft/onnxruntime/pull/9978 with the same
[constraints](https://github.com/microsoft/onnxruntime/pull/9978) as
below:

- Models with control-flow ops (i.e. If, Loop and Scan ops) are not
supported.
- Usage of CUDA Graphs is limited to models where-in all the model ops
(graph nodes) can be partitioned to the TRT EP.
- The input/output types of models need to be tensors.
- Shapes of inputs/outputs cannot change across inference calls.
- IObinding is required.
2023-06-21 09:36:45 -07:00
Rachel Guo
961fa7274a
[NNAPI doc] add reducemean to supported op list (#16414)
### Description

As title.

2023-06-21 00:29:20 -07:00
Rachel Guo
b4b126ffb0
Set onnxruntime-c local pod path environment variable for react native e2e tests on ci (#16431)
### Description
Set onnxruntime-c local pod path environment variable for react native
e2e tests on react-native-ci.yml


### Motivation and Context

Previously the E2E test project was not properly consuming a locally
built onnxruntime-c pod.


https://github.com/microsoft/onnxruntime/pull/16411#issuecomment-1598512816
2023-06-21 00:27:36 -07:00
yf711
9a80955c45
Add compute capacity to trtep engine cache file (#16356)
### Description

Add "_smXX" to the TRT EP engine cache file name, where "sm" stands for
"Streaming Multiprocessor".

> The GPU compute capability version is prefixed with "SM" because
NVIDIA typically improves and updates the SM in each new GPU
architecture.

### Motivation and Context

Github issue: https://github.com/microsoft/onnxruntime/issues/15982

Reduce the chance of misusing an incompatible engine cache when the user
switches between GPU devices with different compute capability.

* The prevention can't be 100%, as model size & GPU memory size could be
other factors that make a cache incompatible.
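The naming scheme can be illustrated with a short sketch (the exact filename pattern here is an assumption for illustration, not the precise format TRT EP uses):

```python
def engine_cache_name(model_hash, sm_major, sm_minor):
    """Embed the GPU compute capability ("smXX") in the engine cache file
    name so a cache built on a different GPU is not reused by mistake."""
    return f"TensorrtExecutionProvider_{model_hash}_sm{sm_major}{sm_minor}.engine"

# e.g. a T4 (compute capability 7.5) and an A100 (8.0) get distinct caches
print(engine_cache_name("abc123", 7, 5))  # TensorrtExecutionProvider_abc123_sm75.engine
```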
2023-06-20 23:34:24 -07:00
zesongw
64b22cd00f
[WebNN EP] Support Where Op (#16380)
### Description
Add Where Op for WebNN EP as ternary conditional operator.

---------

Co-authored-by: Dwayne Robinson <dwayner@microsoft.com>
2023-06-20 16:45:40 -07:00
Justin Chu
e2381c42f2
Use M_PI to replace 3.14 constants (#16421)
### Description

Use M_PI to replace 3.14 constants

### Motivation and Context

Fixes #16413
2023-06-20 15:09:10 -07:00
Yuhong Guo
48e6186b1a
Move tests from core/providers/cuda/test/* to test/providers/cuda/ and refactor CUDA UT (#16161)
### Description

1. Add a new test lib `onnxruntime_providers_cuda_ut` which is similar
to `onnxruntime_providers_cuda` but `onnxruntime_providers_cuda_ut` is
only built if `onnxruntime_BUILD_UNIT_TESTS` is set. We can call all
CUDA UTs through this ut lib without affecting production lib
`onnxruntime_providers_cuda`.
2. Move all test cases from `core/providers/cuda/test/` to
`test/providers/cuda/`. These test cases are built into lib
`onnxruntime_providers_cuda_ut` and run by `./onnxruntime_test_all
--gtest_filter="*CUDA_EP_Unittest*"`. Since the lib is only for tests, we
can use gtest macros in the test cases. The previous implementation did not
support using the gtest lib in the CUDA UT cases.
3. The cmake code in `cmake/onnxruntime_providers.cmake` is refactored a
bit. A new function `onnxruntime_add_object_library` builds an
object target. The 2 libs `onnxruntime_providers_cuda_ut` &
`onnxruntime_providers_cuda` share most of the code, so the object files
can be used in both libs, which helps reduce build time. Another
function `config_cuda_provider_shared_module` is used to configure all 3
similar
targets(onnxruntime_providers_cuda_obj/onnxruntime_providers_cuda/onnxruntime_providers_cuda_ut).
4. Refactored the test to call `testing::InitGoogleTest` &
`RUN_ALL_TESTS` in `libonnxruntime_providers_cuda_ut.so`'s `TestAll`.
After this change, we can see all the cases running in
`CUDA_EP_Unittest.All`:

![image](https://github.com/microsoft/onnxruntime/assets/19584326/8ff80df6-060b-4ef0-90b7-657e68d3db87)




### Motivation and Context

After https://github.com/microsoft/onnxruntime/pull/13016, there are
still test files in test/providers/cuda/ that are not moved to
core/providers/cuda/test/ and the test cases are disabled. This PR helps
to clean the unfinished TODOs.

Even though onnxruntime_shared_lib_test covers some tests for the CUDA
provider, it works like a coarse-grained end-to-end test. If the CUDA unit
tests can run cases for a single component, that would be helpful for CUDA
developers.

---------

Co-authored-by: Yuhong Guo <yuhong.gyh@antgroup.com>
2023-06-20 14:54:55 -07:00
Aung T Naing
e83993bbaf
Added MatMul tests for QNN EP (#15956)
### Description
Added test coverage for QNN EP MatMul op


### Motivation and Context
Created test coverage for HTP based MatMul with broadcasting.

---------

Co-authored-by: Hector Li <hecli@microsoft.com>
2023-06-20 13:58:56 -07:00
Sheil Kumar
7b51a1b17d
Enable Microsoft.AI.MachineLearning NuGet with WinUI projects (#16415)
The Microsoft.AI.MachineLearning NuGet fails to build in WinUI projects
due to a conflict between the ReferenceCopy of binaries that occurs with
managed applications and the manual binplacement that occurs with
native applications.

With WinUI, both cases are triggered, and a duplicate binplace is
detected as an error.

Fix: Don't rely on the ReferenceCopy for WinUI applications, and
manually binplace the Microsoft.AI.MachineLearning dll.
2023-06-20 13:10:19 -07:00
Wei-Sheng Chin
c8de3eaac6
Update DORT to follow PyTorch changes (#16394)
Fix #16355. The root-cause change in PyTorch is
[#103302](https://github.com/pytorch/pytorch/pull/103302), which seems to
block calling make_fx inside a dynamo backend.

Changes:
1. Move decomposition to `register_backend.py`, so we don't have to call
`make_fx` inside DORT, which triggers a bunch of new exceptions.
2. Remove shape inference based on FakeTensorProp since the FX graph
received from dynamo contains all shapes now.
3. Fix a macro bug so that DORT can build without CUDA.

Before (3),
```
#if defined(USE_CUDA) || defined(USE_ROCM)
  virtual PhiloxGenerator& PhiloxGenerator__Default() = 0;
#ifdef ENABLE_TRAINING_TORCH_INTEROP
...
#endif
#endif
```
After (3),
```
#if defined(USE_CUDA) || defined(USE_ROCM)
  virtual PhiloxGenerator& PhiloxGenerator__Default() = 0;
#endif
#ifdef ENABLE_TRAINING_TORCH_INTEROP
...
#endif
```
The latter looks better since `ENABLE_TRAINING_TORCH_INTEROP` is
for the Python bridge code, not for the random-number-generating kernel
`PhiloxGenerator`.
2023-06-20 12:06:50 -07:00
Rachel Guo
30bb0959dc
[NNAPI EP] Add ReduceMean Op support (#16294)
### Description

As title.

Special cases for ReduceMean:
[UPDATE] The following cases are now supported, by converting them to
provide an input containing all axes to NNAPI.
Behaviors when axes is not provided or axes provided as an empty vector:
For ReduceMean Opset version 18:
- Support case `axes` is provided as empty with `noop_with_empty_axes`
set to true.
- Support case `axes` is not provided with `noop_with_empty_axes` set to
true.
All treat as identity op.
- Does not support the case when `axes` is not provided/provided as
empty but `noop_with_empty_axes` is set to false.

For ReduceMean opset version 13 and earlier:
- Does not support the case when the `axes` attribute is not provided (onnx
treats this as reducing all dimensions by default, and that case is not
implemented by NNAPI).


https://developer.android.com/ndk/reference/group/neural-networks#group___neural_networks_1ggaabbe492c60331b13038e39d4207940e0a047fe95a35b27f45c05432b6ca18eb6c

> 1: A 1-D Tensor of
[ANEURALNETWORKS_TENSOR_INT32](https://developer.android.com/ndk/reference/group/neural-networks#group___neural_networks_1ggaf06d1affd33f3bc698d0c04eceb23298ac34965d8e76ac5acfddf5acd9e40f896).
The dimensions to reduce. Must be in the range [-rank(input_tensor),
rank(input_tensor)).NOTE: When the operation was introduced, the
documentation incorrectly stated that if dimensions were empty, the
operation would reduce across all dimensions. This behavior was never
implemented.

### Motivation and Context

Fixes issue #16194

---------

Co-authored-by: rachguo <rachguo@rachguos-Mini.attlocal.net>
2023-06-20 11:09:00 -07:00
Yufeng Li
d190db7fcd
Update default external_data_location for pre-process of quantization (#16399)
external_data_location should be a string/file name indicating the file
name of the external data, not a directory.
2023-06-20 09:37:17 -07:00
Yulong Wang
b8917ad84f
[js/web] fix nodejs detection (#16400)
### Description
We used to use `typeof fetch === 'undefined'` as the condition to detect
whether the environment is Node.js. Before Node.js v18, this worked.
However, Node.js v18 introduced the `fetch` function, so this check no
longer works.

This PR changes the condition to check whether `process`,
`process.versions` and `process.versions.node` exist.

Checking whether `process` exists is not enough, because in some
configurations webpack may polyfill Node.js's `process`.
2023-06-20 00:20:58 -07:00
Prateek Chokse
12dffef768
added support for cmake "find_package" (#8919)
**Description**: 
Adds support for cmake find_package.

**Motivation and Context**
As mentioned in issue #7150, onnxruntime doesn't support CMake
find_package. This PR adds that support and also adds the CMake package
version file. Now anyone can link onnxruntime like this:
```cmake
find_package(onnxruntime)
add_executable(test Source.cpp)
target_link_libraries(test PRIVATE onnxruntime::onnxruntime)
```
this also simplifies #3124
2023-06-19 22:20:31 -07:00
cao lei
dd72192cf4
ExecutionProvider API refactor - move allocator from EP level to SessionState level and indexed by OrtDevice (#15833)
### Description
This PR is to refactor ExecutionProvider API for memory management,
which is to move allocators from EP level to SessionState level and
indexed by OrtDevice



### Motivation and Context
With this change, the EP level no longer bears the burden of maintaining
allocators, which is friendlier for EP developers.

---------

Co-authored-by: Lei Cao <leca@microsoft.com@orttrainingdev8.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
2023-06-19 17:44:45 -07:00
jingyanwangms
5dcaf70501
Adding this set_to_none flag to zero_grad to have signature parity with pytorch Adam (#16375)
### Description
torch.optim Adam zero_grad() signature is
zero_grad(set_to_none=True)

https://pytorch.org/docs/stable/generated/torch.optim.Adam.html#torch.optim.Adam.zero_grad

We set this flag in initialization, similar to deepspeed:
https://deepspeed.readthedocs.io/en/latest/optimizers.html#deepspeed.ops.adam.FusedAdam

Adding this flag to have signature parity with pytorch Adam
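The parity contract can be sketched with a minimal optimizer stub (illustrative only; the real ORT training optimizer operates on device gradient buffers, not Python lists):

```python
class TinyOptimizer:
    """Minimal stub showing the zero_grad(set_to_none=...) contract."""
    def __init__(self, params):
        self.params = params  # each param: {"value": ..., "grad": ...}

    def zero_grad(self, set_to_none=True):
        for p in self.params:
            if set_to_none:
                p["grad"] = None   # drop the buffer; matches torch's default
            else:
                p["grad"] = 0.0    # keep the buffer, just clear it

opt = TinyOptimizer([{"value": 1.0, "grad": 0.5}])
opt.zero_grad()                  # default set_to_none=True, like torch.optim.Adam
print(opt.params[0]["grad"])     # None
opt.params[0]["grad"] = 0.5
opt.zero_grad(set_to_none=False)
print(opt.params[0]["grad"])     # 0.0
```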

### Motivation and Context
Easier model integration

Co-authored-by: Jingyan Wang <jingywa@microsoft.com@orttrainingdev7.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
2023-06-19 17:27:41 -07:00
PeixuanZuo
470d6c1cce
[ROCm] Delete unused file to fix Component Governance Alert (#16407)
Delete unused file to fix Component Governance Alert
2023-06-19 11:28:32 -07:00
guyang3532
341484e67c
Embedding sparsity optimization (#16141)
### Description
Optimize compute graph by eliminating padding in embedding.


### Motivation and Context
The computation on padding in the nodes after the embedding is unnecessary
and wastes computation resources.
This PR adds a PaddingElimination optimizer to check for and
eliminate the padding after the embedding automatically by modifying the
graph.

### Implementation:
1. Find and check the embedding node in the graph.
2. Iterate over the subgraph after the embedding node and record all the
input nodes and output nodes of this subgraph.
3. Insert 'Reshape + ShrunkenGather' to flatten each input node's shape
from [batch_size, seqlen, ...] to [valid_token_without_padding, ...],
and insert 'GatherGrad + Reshape' to unflatten each output node's shape
from [valid_token_without_padding, ...] to [batch_size, seqlen, ...].
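The flatten/unflatten steps above can be illustrated numerically (pure-Python stand-ins for Reshape + ShrunkenGather and GatherGrad + Reshape; assumes a pad token id of 0 for the sketch):

```python
PAD = 0

def shrink(batch):
    """Flatten [batch, seqlen] into the valid tokens plus their flat indices."""
    flat = [tok for row in batch for tok in row]
    idx = [i for i, tok in enumerate(flat) if tok != PAD]
    return [flat[i] for i in idx], idx

def unshrink(values, idx, batch_size, seqlen):
    """Scatter valid values back into the padded [batch, seqlen] shape."""
    flat = [PAD] * (batch_size * seqlen)
    for i, v in zip(idx, values):
        flat[i] = v
    return [flat[r * seqlen:(r + 1) * seqlen] for r in range(batch_size)]

batch = [[5, 7, PAD, PAD], [3, PAD, PAD, PAD]]  # 8 slots, only 3 valid tokens
tokens, idx = shrink(batch)
print(tokens)                                # [5, 7, 3] - compute only on these
print(unshrink(tokens, idx, 2, 4) == batch)  # True - padded shape restored
```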

---------

Co-authored-by: mindest <linminuser@gmail.com>
2023-06-19 20:34:53 +08:00