mirror of
https://github.com/saymrwulf/onnxruntime.git
synced 2026-05-14 20:48:00 +00:00
19 commits
`c0b68e77af` **Fix warnings (#21809)**

Minor changes to resolve some warnings in ORT. BinSkim for WindowsAI (which consumes ORT) treats warnings as errors and has hit these warnings; as a security requirement, warnings such as "signed/unsigned mismatch" must be resolved.

`eeef157888` **Format c++ code under winml/ (#16660)**

winml/ was previously excluded from the lintrunner config. This change includes the directory and adds a clang-format config file specific to winml/ that fits the existing style. Signed-off-by: Justin Chu <justinchu@microsoft.com>

`ce3b4eabd3` **Implement Optional Metadata support and C# test support (#15314)**

Implements Optional type metadata support in the library, along with optional support in the C# API. Adds Sequence, Map, and Optional test data support and test execution, prunes tests, and provides more detail for failing tests in the C# code. Note: this PR does not enable running ONNX test models in C++. Motivation: opset 18 optional type support.

`75584c5fa8` **Enabling thread pool to be numa-aware (#13778)**

Enables the ORT thread pool to be NUMA-aware, so that threads can be created and distributed evenly among NUMA nodes. In addition, to facilitate performance tuning, the PR adds a new API that lets customers attach threads to specific logical processors. See the API [definition](https://github.com/microsoft/onnxruntime/pull/13778/files#diff-5845a5c76fb64abdc8f0cffe21b37f8da1712674eb3abc4cd87190891be1bd48) for details. Co-authored-by: Randy Shuai <rashuai@microsoft.com>

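Distributing threads "evenly among numa nodes", as this commit describes, amounts to a round-robin placement policy. The sketch below models only that assignment policy; the function name, thread count, and node count are hypothetical, and this is not the ORT implementation:

```python
from collections import Counter

def distribute_threads(num_threads: int, numa_nodes: int) -> list[int]:
    """Assign each thread index to a NUMA node round-robin, so that
    node populations differ by at most one thread."""
    return [i % numa_nodes for i in range(num_threads)]

# 10 threads over 4 nodes: nodes 0 and 1 receive 3 threads, nodes 2 and 3 receive 2.
placement = distribute_threads(10, 4)
counts = Counter(placement)
```

On a real system the placement list would then be applied with an OS thread-affinity call per logical processor, which is the kind of control the new API in this PR exposes to callers.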
`56387c3c31` **Fix SDL Unmatched Annotation Errors (#13162)**

Fixes three SDL unmatched-annotation errors. Co-authored-by: Numfor Mbiziwo-Tiapo <numform@microsoft.com>

`c20abcab87` **User/brianma/eo (#13152)**

Fixes two SDL issues: one was a SAL mismatch, the other was handling an optional null pointer.

`6255194659` **All LearningModelSessions created from a common LearningModelDevice should share the same thread pool (#11457)**

- Share thread pools between devices
- Make tests reuse the device
- Change CPU thread pool options for DML sessions to use one thread with no spinning
- Fix a test failure
- Update missing type constraints for DFT
- Add a comment and rename an inference session parameter
- Default a missing value that was causing inconsistent test behavior

Co-authored-by: Sheil Kumar <sheilk@microsoft.com>

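The invariant in this commit, that every session created from the same device sees one shared thread pool rather than creating its own, can be sketched generically. The `Device` and `Session` classes below are hypothetical stand-ins, not the WinML `LearningModelDevice`/`LearningModelSession` API:

```python
from concurrent.futures import ThreadPoolExecutor

class Device:
    """Hypothetical device object that lazily creates a single shared pool."""
    def __init__(self, num_threads: int = 4):
        self._num_threads = num_threads
        self._pool = None

    def thread_pool(self) -> ThreadPoolExecutor:
        if self._pool is None:  # created once, on first use, then reused
            self._pool = ThreadPoolExecutor(max_workers=self._num_threads)
        return self._pool

class Session:
    """Hypothetical session that borrows its device's pool instead of owning one."""
    def __init__(self, device: Device):
        self.pool = device.thread_pool()

device = Device()
a, b = Session(device), Session(device)
shared = a.pool is b.pool  # both sessions reuse the same executor
```

Caching the pool on the device rather than the session bounds the total thread count by the number of devices, not the number of sessions, which is the point of the change.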
`5fbfca3d58` **Add Experimental API for setting model name (#10518)**

- Add an experimental API for editing the model name
- Rename EditModelName to SetName
- Change the API to take a C string
- Update SetName to edit the proto
- Test that the model proto gets changed
- Remove comments
- Skip inbox tests
- Use the filehelper path

Co-authored-by: Numfor Mbiziwo-Tiapo <numform@microsoft.com>

`a17bdaf725` **Enable JoinModels API in WinML+RT Experimental API (#9746)**

- Dynamic ONNX model fusion
- Empty node names should remain empty
- Comments and cleanup
- Reverse the logic for promoting unlinked outputs
- Address PR feedback and fix typos
- Fix model outputs with promoted unlinked outputs
- Remove the disembodied model

Co-authored-by: Sheil Kumar <sheilk@microsoft.com>

`62c0d24340` **Fix Windows Store build (#8753)**

- Remove APIs unavailable in Store builds (#8349, #8178, #8065)
- Add UWP stubs of C runtime functions
- Remove UWP-incompatible tests from the UWP and Store builds
- Use the UWP stubs in Store builds only
- Skip the partition check outside of Windows
- Remove an unused WRL include
- Work around a Windows header not including what it uses
- Fix a precompiled-header name clash
- Work around SDK bugs
- DXCore workaround on Windows 7
- Bump WinML to target Windows 8
- Fix warnings and remove unnecessary workarounds
- Remove desktop-only APIs from the DML adapter

`0725f80d2d` **Revert "Fix Windows Store build (#8481)" (#8679)**

This reverts commit

`53e7831b53` **Fix Windows Store build (#8481)**

- Remove APIs unavailable in Store builds (#8349, #8178, #8065)
- Add UWP stubs of C runtime functions
- Remove UWP-incompatible tests from the UWP and Store builds
- Use the UWP stubs in Store builds only
- Skip the partition check outside of Windows
- Remove an unused WRL include
- Work around a Windows header not including what it uses
- Fix a precompiled-header name clash
- Work around SDK bugs
- DXCore workaround on Windows 7
- Bump WinML to target Windows 8
- Fix warnings and remove unnecessary workarounds

`c3129306e5` **Enable string attributes for experimental model building (#8428)**

Adds string attribute support and updates an error message. Co-authored-by: Sheil Kumar <sheilk@microsoft.com>

`87cb6fd495` **Add LearningModelBuilder to WinML Experimental Namespace along with various Audio operators (#6623)**

Adds model-building support through the WinML adapter, plus audio operators: DFT (switched from a recursive to an iterative implementation, with batched parallelization and a bit-reverse lookup table for roughly 10x better performance), IDFT, one-sided output, window functions (including double-precision support), MelWeightMatrix, and a naive STFT with one-sided support. Also implements constant initializers for tensors, Add attribute support, model save functionality, an opset picker, and a minimum opset version for the new experimental domain. The experimental API is gated behind an --ms_experimental build flag and added to the Windows AI NuGet package; the change includes ONNX Runtime tests, MatMul tests, and spectrogram/STFT tests (middle C and a three-tone signal), plus extensive cleanup and warning fixes. Co-authored-by: Sheil Kumar <sheilk@microsoft.com>

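One detail in the log above is replacing the recursive DFT with an iterative one driven by a bit-reverse lookup table. As background (an illustrative sketch, not the WinML code), the table for a radix-2 FFT of power-of-two length can be precomputed like this:

```python
def bit_reverse_indices(n: int) -> list[int]:
    """Bit-reversed index table for a length-n radix-2 FFT (n a power of two).

    An iterative FFT first permutes its input with this table, then runs
    log2(n) in-place butterfly passes, avoiding recursion entirely.
    """
    bits = n.bit_length() - 1           # log2(n)
    table = []
    for i in range(n):
        rev, x = 0, i
        for _ in range(bits):
            rev = (rev << 1) | (x & 1)  # shift the low bit of x into rev
            x >>= 1
        table.append(rev)
    return table

# For n = 8: index 1 (001) maps to 4 (100), index 3 (011) maps to 6 (110), etc.
```

Precomputing the table once and reusing it across transforms is what makes it a "lookup table" optimization: the permutation cost drops to a single gather per call.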
`3b1227c5ce` **SDL annotation fixes (#6448)**

Co-authored-by: Ori Levari <orlevari@microsoft.com>

`c7da194313` **remove winrt (#3899)**

Co-authored-by: Sheil Kumar <sheilk@microsoft.com>

`cf6a1c1715` **Fix Windows Inbox build failing on 1) building raw api tests and 2) referencing _winml namespace in onnxruntime.dll (#3872)**

- Add a build-inbox flag
- Remove the raw tests and wstring use for UTF filenames, then re-enable the raw tests
- Use ToWideString
- Create a new UTF-8 helper and update the string helper to UTF-8

Co-authored-by: Sheil Kumar <sheilk@microsoft.com>

`43b87de7d1` **Support wide char paths in CreateModel (#3835)**

- Support wide-char paths in CreateModel
- Use UTF-8 in the WinML adapter API

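The strategy in this commit, transcoding wide (UTF-16) paths to UTF-8 at the adapter boundary, is lossless for any Unicode path. A minimal round-trip sketch, where the helper names and the example path are illustrative rather than the actual adapter functions:

```python
def wide_to_utf8(path: str) -> bytes:
    """Encode a (conceptually wide/UTF-16) path as UTF-8 for a narrow-string API."""
    return path.encode("utf-8")

def utf8_to_wide(data: bytes) -> str:
    """Decode the UTF-8 bytes back to a wide string on the receiving side."""
    return data.decode("utf-8")

# A non-ASCII path survives the round trip unchanged.
path = "C:\\models\\モデル.onnx"
assert utf8_to_wide(wide_to_utf8(path)) == path
```

Because every Unicode scalar value has exactly one UTF-8 encoding, the narrow adapter API can carry any path the wide WinRT surface accepts, with no locale-dependent codepage involved.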
`c32cedc6c9` **Merge windowsai (winml layering) into master (#2956)**

* Initial Commit * Merged PR 3985217: add onecoreuap_apiset.lib in order to avoid linking against kernel32.lib etc (#2346) add onecoreuap_apiset.lib in order to avoid linking against kernel32.lib etc and violating our OS layering requirements. We linked against onecoreuap_apiset.lib in VB so we will continue doing this, but I am still unsure why not to link against onecore instead since that is where we ship. However, since Sheil is the owner of this code we will wait to discuss with him before changing anything. * Initial changes for layering * more snipping to get core into ort * update build instructions to include --build_shared_lib (#2358) * update build instructions to include --build_shared_lib * fix line breaks * Task 23998197: add winml_lib_core into onnnxruntime.dll (#2368) * Task 23998197: add winml_lib_core into onnnxruntime.dll * PR feedback build break on perf_test * return proper error when the model path isn't found (#2391) * LearningModelSession is cleaned up to use the adapter, and parts of b… (#2382) this is a big PR. we are going to move it up to layer_dev , which is still a L3 so we are still safe to do work there agile. we are going to move this into the L3 so that ryan can start doing intergration testing. we will pause for a full code review and integration test result prior to going into the L2. >>>> raw comments from previous commits >>> * LearningModelSession is cleaned up to use the adapter, and parts of binding are. * moved everything in the winmladapter made it all nano-com using, WRL to construct objects in the ORT side. base interfaces for everythign for winml to call cleaned up a bunch of winml to use the base interfaces. * more pieces * GetData across the abi. * renamed some namepsace cleaned up OrtValue cleaned up Tensor cleaned up custom ops. everything *but* learnignmodel should be clean * make sure it's building. winml.dll is still a monolith. * model moved over. everything builds clean. step ! 
* weak ref comment * Layer dev paulm (#2408) * model moved over. everything builds clean. step ! * weak ref comment * added a wrapper for RoGetActivationFactory to hook back into winml for creating winml objects. fixes model load. * Layer dev paulm (#2414) * model moved over. everything builds clean. step ! * weak ref comment * added a wrapper for RoGetActivationFactory to hook back into winml for creating winml objects. fixes model load. * User/xianz/win ml telemetry (#2410) * add option to enable winml telemetry * add option to enable winml telemetry * clean logs while developping * clean the log of GUID * compile onnxruntime_common with winml telemetry * use option for use_telemetry * rename option winml_use_telemetry to onnxruntime_use_telemetry * little change * fixed some lifetime management. fixed the debug build. squeezenet passes using winmlrunner for CPU and GPU * Layer dev paulm (#2423) * model moved over. everything builds clean. step ! * weak ref comment * added a wrapper for RoGetActivationFactory to hook back into winml for creating winml objects. fixes model load. * fixed some lifetime management. fixed the debug build. squeezenet passes using winmlrunner for CPU and GPU * PR feedback. * Layer dev paulm (#2424) * model moved over. everything builds clean. step ! * weak ref comment * added a wrapper for RoGetActivationFactory to hook back into winml for creating winml objects. fixes model load. * fixed some lifetime management. fixed the debug build. squeezenet passes using winmlrunner for CPU and GPU * PR feedback. * couple of fixes and coded getmutabledata() * Layer dev paulm (#2425) * model moved over. everything builds clean. step ! * weak ref comment * added a wrapper for RoGetActivationFactory to hook back into winml for creating winml objects. fixes model load. * fixed some lifetime management. fixed the debug build. squeezenet passes using winmlrunner for CPU and GPU * PR feedback. 
* couple of fixes and coded getmutabledata() * fixed 2 more heap corruptions * Layer dev paulm (#2426) * model moved over. everything builds clean. step ! * weak ref comment * added a wrapper for RoGetActivationFactory to hook back into winml for creating winml objects. fixes model load. * fixed some lifetime management. fixed the debug build. squeezenet passes using winmlrunner for CPU and GPU * PR feedback. * couple of fixes and coded getmutabledata() * fixed 2 more heap corruptions * Add opset and IR check when loading model (#2413) * Add opset and IR check. * Add test case for future opsets. https://github.com/microsoft/onnxruntime/issues/2371 * fixed map and sequence when passing stl types across the ABI . found a leak in nvidia driver, but skipped it. all winmlapitests pass now * Moved SessionOptions over to the abi * WinML CI (#2412) * Pass flags to build/test WinML in CI * Add initial CMake config for unit tests in WinML * Set winml_unittests standard to C++17 * Add WinML API tests and port them to googletest * Install WinML test collateral * Add LearningModelSessionAPITests ported to googletest * Fix WinML test files encoding * Add GPU tests * Add parameterized test, skip GPU tests * Enable precompiled header * Remove unused code and collateral * Remove brand images * Add dllload.cpp * Remove images not used in API tests * Add LICENSE.md to image collaterals * Add models with licenses * Remove FNS Candy tests * Add API test models * Add ModelInSubdirectory * Install collaterals post-build with copy_if_different, split common lib * fix warnings * Link to gtest_main * Register WinML TraceLogging provider on Onnxruntime.dll (#2455) * Register WinML TraceLogging provider on Onnxruntime.dll * Add ifdef to make sure trace logging provider has telemetry option when LAYERING_DONE * No need for ifdef for TraceLoggingOptionMicrosoftTelemetry * PR feedback * Move etw registration into lotus environment constructor and deresgister in lotus environment destructor * 
Brianma/cpuwinml (#2466) * allow building winml cpu without dml. * Brianma/breaks (#2469) * fix some more breaks * learning model doesn't need lotusEnvironment and CPU shouldn't include dmlEP headers * move dml checks out of winml and into the adapter * better error handling * Brianma/fi (#2470) * learning model doesn't need lotusEnvironment and CPU shouldn't include dmlEP headers * User/xianz/win ml telemetry (#2410) * add option to enable winml telemetry * add option to enable winml telemetry * clean logs while developping * clean the log of GUID * compile onnxruntime_common with winml telemetry * use option for use_telemetry * rename option winml_use_telemetry to onnxruntime_use_telemetry * little change * Add opset and IR check when loading model (#2413) * Add opset and IR check. * Add test case for future opsets. https://github.com/microsoft/onnxruntime/issues/2371 * WinML CI (#2412) * Pass flags to build/test WinML in CI * Add initial CMake config for unit tests in WinML * Set winml_unittests standard to C++17 * Add WinML API tests and port them to googletest * Install WinML test collateral * Add LearningModelSessionAPITests ported to googletest * Fix WinML test files encoding * Add GPU tests * Add parameterized test, skip GPU tests * Enable precompiled header * Remove unused code and collateral * Remove brand images * Add dllload.cpp * Remove images not used in API tests * Add LICENSE.md to image collaterals * Add models with licenses * Remove FNS Candy tests * Add API test models * Add ModelInSubdirectory * Install collaterals post-build with copy_if_different, split common lib * fix warnings * Link to gtest_main * fix bad merge * Checking in a staging checkpoint point so that Ryan can work with me in parrallel * build break. 
* Brianma/testfails (#2473) * add missing ir version to dictvectorizer-string.onnx * add missing ir version to relu.onnx * add missing ir version to zipmap*onnx * add IR version to manually generated models * remove an unnecessary ifdef dml * Brianma/windowsai fi (#2475) * update dockerfiles/README (#2336) * Make elementwise op run 4 items per thread (#2335) Description: Describe your changes. Make elementwise op run 4 items per thread unroll for loop to leverage ILP remove unnessary N==0 check inside elementwise GPU kernel Motivation and Context Why is this change required? What problem does it solve? It can improve the performance of GPU elementwise ops. ~2% performance gain on popular NLP bert model. If it fixes an open issue, please link to the issue here. * Add CUDA GatherElements kernel (#2310) * Updates * Update test * Update * Updates * nits * PR feedback * Update * Update * PR feedback * PR comments * Update * Fix build * Fix build * Nits * Fix * Layer Normalization Fusion (#2319) basic layer normalization transform * Add FastGelu Cuda Op for Gelu and Add bias fusion (#2293) * Add FastGelu cuda op * Add AddBiasGelu for experiment * Revert "Add AddBiasGelu for experiment" This reverts commit 5c1ee019858c657e6bb75887265cb85675626e5b. * Add bias * Add unit tests * update comment * update script * fix build error * update coding style * update for CR feedback Enable half2 optimization only when cuda arch >= 7.0 * move _Tanh to common.cuh * implement CPU contrib OP Attention (#2333) * Remove unused initializer from GraphProto as well as name_to_initial_tensor_ in CleanUnusedInitializers. (#2320) * Remove unused initializer from GraphProto as well as name_to_initial_tensor_ in CleanupUnusedInitializers. This means initializers that have been replaced during graph optimizations are not left in the GraphProto when we save an optimized model. * Handle edge case where a model has an unused initializer with matching graph input by also removing the graph input. 
* Use non-const iterators in std::find_if calls to make centos build happy. * Nuget pipeline changes (#2305) 1. refactor the pipeline, remove some duplicated code 2. Move Windows_py_GPU_Wheels job to Win-GPU-CUDA10. We'll deprecated the "Win-GPU" pool 3. Delete cpu-nocontribops-esrp-pipeline.yml and cpu-nocontribops-pipeline.yml 4. In Linux nuget jobs, run "make install" before creating the package. So that extra RPAH info will be removed * Cuda Reverse Sequence Op, maping types of same size using same template function. (#2281) * Set ElementType to String type of node metadata, instead of byte[] (#2348) * Set ElementType to String type of node metadata, instead of byte[] * Fix spacing * Introduce PrimitiveType into a Type System along with an integer constant (#2307) Improve perf by avoiding GetType<T>() calls. Introduce MLTypeCallDispatcher to switch on Input Type. Add Tensor IsType<T>() fast method. * Fix/test dim value of 0 handling in a couple of places (#2337) * Update the CUDA Where implementation broadcasting logic to handle a dim with value of 0. Add unit test Also add unit test for unary op with dim value of 0 * Exclude ngraph from Where test with 0 dim. * Openvino EP R3.1 onnxrt server (#2357) * onnxrt server with OVEP * onnxrt server with OVEP * Update Dockerfile.server.openvino * onnxrt server OVEP fix reviews * onnxrt server OVEP fix reviews * Implement cuda nonzero op. (#2056) Implement cuda nonzero op. * Direct use python numpy array's memory if already contiguous. (#2355) * Direct use python numpy array's memory if already contiguous. This could greatly improve performance for session with large input, like big image 1920x1080 fastrcnn, 30~40% speed up could be achieved. * Add test case enforce contiguous/non-contiguos numpy array as inputs. * Add helper to create output to minimize binary size. (#2365) Add ConstEigenTensorMap typedef so we don't unnecessarily const_cast the const input Tensor. 
* fix builds enabling onnxruntime_DEBUG_NODE_INPUTS_OUTPUTS (#2369) * fix builds enabling onnxruntime_DEBUG_NODE_INPUTS_OUTPUTS * update * Add Tracelogging for profiling (#1639) Enabled only if onnxruntime_ENABLE_INSTRUMENT is ON * test bidaf with nuphar for avx target (#2370) increase nuphar test coverage a bit * Fix a bug in TLS refcount that may destabilized CUDA CI (#2374) * update output size calculation for resize (#2366) * change how output size is calculated for resize op * add tests for ver 10 resize * Extend OneHot CPU kernel to support more types (#2311) * Extend OneHot CPU kernel to support input int64_t, depth int32_t, output float * Skip BERT before the test data fix is picked up * Fix bug with Slice. Need to pass in flattened input dimensions so the initial offset into the input is calculated correctly. (#2372) * Add opset 11 version of Split to CUDA ops (#2376) Organize the CUDA ops definitions so all the opset 10 and 11 parts are together (same setup used for CPU ops) * Layer Norm Fusion Fix (#2379) * layer norm fusion fix * Add input shape check in code and unit tests * Fuse Add + Gelu (#2360) Implement the transformer to fuse add + gelu Implement the accurate kernel * Skip layer norm transform (#2350) * skip layer normalization transformer * Another try to stabilize CUDA CI (#2383) The root cause seems to be failure in CUDA dealloc when tear down. cudaFree return code was ignored before, so should the debug check. * fix BUILD.md typo (#2375) build.py: error: argument --config: invalid choice: 'RelWithDebugInfo' (choose from 'Debug', 'MinSizeRel', 'Release', 'RelWithDebInfo') * Fixed compilation with ngraph (#2388) * Fix reuse logic in allocation planner. (#2393) * Fix reuse logic in allocation planner. * PR comments * Add helpful comments * Don't allow reuse across string tensors. 
* [NupharEP] Multiple optimizations (#2380) Fuse transpose into MatMul Implement Pow and constant scalar simplification Vectorize ReduceMean Improve symbolic shape inference Minor updates for better debugging in fused function name * Avoid using the default logger in the graph lib and optimizers (#2361) 1. Use the session logger if it is available. 2. Don't disable warning 4100 globally. We should fix the warnings instead of disabling it. * Change CUDA implementation of Transpose to support all fixed size tensor types (#2387) * Change CUDA implementation of Transpose to not use a typed kernel so we can support more types with minimum binary size. Add support for 8, 16, 32 and 64 bit types. Add unit tests. Add method so the implementation can be called directly (will be used by CUDA Scan very soon). * Disable TensorRT for MLFloat16 and int8 unit tests. * Address PR comment and add support for calling cublas implementation if type is mlfloat16. * Add opset 11 versions of the existing CUDA operators that had negative axis support explicitly added. (#2398) * Add opset 11 versions of the existing CUDA operators that had negative axis support explicitly added. * [NupharEP] force some low/zero cost ops to be inlined (#2409) * fix cross compile bug (#2415) * Minor optimization: if a node has already been placed, there's no need to find a kernel for it. (#2417) * Add Reshape Fusion (#2395) * Add reshape fusion * Add some comments * update comments * update comment format * update according to feedback * update for recent logger change * fix build error * (1) Support both input and output edges in find path in graphutils (2) Add a test case of only one constant initializer of Concat input. (3) Refactor ReshapeFusion class to allow add more subgraph fusion in the future. * fix error * (1) loose constraint on initializer: non constant is allowed for reshape fusion. (2) Change versions type to vector. (3) Add logging. 
(4) Return false when multiple output edges matched in FindPath. Add comments. * only allow one direction (input or output) in FindPath * [NupharEP] Update notebook and docker image (#2416) Add BERT squad in Nuphar tutorial Enhance speed comparsion readability * Fix the issue in matmul_add_fusion (#2407) Fix the issue in matmul_add_fusion If Muatmul + Add has shape [K] * [K, N], reset it to [1, K] * [K, N] will make the output shape to [1, N] will also requires a reshape on the output. Fix: just remove the shape reset to not fuse it. Add a negative test case for matmul+add fusion * feat(treeregressor): Update TreeEnsembleRegressor for type support (#2389) Updates the `TreeEnsembleRegressor` to allow for `double`, `float`, `int64`, and `int32` inputs to match the upstream specification. Signed-off-by: Nick Groszewski <nicholas.groszewski@capitalone.com> * onnxrt server documentation update (#2396) * Added support for Pad-2 operator in OpenVINO-EP (#2405) * Add CUDA If operator. (#2377) * Add CUDA If operator. Uses CPU operator for implementation. By adding a CUDA version the inputs/outputs (with the exception of the 'cond' input) stay on GPU, and no other logic is required to avoid a copy to CPU across the control flow node. * Improved documentation for onnxruntime::utils::SwapByteOrderCopy(), added precondition check. * Fix the type constraints on CUDA If operator to exclude strings. (#2431) * add Im2col<uint8_t> (#2438) * Adjust codegen vectorization width from target (#2439) * Adjust codegen vectorization width from target * Add CUDA Scan operator. (#2403) * Add Scan CUDA op. Uses CPU implementation for logic. Added some device specific functors for handling when data needs to be manipulated on a different device. Added ability to override the materialization logic in the OrtValue slicer so DML can plugin their handling. 
* Fix Windows GPU C API packaging pipeline failure (#2440) Fix Windows GPU C API packaging pipeline failure (#2440) * Correctly handle implicit inputs for fused nodes (#2390) * Correctly handle implicit inputs for fused nodes Previously, nuphar's partitioning function didn't include node's implicit inputs into the inputs list of MetaDef, and hence a crash was triggered in the onnx graph checker. This commit fixed the issue. Furthermore, it also fixed a related issue where we didn't add implicit inputs into graph_inputs_excluding_initializers_ in Graph::SetGraphInputsOutputs. the issue was that graph_inputs_including_initializers_ populated by SetInputs (e.g. called by FunctionImpl::FunctionImpl) may contain implicit inputs which were not of any node's initializers in the graph. Because they were not part of any initializers, these implicit inputs couldn't be visited by going through all nodes' inputs. Consequently, they would *not* be added into graph_inputs_excluding_initializers_. We fixed the issue by first copying the populated graph_inputs_including_initializers_ into graph_inputs_excluding_initalizers_, which then had both initializers and non-initializers as its initial content. Later, we erase initializers from the list. In this way, we can ensure all implicit inputs to remain in graph_inputs_excluding_initializers_. * refined comments and fixed duplicates Address CR by revisiting comments in terms of implicit inputs Also fixed an issue by skipping duplicates while copying inputs from graph_inputs_including_initializers_. * address CR explain why we need to collect nodes' implicit inputs * don't rely on pointer values for iterating std::set Previously, openvino relied on iterating a set of NodeArg pointers to construct inputs and outputs for a fused graph. It could cause non-determinism. The reason was that although iterating std::set by itself is stable, pointer values of NodeArgs may vary. 
Consequently, we could end up visiting the set's elements in different orders for different runs for the same test, which resulted in constructing inputs (and outputs) with different orders to the fused graph. For example, for the same test, we may have inputs [A, B] in some runs but inputs[B, A] in others. Let's use std::string as the key type to avoid such nondeterminism. This commit also added implicit inputs into meta->inputs while returning the capability from the openvino provider. * Fixed another latent issue in openvino's GetCapability function The issue was that we couldn't simply erase fused_inputs and fused_outputs while iterating the nodes. For example, an output NodeArg may have multiple uses, and it's wrong if we erase it from fused_outputs when we encounter only one of its uses as input. * Remove DeviceAllocatorRegistry class (#2451) Remove DeviceAllocatorRegistry class * CSharp api and test for loading custom op shared library (#2420) - Added C-API test for loading custom op shared lib. - Made some changes in C++ api header and C-api implementation to get it working. - Added C# API and corresponding test for loading custom op shared library. * Parallel Gelu with ParallelFor (#2399) Parallel Gelu to get better performance for Gelu * Clean up build.py (#2446) * Pull the latest image before running docker build * Fuse SkipLayerNorm with Bias (#2453) Fuse SkipLayerNorm with Bias * Allow more than one invocation of CreateEnv in the same process. (#2467) * Allow more than one invocation of CreateEnv in the same process. 
* Fix centos build * Symbolic shape inference improvements: (#2460) * Symbolic shape inference improvements: - add a mode to guess unknown ops' output rank - add support for GatherND - add support for If - fix a bug in get_int_values when then tensor rank > 1D, by treating it as no sympy data - add symbol to literal merge when ONNX silently merges dims - fix a bug in Concat when input dim is 0 - fix a bug in ConstantOfShape that computed dim is not updated - add support for dynamic shape in ConstantOfShape - fix a bug in Loop output shape that loop iterator dim is not inserted at dim 0 - add support for dynamic padding in Pad - add support for dynamic shape in Reshape - add support for Resize with opset > 10, by treating output dims as dynamic - fix a bug in Slice when starts/ends are dynamic - restrict input model to opset 7 and above - make output model optional to avoid disk write when testing Run model tests for symbolic shape inference Reduce 2GB docker image size of nuphar * add additional test data set for nuget pipeline (#2448) * add SAS token to download internal test data for nuget pipeline * update azure endpoint * fix keyvault download step * fix variable declaration for secret group * fix indentation * fix yaml syntax for variables * fix setting secrets for script * fix env synctax * Fix macos pipeline * attempt to add secrets to windows download data * fix mac and win data download * fix windows data download * update test data set url and location * Revert "Brianma/windowsai fi (#2475)" This reverts commit |