* Eliminate redundant subexpressions
Apply local value numbering to merge graph nodes that will always
evaluate to the same value.
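Illustrative sketch only (not the actual pass): local value numbering assigns the same number to nodes whose op type and input value ids match, so later duplicates can be redirected to the first occurrence. A real pass must also skip non-deterministic nodes, such as the Random* ops noted below.
```cpp
#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical graph node: op type plus the value ids of its inputs.
struct Node {
  std::string op_type;
  std::vector<int> input_value_ids;
};

// Build a key that is identical for nodes guaranteed to compute the same value.
std::string MakeValueKey(const Node& n) {
  std::string key = n.op_type;
  for (int id : n.input_value_ids) key += "," + std::to_string(id);
  return key;
}

// Local value numbering: the first node seen with a given key keeps its value
// number; later nodes with the same key are redirected to that number.
std::vector<int> NumberValues(const std::vector<Node>& nodes) {
  std::unordered_map<std::string, int> seen;
  std::vector<int> value_number(nodes.size());
  for (size_t i = 0; i < nodes.size(); ++i) {
    auto it = seen.emplace(MakeValueKey(nodes[i]), static_cast<int>(i)).first;
    value_number[i] = it->second;  // equal key => reuse the earlier node's value
  }
  return value_number;
}
```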
* Rename cpp->cc
* Handle optional arguments
* Add test models
* Add more tests with optional arguments
* Fix processing of subgraphs
Also, be resilient to a possible mixture of optional and variadic
parameters.
* Fix random operators
* Address PR comments
* Minor changes and a test
* Move CSE before constant folding
* Random* operators are always non-deterministic
Even when a seed is provided.
* Fix a CSE test
* Reuse the list of non-deterministic operators with constant folding pass
* Address PR comments
* Fix formatting
* Address PR comment
* Minor cleanup / comments
* Fix build failure in Linux
* Reuse existing optimizer/utils file.
Also, check for graph outputs when removing a node.
* Add a test
* Fix compiler warnings
* Fix build in older compilers
* More compatibility with old STL versions
This commit means that when the thread pool is configured to spin, the main thread spins at the barrier at the end of parallel sections, in addition to the workers spinning while waiting for work.
The change updates Barrier.h to take an additional boolean to select spin/block, and passes this in based on the thread pool configuration.
It adds an additional test case for barriers, although no problems were identified by the test case.
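A rough sketch of the shape of such a barrier, with a constructor flag selecting spin vs. block (illustrative names, not the actual Barrier.h interface):
```cpp
#include <atomic>
#include <condition_variable>
#include <mutex>
#include <thread>

class SpinOrBlockBarrier {
 public:
  SpinOrBlockBarrier(unsigned count, bool spin) : pending_(count), spin_(spin) {}

  // Called by each participant when its work is done.
  void Notify() {
    if (pending_.fetch_sub(1, std::memory_order_acq_rel) == 1) {
      std::lock_guard<std::mutex> lk(mu_);
      cv_.notify_all();
    }
  }

  // Called by the main thread at the end of a parallel section.
  void Wait() {
    if (spin_) {
      while (pending_.load(std::memory_order_acquire) != 0)
        std::this_thread::yield();  // spin instead of paying the OS wake-up cost
      return;
    }
    std::unique_lock<std::mutex> lk(mu_);
    cv_.wait(lk, [this] { return pending_.load(std::memory_order_acquire) == 0; });
  }

 private:
  std::atomic<unsigned> pending_;
  const bool spin_;
  std::mutex mu_;
  std::condition_variable cv_;
};
```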
* Gelu Activation Recompute Draft
* Prototype for localized recompute
* Introduce localized_recompute rewriter
* Command line args for enabling recompute
* Add logger to Gradient Graph Builder
* use const when possible
Update TransposeMatMul to support scaling of the matrix product by a constant scalar value (analogous to the GEMM alpha parameter). Rename TransposeMatMul to TransposeScaleMatMul.
Fuse MatMul with surrounding Mul/Div with constant scalar into TransposeScaleMatMul.
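In effect the fused op computes Y = alpha * op(A) * op(B), where op is an optional transpose and alpha is the constant folded in from the surrounding Mul (or 1/c for a Div by constant c). A naive reference sketch, illustrative only:
```cpp
// Reference semantics of the fused op: Y = alpha * op(A) * op(B).
// Row-major M x K times K x N; any transposes are assumed to have been
// applied already when computing m, k, n.
void ScaledMatMulRef(const float* a, const float* b, float* y,
                     int m, int k, int n, float alpha) {
  for (int i = 0; i < m; ++i)
    for (int j = 0; j < n; ++j) {
      float acc = 0.f;
      for (int p = 0; p < k; ++p) acc += a[i * k + p] * b[p * n + j];
      y[i * n + j] = alpha * acc;  // Div by constant c folds in as alpha = 1/c
    }
}
```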
While investigating an unrelated issue, I noticed that the thread pool may drop tasks when a burst of 1024+ tasks is submitted by a thread from inside the pool. Today, in general, we execute work synchronously in this case. However, there is a bug where work submitted by a thread already inside the pool will be discarded instead of executed. Currently the only scenario where I can see this occurring is when the parallel executor is used with a model in which such a large number of nodes become eligible to run all at once. This PR fixes the underlying issue and adds a test case for burst-submission of work.
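The fix, in spirit (hypothetical names; the real code is in the Eigen-derived thread pool): an overflowing task is executed synchronously on the submitting thread instead of being dropped:
```cpp
#include <functional>
#include <mutex>
#include <queue>

class BoundedTaskQueue {
 public:
  explicit BoundedTaskQueue(size_t capacity) : capacity_(capacity) {}
  bool TryPush(std::function<void()> t) {
    std::lock_guard<std::mutex> lk(mu_);
    if (q_.size() >= capacity_) return false;  // full: burst overflow
    q_.push(std::move(t));
    return true;
  }
 private:
  size_t capacity_;
  std::mutex mu_;
  std::queue<std::function<void()>> q_;
};

// The fix, in spirit: never drop an overflowing task; run it synchronously
// on the submitting thread, whether or not that thread is a pool worker.
void Schedule(BoundedTaskQueue& queue, std::function<void()> task) {
  if (!queue.TryPush(task)) task();
}
```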
* Add ability to retrieve inferred shapes when executing a kernel.
This ability helps Recv know its output shapes without doing
actual communication. Of course, if the output shapes cannot be
inferred, Recv still needs to do communication to get the shapes from
Send.
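A sketch of the static check implied here, using the ONNX TensorShapeProto (the -1-means-missing convention follows the "Check dim_value" commit below):
```cpp
#include "onnx/onnx_pb.h"  // ONNX_NAMESPACE::TensorShapeProto

// Returns true iff every dimension is statically known, in which case Recv
// can allocate its outputs without first exchanging shapes with Send.
// A dim_value of -1 (or a symbolic dim_param) is treated as "missing".
bool ShapeFullyInferred(const ONNX_NAMESPACE::TensorShapeProto& shape) {
  for (const auto& dim : shape.dim()) {
    if (!dim.has_dim_value() || dim.dim_value() < 0) return false;
  }
  return true;
}
```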
* Avoid communicating shape information when it can be inferred statically
* Replace unordered_map with thread-safe wrapper.
We don't want race conditions and undefined behavior
when using the parallel executor.
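A minimal sketch of the kind of wrapper meant here, assuming a coarse single-mutex design (this wrapper is removed again in a later commit in favor of locking at the call sites):
```cpp
#include <mutex>
#include <unordered_map>

// Minimal thread-safe wrapper: every access takes the lock. Coarse-grained,
// but enough to avoid data races from the parallel executor.
template <typename K, typename V>
class ThreadSafeMap {
 public:
  void Set(const K& key, V value) {
    std::lock_guard<std::mutex> lk(mu_);
    map_[key] = std::move(value);
  }
  bool Get(const K& key, V& out) const {
    std::lock_guard<std::mutex> lk(mu_);
    auto it = map_.find(key);
    if (it == map_.end()) return false;
    out = it->second;
    return true;
  }
 private:
  mutable std::mutex mu_;
  std::unordered_map<K, V> map_;
};
```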
* Remove cout
* Add missing file
* Address comments
* Check dim_value. -1 means missing
* lock properly
* Address comments (remove thread-safe map)
* Remove poc header
* Replace Stream with DeferredReleaseCPUPtr
* Add python API for specifying CUDA device id
* Modify to provide a session-based Python API for specifying the
device id
* When the header file pybind11/stl.h is included, conversions between C++
containers and Python list, set, and dict data structures are enabled
automatically.
https://pybind11.readthedocs.io/en/stable/advanced/cast/stl.html#
Therefore, refactor the code to better leverage this.
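For example (illustrative binding, not the actual onnxruntime one), with pybind11/stl.h included a function can accept std::vector and std::unordered_map directly, and pybind11 converts Python lists and dicts automatically:
```cpp
#include <pybind11/pybind11.h>
#include <pybind11/stl.h>  // enables automatic list/dict <-> vector/map conversion

#include <string>
#include <unordered_map>
#include <vector>

namespace py = pybind11;

// Hypothetical binding: each provider may carry a dict of string options.
void SetProviders(const std::vector<std::string>& providers,
                  const std::vector<std::unordered_map<std::string, std::string>>& options) {
  // ... forward to the session's provider registration (elided) ...
}

PYBIND11_MODULE(example, m) {
  m.def("set_providers", &SetProviders);
  // Python side: example.set_providers(["CUDAExecutionProvider"],
  //                                    [{"device_id": "1"}])
}
```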
* Make struct CudaDeviceOptions the default CUDA device options
* Implement sess.set_providers(list_of_providers, list_of_provider_option_dicts)
But still stay consistent with the existing sess.set_providers(list_of_providers)
* Add default settings for CUDA provider options
* Add support for setting the CUDA provider's cuda_mem_limit and arena_extend_strategy.
Also resolved the merge conflict on session.py
* Use Python ctypes to call the CUDA library to support the Python unit tests
* Refine the code with reviewer's suggestions
* Add the capability of getting execution provider's configuration
- Once we introduced the capability to set an execution provider's
configuration, it makes sense to add the capability to get it as well.
* Modify the code with reviewer's suggestions.
* Use stoull() or stoul() depending on whether the architecture is 32- or 64-bit.
* Rewrite the testcases for testing setting CUDA device id
Note: We need to make sure each ORT process runs on only one CUDA device
at a time.
* Make sure the old session object is destroyed by the Python GC before a new
session object is created
* Move testcases to original onnxruntime_test_python.py
* Fix bugs to pass CI build
* Make it pass CI build (cont.)
* Make it pass CI build (cont.)
* support bert partition with shared initializer
* address feedback
* address feedback
* address feedback
* add more test
* remove bert-tiny model
* address feedback
* address function comment
* move CreateNodeArg to graph_utils
* rename function name
* rename function name
* fix windows build
* fix windows type conversion warning
* add function comment
Create N-1 threads in a thread pool when configured with intra-op parallelism of N. This ensures we have N active threads, given that the main thread also runs work. To avoid ambiguity on the value returned, rename ThreadPool::NumThreads method to ThreadPool::DegreeOfParallelism, and make corresponding updates in MLAS and operators.
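Roughly the resulting relationship (hypothetical member names, not the real class):
```cpp
class ThreadPool {  // sketch, not the real onnxruntime class
 public:
  explicit ThreadPool(int degree_of_parallelism)
      : num_workers_(degree_of_parallelism - 1) {
    // spawn num_workers_ threads; the caller's thread is the Nth participant
  }
  // Renamed from NumThreads(): callers want the usable parallelism, which
  // includes the main thread running work alongside the workers.
  int DegreeOfParallelism() const { return num_workers_ + 1; }
 private:
  int num_workers_;
};
```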
For the special case where all variadic inputs of a kernel are the same shape (i.e. no broadcasting is required) and there are few enough of them, we perform the entire computation in a single kernel. The general implementation (which was previously used for this special case) handles broadcasting by repeatedly invoking a binary kernel on successive inputs.
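The two paths, sketched for an element-wise Sum over variadic inputs (illustrative, not the actual kernel):
```cpp
#include <cstddef>
#include <vector>

// Fast path: all variadic inputs share one shape, so each output element is
// the sum of the corresponding element of every input, in a single pass.
void SumSameShape(const std::vector<const float*>& inputs, float* out, size_t n) {
  for (size_t i = 0; i < n; ++i) {
    float acc = 0.f;
    for (const float* in : inputs) acc += in[i];
    out[i] = acc;
  }
}

// General path (illustrative): repeatedly apply a binary, broadcasting add,
// out = ((in0 + in1) + in2) + ..., invoking one kernel per additional input.
```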
* add modern standards to function arguments
* code cleanup
* fix code formatting
* add element access convenience function
* change template type name to match rest of code
* remove new At() convenience function
* add better documentation message
* Update function body initialization
* minor fix
* changes per review comments
* minor fix
* format fix
* add function initialization in mixed precision transformer
* more updates
* more fixes
* Move allocators to SessionState so they're decoupled from ExecutionProviders
- when looking up an allocator it's based on OrtMemoryInfo, not the EP, so SessionState is a more natural place for that information to be stored
- add device based lookup
- simplifies logic for copying feeds/fetches across devices
Cleanup SessionState and SessionStateInitializer
- provide more things to SessionState at construction time so we don't construct an instance and immediately call a bunch of setters afterwards
- simplify SessionStateInitializer
- reduced down to FinalizeSessionState method
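Roughly the lookup shape this enables (stand-in types, hypothetical names):
```cpp
#include <map>
#include <memory>
#include <string>
#include <tuple>

// Minimal stand-ins for the real types.
struct MemoryInfo {
  std::string device_name;  // e.g. "Cpu", "Cuda"
  int device_id = 0;
  bool operator<(const MemoryInfo& o) const {
    return std::tie(device_name, device_id) < std::tie(o.device_name, o.device_id);
  }
};
struct Allocator { /* Alloc/Free elided */ };

// Allocators live on the session state, keyed by memory info, so lookup no
// longer goes through the execution provider that happened to create them.
class SessionStateSketch {
 public:
  void RegisterAllocator(const MemoryInfo& info, std::shared_ptr<Allocator> a) {
    allocators_[info] = std::move(a);
  }
  std::shared_ptr<Allocator> GetAllocator(const MemoryInfo& info) const {
    auto it = allocators_.find(info);
    return it == allocators_.end() ? nullptr : it->second;
  }
 private:
  std::map<MemoryInfo, std::shared_ptr<Allocator>> allocators_;
};
```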
As a zero-cost wrapper around the C API, the current state of the C++ API is still pretty low-level and requires programmers to use C-style idioms to interact with ONNX Runtime.
- Move thread hint vectors from thread-local struct
- Add static_assert that the per-thread state in the thread pool is trivially destructible (sketched after this list)
- Rename "thread_data" to "worker_data" (only allocated for workers in the pool, not threads calling into the pool)
Updates the thread pool implementation to make work distribution over the Eigen thread pool more closely resemble techniques used in OpenMP. In particular:
(1) A thread entering a parallel loop works on the iterations itself, rather than requiring a thread switch to/from a thread in the pool, if called from outside the thread pool.
(2) To support this, work items pushed to the thread pool run a loop to claim iterations from a shared counter via atomic fetch-and-add, as opposed to having work items themselves represent individual batches of iterations. This means that any thread working on the loop can execute any batch of iterations, including having the main thread run through all of the batches itself if the loop turns out to be short-running (see the sketch after this list).
(3) As with OpenMP active scheduling, the worker loop spins waiting for work prior to blocking. This avoids OS blocking / wake-up paths in workloads with series of short-running parallel sections.
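A distilled sketch of point (2): every participating thread, including the caller, claims fixed-size batches from a shared atomic counter until the range is exhausted:
```cpp
#include <algorithm>
#include <atomic>
#include <cstdint>
#include <functional>

// Any thread (worker or the caller itself) runs this loop; whoever reaches
// the counter first claims the next batch, so a short-running loop can be
// finished entirely by the main thread without a thread switch.
void RunLoop(std::atomic<uint64_t>& next, uint64_t end, uint64_t batch,
             const std::function<void(uint64_t, uint64_t)>& fn) {
  for (;;) {
    uint64_t start = next.fetch_add(batch, std::memory_order_relaxed);
    if (start >= end) return;  // all iterations claimed
    fn(start, std::min(start + batch, end));
  }
}
```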
* Added GetAvailableProviders to the C API (usage sketched after this commit list)
* Fix API version and Windows build error
* Changed function name
* Changed ORT_API_VERSION to 4
* Moved all_providers array to constants.h
* Move check for providers to constants.h
* Changed name of array to avoid warning
* Address review comment
* Added unit test
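Typical usage, assuming the signatures added here (check onnxruntime_c_api.h for your version):
```cpp
#include <stdio.h>
#include "onnxruntime_c_api.h"

int main() {
  const OrtApi* api = OrtGetApiBase()->GetApi(ORT_API_VERSION);
  char** providers = NULL;
  int count = 0;
  // Returns NULL (no OrtStatus) on success.
  if (api->GetAvailableProviders(&providers, &count) == NULL) {
    for (int i = 0; i < count; ++i)
      printf("%s\n", providers[i]);  // e.g. "CPUExecutionProvider"
    api->ReleaseAvailableProviders(providers, count);
  }
  return 0;
}
```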
- Update IAllocator setup to move the OrtMemoryInfo to the base class instead of requiring derived classes to have that as a member and override a virtual method to return it.
- Cleanup CreateAllocator setup to take an argument as to whether to wrap the device allocator in an arena allocator. The choice to do that isn't a property of the underlying device allocator.
- Minor cleanups in the various EPs to adjust to the change to IAllocator and CreateAllocator, and to use the create_arena flag consistently when available.
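The shape of the change, sketched with stand-in types (not the real IAllocator interface):
```cpp
#include <cstddef>
#include <memory>
#include <string>

struct MemoryInfo { std::string name; int id = 0; };

// The base class now owns the memory info; no virtual Info() to override.
class AllocatorBase {
 public:
  explicit AllocatorBase(MemoryInfo info) : info_(std::move(info)) {}
  virtual ~AllocatorBase() = default;
  const MemoryInfo& Info() const { return info_; }
  virtual void* Alloc(size_t size) = 0;
  virtual void Free(void* p) = 0;
 private:
  MemoryInfo info_;
};

// Whether to wrap in an arena is decided by the caller at creation time,
// not by the device allocator itself.
std::shared_ptr<AllocatorBase> CreateAllocator(
    std::shared_ptr<AllocatorBase> device_allocator, bool create_arena) {
  if (create_arena) {
    // return std::make_shared<ArenaAllocator>(std::move(device_allocator));
    // (arena wrapper elided in this sketch)
  }
  return device_allocator;
}
```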
* Enable static memory planning for pipeline.
1. We fix a bug when resolving symbolic shape for scalars.
2. We pass the original inputs to all pipeline stages so that
the symbolic shapes can be resolved.
* Further Improvements
1. Address comments.
2. Further reduce activation size by ~50% when pipeline is on.
This is done by removing all but one gradient tensor from the last
RecordEvent in the backward pass.
* Address a comment
* Fix Windows build
* Fixes from investigating issue running BERT-Squad model with larger batch sizes. When the batch size gets large enough the initial run will be successful (no memory pattern in use) but the second will fail to allocate the memory pattern block.
The cause of this failure is that we still have the smaller blocks from the first run allocated, as BFCArena has no logic to free those. This essentially results in 2x the memory being required to run the model.
There was inconsistency in BFCArena::Extend which on one path threw an exception if it couldn't do the allocation, and on another just returned false (resulting in Alloc returning a nullptr). Make the behavior consistent by always throwing if BFCArena fails to find a buffer to return. There are a huge number of places in the code where we assume Alloc returns a valid pointer so throwing will result in more correct behavior as a whole. It's also consistent with what happens when CUDA or the standard library fails to allocate memory.
Next, update ExecutionFrame to check for this failure and not insert a memory block entry if it happens. With the existing code if BFCArena Alloc returned a nullptr we happily inserted that in the blocks, delaying detection of the failure to when we attempted to use the block in AllocateMLValueTensorSelfOwnBufferHelper.
Finally, update AllocateMLValueTensorSelfOwnBufferHelper to expect that a location may not have a block. A log message is emitted when the block allocation fails, so additional messages on each individual allocation that would have used the block are unnecessary; execution falls through to the default behavior of doing a normal allocation.
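The consistent-failure contract, in miniature (sketch; the real code uses ORT's exception macros):
```cpp
#include <cstddef>
#include <cstdlib>
#include <stdexcept>
#include <string>

// Sketch of the consistent contract: Alloc either returns a valid pointer
// or throws, mirroring what CUDA and the standard library do, instead of
// one path throwing and another returning nullptr.
void* ArenaAlloc(size_t size) {
  void* p = std::malloc(size);  // stand-in for BFCArena's chunk search/Extend
  if (p == nullptr)
    throw std::runtime_error("Failed to allocate requested buffer of size " +
                             std::to_string(size));
  return p;
}
```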
* Add ArmNN Execution Provider
Add a new execution provider targeting Arm architecture based on ArmNN.
Validated on NXP i.MX8QM CPU with ResNet50, MobileNetv2 and VGG models.
reviewed-by: mike.caraman@nxp.com
* Minor fixes
- renamed onnxruntime_ARMNN_RELU_USECPU to onnxruntime_ARMNN_RELU_USE_CPU
- fixed acl typo
* remove extra includes. added exception for ArmNN in test
* fix indentation
* Separated the activation implementation from the CPU one and fixed the build breakage caused by the #endif
Co-authored-by: Andrei-Alexandru <andrei-alexandru.avram@nxp.com>
* online partition
* fix the case where multiple consumer nodes are in the cut info
* fix windows build
* address feedback
* adding test
* feedback
* address feedback
* add parser for cut edge
* windows build
* Add amd migraphx execution provider to onnx runtime
* rename MiGraphX to MIGraphX
* remove unnecessary changes in migraphx_execution_provider.cc
* add migraphx EP to tests
* add input requirements of the BatchNorm operator
* add to support an onnx operator PRelu
* update migraphx dockerfile and remove one unused line
* sync submodules with master branch
* fixed a small bug
* fix various bugs to run msft real models correctly
* some code cleanup
* fix python file format
* fixed a code style issue
* add default provider for migraphx execution provider
Co-authored-by: Shucai Xiao <Shucai.Xiao@amd.com>
* Fold Shape node in constant folding.
* bugfix
* Fix test failure.
* Bugfix for C++ frontend.
* Bugfix for C++ frontend.
Co-authored-by: Vincent Wang <weicwang@microsoft.com>
* add build inbox flag
* remove raw tests and wstring for utf filenames
* enable raw tests
* use ToWideString
* create new utf8 helper
* update string helper to utf8
Co-authored-by: Sheil Kumar <sheilk@microsoft.com>
1. Parallelize all the activation ops (see the sketch after this list).
2. Parallelize the performance-critical path of the LRN op, which makes the ONNX Model Zoo GoogLeNet model run 60% faster (latency reduced from 21 ms to 13 ms).
3. Make the Gemm-Activation fusion support all the activation ops. Before this change, it only supported LeakyRelu/Relu/Sigmoid/Tanh.
4. Delete onnxruntime/test/framework/op_kernel_test.cc because the file is almost empty.
5. Remove the logging in KernelRegistry::TryFindKernel and return a Status with an error message instead.
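Illustrative sketch of item 1, with a simplified stand-in for the thread pool's parallel-for helper (the real helper lives in onnxruntime's concurrency::ThreadPool):
```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

// Simplified stand-in for the thread pool's parallel-for helper.
void ParallelFor(size_t n, const std::function<void(size_t, size_t)>& fn) {
  size_t workers = std::max<size_t>(1, std::thread::hardware_concurrency());
  size_t chunk = (n + workers - 1) / workers;
  std::vector<std::thread> threads;
  for (size_t start = 0; start < n; start += chunk)
    threads.emplace_back(fn, start, std::min(start + chunk, n));
  for (auto& t : threads) t.join();
}

// Element-wise activations are embarrassingly parallel: each range of the
// output can be computed independently, shown here for Sigmoid.
void SigmoidParallel(const float* x, float* y, size_t n) {
  ParallelFor(n, [&](size_t begin, size_t end) {
    for (size_t i = begin; i < end; ++i) y[i] = 1.f / (1.f + std::exp(-x[i]));
  });
}
```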