* Add minimal build option to build.py
Group some of the build settings so binary size reduction options are all together
Make some cmake variable naming more consistent
Replace usage of std::hash with MurmurHash3 for the kernel hash. std::hash is implementation-dependent, so its output can't be relied on across compilers or runs.
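The reason a fixed hash matters here is that the C++ standard does not pin down std::hash's algorithm, so a persisted hash may not reproduce on another toolchain. A minimal Python sketch of 32-bit MurmurHash3 for illustration (the actual kernel-side code is C++ and not shown here):

```python
def murmur3_32(data: bytes, seed: int = 0) -> int:
    """Illustrative 32-bit MurmurHash3: same input always yields the same hash."""
    c1, c2, mask = 0xCC9E2D51, 0x1B873593, 0xFFFFFFFF
    h = seed & mask
    n = len(data)
    # Process full 4-byte blocks.
    for i in range(0, n - n % 4, 4):
        k = int.from_bytes(data[i:i + 4], "little")
        k = (k * c1) & mask
        k = ((k << 15) | (k >> 17)) & mask  # rotl32(k, 15)
        k = (k * c2) & mask
        h ^= k
        h = ((h << 13) | (h >> 19)) & mask  # rotl32(h, 13)
        h = (h * 5 + 0xE6546B64) & mask
    # Mix in the 1-3 trailing bytes, if any.
    tail = data[n - n % 4:]
    k = 0
    for b in reversed(tail):
        k = (k << 8) | b
    if tail:
        k = (k * c1) & mask
        k = ((k << 15) | (k >> 17)) & mask
        k = (k * c2) & mask
        h ^= k
    # Finalization: mix the length and run the avalanche steps.
    h ^= n
    h ^= h >> 16
    h = (h * 0x85EBCA6B) & mask
    h ^= h >> 13
    h = (h * 0xC2B2AE35) & mask
    h ^= h >> 16
    return h
```

Unlike std::hash, this produces identical values on every platform, which is what a serialized kernel-hash format requires.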
Add initial documentation and an ONNX to ORT model conversion script
Misc cleanups of minimal build breaks.
* cancel nightly build on pyop
* setup ci pipeline for build of reduced ops
* add back c# test
* remove debugging print
* add testing model
* add more arg in pipeline script
* disable pipeline trigger temporarily
* fix yaml format
* fix pipeline error
* rid c# test
* add ops for test cases
* add Conv from domain com.microsoft.nchwc
* remove --reduce_ops
* fix typo
* remove --build_java
* add test case for excluded op
* update doc with --skip_test
* formatting code, renaming files and simplify yaml
* remove debug build from yaml
* remove surplus ops from included_ops.txt
* add MinSizeRel build to yaml
* rename test cases and models
* exclude ir test from minimum build
* restrict ir test to be only applied to reduced ops build
* enable rejecting models based on onnx opset
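Rejecting a model by ONNX opset comes down to checking each of the model's opset_import entries against the highest version the build supports. A hedged sketch over plain (domain, version) pairs; the cap values and function name are illustrative, not the actual ORT implementation:

```python
# Hypothetical per-domain opset caps; in ORT the real caps come from the build.
SUPPORTED_MAX_OPSET = {"": 12, "ai.onnx.ml": 2}

def is_model_supported(opset_imports):
    """opset_imports: iterable of (domain, version) pairs from the model.

    Reject when a domain is unknown or its version exceeds the cap.
    ONNX treats the empty string and "ai.onnx" as the same default domain.
    """
    for domain, version in opset_imports:
        if domain == "ai.onnx":
            domain = ""
        cap = SUPPORTED_MAX_OPSET.get(domain)
        if cap is None or version > cap:
            return False
    return True
```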
* enable unreleased opsets in linux and mac CI
* test fixes and more updates
* enable unreleased opsets in CI builds
* enable released opsets in Linux CIs
* try fix windows ci yml
* yml fixes
* update yml
* yml updates post master merge
* review comments
* bug fix
* Copy samples to build folder and load models from there. Fix CI
* This PR also includes a fix to path validation for save_as_onnx API
* Add torchtext to CI for GPU training
* Remove new frontend tests from CI
Co-authored-by: Thiago Crepaldi <thiago.crepaldi@microsoft.com>
* Removed building ngraph from source
* Disabled some tests temporarily
* Enabled softmax for all dims
* Added onnx importer to link libraries
* int64 changes
* fixed
* temp
* slice update: start and end need to be initializers
* Disabled GatherND, ScatterND, ReverseSequence operators
* Added supported ops instead of unsupported ops
* Set precision only for CPU
* Removed some unnecessary conditions
* Fixed segfault in slice
* Softmax restriction removed
* changes
* Setting precision for all plugins
* Changes added to include precision and supported ops for GPU and VPU
* branch op support
* checking for disabled python test failure
* mapped input names and tensors directly rather than copying, which was leading to a mismatch
* last index is not supported
mkldnn does not support pow between integers
* included the code changes
* Rename inner-scoped variable to avoid MSVC warning
* applied changes to vadm as well and removed the utility function getinputtensors() completely
* OpenVINO multi version support: CMake changes
* OpenVINO multi version support: C++ support
* removed commented code
* Remove redundant code lines
* Revert "Rename inner-scoped variable to avoid MSVC warning"
This reverts commit 2f650493162675bc6fb70730de9656ec400be332.
Merged separately in master.
* vadm changes: disabled reduction op test
* putting test_gather_negative_indices in unsupported list for now
* Update MCR Dockerfile with 2020.4
Installs OpenVINO 2020.4 from deb packages via APT tool.
* Update build docs with 2020.4 info
* Update dockerfile with OV 2020.4 info
Instructions for building the OpenVINO-based docker image no longer require
downloading the installer package, as it is installed by the dockerfile
using the OpenVINO 2020.4 APT package for Ubuntu 18.04.
* Added constant folding bypass logic
* Added cout statements for ci
* Added NDEBUG flag for debug symbols
* Update Ops info in docs
* fixes multiple unit tests
* mathoptest.ceil disabled for gpu and myriad
* activation test temp disabled
* Fix models for CPU
* Fixed a syntax error
* local commit
* fixing unit tests for myriad
* Fixed Variadic Split, Topk issues
* fix_model commit
* Fix models in myriad
* Added ifdefs for OpenVINO 2020.4
* temp
* made some changes to the Not operator
* Added unused parameter
* relu enabled
* Fixed bug in Conv output
* Consolidated GPU failing tests into one category
* Made it compatible with InternalCI 2020.4
* Made changes for ngraph
* Disabled test for mask,fastercnn,tinyyolov3
* Removed proxy for ci
* run_dockerbuild.sh restored to same version
* Updated documentation for 2020.4
* Removed FP32 to FP16 transformation for GPU
* Disabled Coreml-FNS-Candy model test
* Added FP16 transformations
Co-authored-by: sfatimar <sahar.fatima@intel.com>
Co-authored-by: Manohar Karlapalem <manohar.karlapalem@intel.com>
Co-authored-by: sfatimar <64512376+sfatimar@users.noreply.github.com>
Co-authored-by: gundaarx <aravindx.gunda@intel.com>
* Add ORTTrainerOptions class for the new pytorch frontend (#4382)
Add ORTTrainerOptions class and some placeholders
* Add _ORTTrainerModelDesc to perform validation for model description (#4416)
* Add Loss Scaler classes to the new frontend (#4306)
* Add TrainStepInfo used on the new frontend API (#4256)
* Add Optimizer classes to the new frontend (#4280)
* Add LRScheduler implementation (#4357)
* Add basic ORTTrainer API (#4435)
This PR presents the public API for ORTTrainer for short-term development.
It also validates and saves input parameters, which will be used in the
next stages, such as building the ONNX model, post-processing it, and
configuring the training session.
* Add opset_version into ORTTrainerOptions and change type of ORTTrainer.loss_fn (#4592)
* Update ModelDescription and minor fix on ORTTrainer ctor (#4605)
* Update ModelDescription and minor fix on ORTTrainer/ORTTrainerOptions
This PR keeps the public API intact, but changes how the model description is stored on the backend.
Currently, users create a dict with two lists of tuples.
One list is called 'inputs' and each of its tuples has the format tuple(name, shape).
The second list is called 'outputs' and each of its tuples can be either tuple(name, shape) or tuple(name, shape, is_loss).
With this PR, when this dict is passed in to ORTTrainer, it is fully validated as usual.
However, tuples are internally replaced by namedtuples, and all output tuples take the
tuple(name, shape, is_loss) format instead of is_loss being optionally present.
In addition to that normalization of the internal representation (which eases coding),
two internal methods were created to replace a namedtuple(name, shape) with namedtuple(name, shape, dtype)
or namedtuple(name, shape, is_loss, dtype), depending on whether the tuple is an input or an output.
This is necessary because ORTTrainer finds out the data types of each input/output during model export to ONNX.
Finally, a minor fix was made to ORTTrainer: it could initialize ORTTrainerOptions incorrectly when options=None.
* Rename input name for test
* Add ONNX Model Export to New Frontend (#4612)
Co-authored-by: Rayan Krishnan <t-rakr@OrtDevTest2v100.af05slrtruoetgaxwwjv5nsq5e.px.internal.cloudapp.net>
Co-authored-by: Thiago Crepaldi <thiago.crepaldi@microsoft.com>
* Create training session + minor improvements (#4668)
Co-authored-by: Rayan Krishnan <t-rakr@OrtDevTest2v100.af05slrtruoetgaxwwjv5nsq5e.px.internal.cloudapp.net>
* Save ONNX model in file (#4671)
Co-authored-by: Rayan Krishnan <t-rakr@OrtDevTest2v100.af05slrtruoetgaxwwjv5nsq5e.px.internal.cloudapp.net>
* Add eval step (#4674)
Co-authored-by: Rayan Krishnan <t-rakr@OrtDevTest2v100.af05slrtruoetgaxwwjv5nsq5e.px.internal.cloudapp.net>
* Add train_step (#4677)
Co-authored-by: Rayan Krishnan <t-rakr@OrtDevTest2v100.af05slrtruoetgaxwwjv5nsq5e.px.internal.cloudapp.net>
* Add LR Scheduler (#4694)
Co-authored-by: Rayan Krishnan <t-rakr@OrtDevTest2v100.af05slrtruoetgaxwwjv5nsq5e.px.internal.cloudapp.net>
Co-authored-by: Thiago Crepaldi <thiago.crepaldi@microsoft.com>
* Add deterministic compute tests (#4716)
Co-authored-by: Rayan Krishnan <t-rakr@OrtDevTest2v100.af05slrtruoetgaxwwjv5nsq5e.px.internal.cloudapp.net>
Co-authored-by: Thiago Crepaldi <thiago.crepaldi@microsoft.com>
* Add legacy vs experimental ORTTrainer accuracy comparison (#4727)
Co-authored-by: Rayan Krishnan <t-rakr@OrtDevTest2v100.af05slrtruoetgaxwwjv5nsq5e.px.internal.cloudapp.net>
Co-authored-by: Thiago Crepaldi <thiago.crepaldi@microsoft.com>
* Add Mixed precision/LossScaler + several fixes (#4739)
Additionally to the mixed precision/loss scaler code, this PR includes:
* Fix CUDA training
* Add optimization_step into TrainStepInfo class
* Refactor LRSCheduler to use optimization_step instead of step
* Updated several default values in ORTTrainerOptions
* Add initial Gradient Accumulation support (untested)
* Fix ONNX model post processing
* Refactor unit tests
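A dynamic loss scaler of the kind referenced here typically doubles the scale after a run of overflow-free steps and halves it when gradients overflow. A minimal sketch under assumed defaults; the class name and constants are illustrative, not ORT's actual LossScaler:

```python
class DynamicLossScaler:
    """Illustrative dynamic loss scaling for mixed-precision training."""

    def __init__(self, init_scale=2.0 ** 16, up_interval=2000, factor=2.0,
                 min_scale=1.0, max_scale=2.0 ** 24):
        self.loss_scale = init_scale
        self.up_interval = up_interval   # good steps required before growing
        self.factor = factor
        self.min_scale = min_scale
        self.max_scale = max_scale
        self._good_steps = 0

    def update(self, overflow_detected):
        """Call once per optimization step; returns the new loss scale."""
        if overflow_detected:
            # Gradients were non-finite: shrink the scale, restart the streak.
            self.loss_scale = max(self.min_scale, self.loss_scale / self.factor)
            self._good_steps = 0
        else:
            self._good_steps += 1
            if self._good_steps >= self.up_interval:
                self.loss_scale = min(self.max_scale,
                                      self.loss_scale * self.factor)
                self._good_steps = 0
        return self.loss_scale
```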
* Add ONNX BERT example + minor fixes (#4757)
* Fix training issue when passing ONNX file into ORTTrainer
Co-authored-by: Thiago Crepaldi <thiago.crepaldi@microsoft.com>
Co-authored-by: Rayan Krishnan <t-rakr@OrtDevTest2v100.af05slrtruoetgaxwwjv5nsq5e.px.internal.cloudapp.net>
* Add Dynamic Shape support (#4758)
* Update DeepSpeed Zero Stage option to a separate option group (#4772)
* Add support to fetches (#4777)
* Add Gradient Accumulation Steps support (#4793)
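Gradient accumulation as added here amounts to summing gradients over N micro-batches and applying a single optimizer update with the averaged gradient. A framework-free sketch; the helper name and callback shape are hypothetical:

```python
def accumulate_and_step(grad_batches, apply_update, accumulation_steps):
    """Sum per-micro-batch gradients; invoke apply_update once per N batches.

    grad_batches: iterable of gradient values (plain floats for simplicity).
    apply_update: callback receiving the averaged gradient; its return values
    are collected so the caller can see each applied update.
    """
    acc, applied = 0.0, []
    for step, g in enumerate(grad_batches, start=1):
        acc += g
        if step % accumulation_steps == 0:
            applied.append(apply_update(acc / accumulation_steps))
            acc = 0.0  # start the next accumulation window
    return applied
```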
* Fix Dynamic Axes feature and add unit test (#4795)
* Add frozen weights test (#4807)
* Move new pytorch front-end to 'experimental' namespace (#4814)
* Fix build
Co-authored-by: Rayan-Krishnan <rayankrishnan@live.com>
Co-authored-by: Rayan Krishnan <t-rakr@OrtDevTest2v100.af05slrtruoetgaxwwjv5nsq5e.px.internal.cloudapp.net>
1. Publish the image to ACR, instead of building it every time for every PR
2. Make USE_MKLML and USE_OPENMP able to co-exist. Currently both are enabled in our Linux CI build, but only one of them actually takes effect.
3. Split nuphar and DNNL into separate pipelines.
4. Fix two warnings in onnxruntime/core/optimizer/matmul_scale_fusion.cc and onnxruntime/test/tvm/tvm_basic_test.cc.
5. Update the manylinux2010_x86_64 image to the latest.
* bump cswinrt version
* add cswinrt
* test dotnetcore 3.0
* rename build package source
* set folder path to the package source and not the version
* refactor .netframework tests
* build .net core anycpu
Co-authored-by: Sheil Kumar <sheilk@microsoft.com>
Add 'Install ONNX' step to Windows GPU pipeline
Previously this was not a problem because the onnxruntime python package explicitly declared a dependency on ONNX, so ONNX would get installed when we tested onnxruntime. However, that dependency was removed in #4073
1. Avoid building every historical ONNX version in our CI; it is costly and prone to failure.
2. Run docker commands without sudo. Previously the user was not in the docker group; now Azure DevOps Services has added it.
* Revert "Temporarily remove dnnl from Linux CI build to unblock the whole team (#4266)"
Previously it failed because it used too much memory.
Now we only run the dnnl EP with opset12 models in unit tests, to reduce peak memory usage.
* Enable onnxruntime_test_all for NNAPI EP
* switch to using ninja for Android CI
* make the Android emulator boot faster in Android CI
* simplify adb push
* more style change
* more tweaking on android ci
* build.py style update
* build e2e cppwinrt tests
* add use nuget task
* make all references to package version prop/target-ified
* remove dupe props/targets reference
* work around project.assets.json error by deleting it
* powershell test invocation
* switch to batch script
* print debug info
* update x86->x64
* stdio.h
* pushd/popd
* add csharp tests
* package.config -> packages.config
* typo
* x86 -> anycpu
* debug is default
* add test path
* update csproj as well
* debug
* really replace all package versions
* debug output
* really use [PackageVersion]
* sleep instead of converting async operation to task and waiting
* don't close software bitmap
* switch to powershell script
* remove binding check
* continue on failure
* continue on error action
* continueOnError and errorActionPreference
* tabbing
Co-authored-by: Sheil Kumar <sheilk@microsoft.com>
* Change NNAPI CI to run on new NNAPI EP
* update Android CI to macOS 10.15 and remove the cmake install
* update the Android CI to target Android API level 29
* remove unnecessary ndk install git submodule call