replace number matching with relaxed comparison in frontend tests
Co-authored-by: liqun <liqun@OrtTrainingDev4.af05slrtruoetgaxwwjv5nsq5e.px.internal.cloudapp.net>
* CMake changes for 2021.1
* added new ov version 2020.1 for faster rcnn
* Added missing defs
* equal op modified
* changes to incorporate faster rcnn
* backend util.cc
* hddl_plugin_config.hpp is deprecated; use hddl_config.hpp instead
* changing myriad precision bool to i32
* gather is not enabled for gpu
* Conv2D and pool test: auto_pad attribute should not be null
* negative indices are not valid for scatter op in myriad
* non max suppression op only supported in faster rcnn mode
* maxpool indices output is not supported
* Cleaned redundant code in backends
* Added ifdefs for HDDL config
* cast output dimensions check
TopK operator's k input seems to be resolved only for MYRIAD, as it is
throwing issues for Mask RCNN; needs verification
* we are limiting the subgraph size to 3 here
* taking care of review comments
* Fixed minor bugs
* Modified Slice op checks
* Added NonZero, Upsample
* Removed TopK if it's in the middle of a subgraph
* incorporated upsample conditions too
* Dockerfile changes for 2021.1 release
* Dockerfile apt-key update
* Minor fixes
* ceil condition added again
* Fixed few gpu models
* Disabled LSTM and yolov3 in ModelTests
* python softmax cross entropy tests and negative log likelihood
* Update Build.md
Updated for openvino 2021.1
* Update OpenVINO-ExecutionProvider.md
update openvino execution provider for 2021.1
* Update README.md
updated new openvino version
* Update Dockerfile.openvino
added environment variable for DEBIAN_FRONTEND
* Fixed myriad models
* Fixed gather condition
* Fixed mask rcnn model on myriad
* Modified Gather condition
* set default target of MCR dockerfile to MYRIAD_FP16
* Fixed tinyyolov3 on CPU
* Update OpenVINO-ExecutionProvider.md
update openvino execution provider documentation
* Update Dockerfile.openvino
Removed environment variable
* Update OpenVINO-ExecutionProvider.md
update image manipulation networks supported
* Update onnx_backend_test_series_filters.jsonc
removed test_upsample_nearest from cpu test cases
* New InternalCI changes for 2021.1
* Full protobuf removed for OpenVINO
* Protobuf added
* Updated with apt installation for openvino
* Revert the testing changes
* Reverted testing changes
* File permissions changed back to original
* Deleted openvino installation and cmake change
* Optimized Dockerfile
Removed unnecessary cmake installation, numpy
* Added missing ifdefs
* delete array fix
* backend_utils.cc output_shape
* Revert "set default target of MCR dockerfile to MYRIAD_FP16"
This reverts commit 928d3e2b71e2f589cf51dacd3a133951cf9ca18d.
Co-authored-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
Co-authored-by: sfatimar <sahar.fatima@intel.com>
Co-authored-by: suryasidd <48925384+suryasidd@users.noreply.github.com>
Co-authored-by: S. Manohar Karlapalem <manohar.karlapalem@intel.com>
Co-authored-by: Aravind <aravindx.gunda@intel.com>
Co-authored-by: Aravind Gunda <38353114+gundaarx@users.noreply.github.com>
* add TensorBoard, remove torch.distributed.launch because ORT NCCL depends on MPI; use MPI to launch parallel training.
Co-authored-by: liqun <liqun@OrtTrainingDev4.af05slrtruoetgaxwwjv5nsq5e.px.internal.cloudapp.net>
* Copy samples to build folder and load models from there. Fix CI
* This PR also includes a fix to path validation for save_as_onnx API
* Add torchtext to CI for GPU training
* Remove new frontend tests from CI
Co-authored-by: Thiago Crepaldi <thiago.crepaldi@microsoft.com>
* Removed building ngraph from source
* Disabled some tests temporarily
* Enabled softmax for all dims
* Added onnx importer to link libraries
* int64 changes
* fixed
* temp
* Slice update: start and end need to be initializers
* Disabled GatherND, ScatterND, ReverseSequence operators
* Added supported ops instead of unsupported ops
* Set precision only for CPU
* Removed some unnecessary conditions
* Fixed segfault in slice
* Softmax restriction removed
* changes
* Setting precision for all plugins
* Changes added to include precision
and supported ops for gpu and vpu
* branch op support
* checking for disabled python test failure
* mapped input names and tensors directly rather than copying, which was leading to a mismatch
* last index is not supported
mkldnn does not support pow between integers
* included the code changes
* Rename inner-scoped variable to avoid MSVC warning
* applied changes to VAD-M as well and removed the utility function
getinputtensors() completely
* OpenVINO multi version support: CMake changes
* OpenVINO multi version support: C++ support
* removed commented code
* Remove redundant code lines
* Revert "Rename inner-scoped variable to avoid MSVC warning"
This reverts commit 2f650493162675bc6fb70730de9656ec400be332.
Merged separately in master.
* VAD-M changes: disabled reduction op test
* putting test_gather_negative_indices in unsupported list for now
* Update MCR Dockerfile with 2020.4
Installs OpenVINO 2020.4 from deb packages via APT tool.
* Update build docs with 2020.4 info
* Update dockerfile with OV 2020.4 info
Instructions for building the OpenVINO-based docker image no longer require
downloading the installer package, as it is installed by the dockerfile
using the OpenVINO 2020.4 APT package for Ubuntu 18.04
* Added constant folding bypass logic
* Added cout statements for ci
* Added NDEBUG flag for debug symbols
* Update Ops info in docs
* fixes multiple unit tests
* mathoptest.ceil disabled for gpu and myriad
* activation test temp disabled
* Fix models for CPU
* Fixed a syntax error
* local commit
* fixing unit tests for myriad
* Fixed Variadic Split, Topk issues
* fix_model commit
* Fix models in myriad
* Added ifdefs for OpenVINO 2020.4
* temp
* made some changes to the Not operator
* Added unused parameter
* relu enabled
* Fixed bug in Conv output
* Consolidated GPU failing tests into one category
* Made it compatible to InternalCI 2020.4
* Made changes for ngraph
* Disabled test for mask,fastercnn,tinyyolov3
* Removed proxy for ci
* run_dockerbuild.sh restored to same version
* run_dockerbuild.sh restored to same version
* run_dockerbuild.sh restored to same version
* Updated documentation for 2020.4
* Removed FP32 to FP16 transformation for GPU
* Disabled Coreml-FNS-Candy model test
* Added FP16 transformations
Co-authored-by: sfatimar <sahar.fatima@intel.com>
Co-authored-by: Manohar Karlapalem <manohar.karlapalem@intel.com>
Co-authored-by: sfatimar <sahar.fatima@intel.com>
Co-authored-by: sfatimar <64512376+sfatimar@users.noreply.github.com>
Co-authored-by: intel <you@example.com>
Co-authored-by: gundaarx <aravindx.gunda@intel.com>
* Add ORTTrainerOptions class for the new pytorch frontend (#4382)
Add ORTTrainerOptions class and some placeholders
* Add _ORTTrainerModelDesc to perform validation for model description (#4416)
* Add Loss Scaler classes to the new frontend (#4306)
* Add TrainStepInfo used on the new frontend API (#4256)
* Add Optimizer classes to the new frontend (#4280)
* Add LRScheduler implementation (#4357)
* Add basic ORTTrainer API (#4435)
This PR presents the public API for ORTTrainer for the short-term
development.
It also validates and saves input parameters, which will be used in
later stages, such as building the ONNX model, post-processing the model and
configuring the training session.
* Add opset_version into ORTTrainerOptions and change type of ORTTrainer.loss_fn (#4592)
* Update ModelDescription and minor fix on ORTTrainer ctor (#4605)
* Update ModelDescription and minor fix on ORTTrainer/ORTTrainerOptions
This PR keeps the public API intact, but changes how model description is stored on the backend
Currently, users create a dict with two lists of tuples.
One list is called 'inputs' and each of its tuples has the format tuple(name, shape).
The second list is called 'outputs' and each tuple can be either tuple(name, shape) or tuple(name, shape, is_loss).
With this PR, when this dict is passed in to ORTTrainer, it is fully validated as usual.
However, tuples are internally replaced by namedtuples, and all output tuples will have
the tuple(name, shape, is_loss) format instead of is_loss being optionally present.
In addition to that normalization in the internal representation (which eases coding),
two internal methods were created to replace a namedtuple(name, shape) with namedtuple(name, shape, dtype)
or namedtuple(name, shape, is_loss, dtype), depending on whether the tuple is an input or an output.
This is necessary because ORTTrainer finds out the data type of each input/output during model export to ONNX.
Finally, a minor fix was done on ORTTrainer: it could initialize ORTTrainerOptions incorrectly when options=None
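A minimal sketch of the normalization described above, assuming illustrative field and helper names (they are not the actual ORTTrainer internals):

```python
from collections import namedtuple

# Hypothetical internal representations; names are illustrative only.
Input = namedtuple("Input", ["name", "shape"])
Output = namedtuple("Output", ["name", "shape", "is_loss"])

def normalize_model_desc(model_desc):
    """Replace user-supplied tuples with namedtuples, making the
    optional is_loss field explicit on every output tuple."""
    inputs = [Input(*t) for t in model_desc["inputs"]]
    outputs = []
    for t in model_desc["outputs"]:
        name, shape = t[0], t[1]
        # is_loss defaults to False when the user omitted it.
        is_loss = t[2] if len(t) > 2 else False
        outputs.append(Output(name, shape, is_loss))
    return {"inputs": inputs, "outputs": outputs}

desc = normalize_model_desc({
    "inputs": [("x", [1, 784])],
    "outputs": [("loss", [], True), ("probs", [1, 10])],
})
```

After normalization every output carries an explicit is_loss flag, so downstream code never needs to branch on tuple length.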
* Rename input name for test
* Add ONNX Model Export to New Frontend (#4612)
Co-authored-by: Rayan Krishnan <t-rakr@OrtDevTest2v100.af05slrtruoetgaxwwjv5nsq5e.px.internal.cloudapp.net>
Co-authored-by: Thiago Crepaldi <thiago.crepaldi@microsoft.com>
* Create training session + minor improvements (#4668)
Co-authored-by: Rayan Krishnan <t-rakr@OrtDevTest2v100.af05slrtruoetgaxwwjv5nsq5e.px.internal.cloudapp.net>
* Save ONNX model in file (#4671)
Co-authored-by: Rayan Krishnan <t-rakr@OrtDevTest2v100.af05slrtruoetgaxwwjv5nsq5e.px.internal.cloudapp.net>
* Add eval step (#4674)
Co-authored-by: Rayan Krishnan <t-rakr@OrtDevTest2v100.af05slrtruoetgaxwwjv5nsq5e.px.internal.cloudapp.net>
* Add train_step (#4677)
Co-authored-by: Rayan Krishnan <t-rakr@OrtDevTest2v100.af05slrtruoetgaxwwjv5nsq5e.px.internal.cloudapp.net>
* Add LR Scheduler (#4694)
Co-authored-by: Rayan Krishnan <t-rakr@OrtDevTest2v100.af05slrtruoetgaxwwjv5nsq5e.px.internal.cloudapp.net>
Co-authored-by: Thiago Crepaldi <thiago.crepaldi@microsoft.com>
* Add deterministic compute tests (#4716)
Co-authored-by: Rayan Krishnan <t-rakr@OrtDevTest2v100.af05slrtruoetgaxwwjv5nsq5e.px.internal.cloudapp.net>
Co-authored-by: Thiago Crepaldi <thiago.crepaldi@microsoft.com>
* Add legacy vs experimental ORTTrainer accuracy comparison (#4727)
Co-authored-by: Rayan Krishnan <t-rakr@OrtDevTest2v100.af05slrtruoetgaxwwjv5nsq5e.px.internal.cloudapp.net>
Co-authored-by: Thiago Crepaldi <thiago.crepaldi@microsoft.com>
* Add Mixed precision/LossScaler + several fixes (#4739)
In addition to the mixed precision/loss scaler code, this PR includes:
* Fix CUDA training
* Add optimization_step into TrainStepInfo class
* Refactor LRSCheduler to use optimization_step instead of step
* Updated several default values at ORTTrainerOptions
* Add initial Gradient Accumulation support (untested)
* Fix ONNX model post processing
* Refactor unit tests
* Add ONNX BERT example + minor fixes (#4757)
* Fix training issue when passing ONNX file into ORTTrainer
Co-authored-by: Thiago Crepaldi <thiago.crepaldi@microsoft.com>
Co-authored-by: Rayan Krishnan <t-rakr@OrtDevTest2v100.af05slrtruoetgaxwwjv5nsq5e.px.internal.cloudapp.net>
* Add Dynamic Shape support (#4758)
* Update DeepSpeed Zero Stage option to a separate option group (#4772)
* Add support to fetches (#4777)
* Add Gradient Accumulation Steps support (#4793)
* Fix Dynamic Axes feature and add unit test (#4795)
* Add frozen weights test (#4807)
* Move new pytorch front-end to 'experimental' namespace (#4814)
* Fix build
Co-authored-by: Rayan-Krishnan <rayankrishnan@live.com>
Co-authored-by: Rayan Krishnan <t-rakr@OrtDevTest2v100.af05slrtruoetgaxwwjv5nsq5e.px.internal.cloudapp.net>
1. Publish the image to ACR, instead of building it every time for every PR
2. Make USE_MKLML and USE_OPENMP able to coexist. Currently both are enabled in our Linux CI build, but only one of them takes effect.
3. Split nuphar and DNNL into separate pipelines.
4. Fix two warnings in onnxruntime/core/optimizer/matmul_scale_fusion.cc and onnxruntime/test/tvm/tvm_basic_test.cc.
5. Update the manylinux2010_x86_64 image to the latest.
1. Avoid building every historical ONNX version in our CI; it is costly and prone to failure.
2. Run docker commands without sudo. Previously the user was not in the docker group; Azure DevOps Service has now added it.
Add transformer glue test example to show how to use ORTTrainer to fine-tune a transformer model
Co-authored-by: liqun <liqun@OrtTrainingDev4.af05slrtruoetgaxwwjv5nsq5e.px.internal.cloudapp.net>
* Remove 'model_.' prefix for onnx model initializers in training
* fix test case remove redundant device test
* rename
* Fix state_dict/load_state_dict with frozen_weight
* nit
* Add monkey patch for pt opset 10
* remove pt patch in CI
* nit: newline
Update install_deps.sh to use relative path from script directory to symbolic_opset10.py. This allows install_deps.sh to be called from different working directories.
* Enable running PEP8 checks via flake8 as part of the build if flake8 is installed.
Update scripts in \tools and \onnxruntime\python. Excluding \onnxruntime\python\tools which needs a lot more work to be PEP8 compliant. Also excluding orttraining\tools for the same reason.
Install flake8 as part of the static_analysis build task in the Win-CPU CI so the checks are run in one CI build.
Update coding standards doc.
* initial change to transformer.py
* prepare e2e transformer tests
* refactor transformer tests
* put test python files in a flat folder
* fix typo pip install transform(s)
* python 3.6
* python version to 3.6 in install_ubuntu.sh
* remove argparser
* to use opset ver 12
* workaround loss_scale naming patch in case of loss_fn_
* assign self.loss_fn_ so it can be checked
* skip a few un-needed post-process steps
* fix loss_scale_input_name, clean up post process steps
* skip non-frontend tests
* move cpu/cuda related files to corresponding cpu/cuda folder (#3668)
Co-authored-by: Weixing Zhang <wezhan@microsoft.com>
* type cast for ratio is not necessary for dropout (#3682)
Co-authored-by: Weixing Zhang <wezhan@microsoft.com>
* ThrustAllocator is not needed since cub is used directly for gather now. (#3683)
Co-authored-by: Weixing Zhang <wezhan@microsoft.com>
* GatherND-12 Implementation (#3645)
* Renamed, UT passing
* Move GatherND CUDA Kernel into onnxruntime
* Merge GatherNDOpTest
* Refactor Test code
* Merge CPU Kernel Impl
* Handle Negative Indices, Fix UT
* Improve CUDA kernel to handle negative index
* Minor Fixes
* Preserve GatherND-1 Cuda kernel
* Fix Mac build
* fix UT
* Fix Build
* fix GatherNDOpTest.double > CUDA error cudaErrorInvalidDeviceFunction:invalid device function
Co-authored-by: Sherlock Huang <bahuang@OrtTrainingDev3.af05slrtruoetgaxwwjv5nsq5e.px.internal.cloudapp.net>
Co-authored-by: Peng Wang (pengwa) <pengwa@microsoft.com>
* update with reviewers' comments
* testBertTrainingGradientAccumulation was not using rtol and could occasionally fail with small (e-06) differences
* fix merge mistakes
Co-authored-by: liqun <liqun@OrtTrainingDev4.af05slrtruoetgaxwwjv5nsq5e.px.internal.cloudapp.net>
Co-authored-by: Weixing Zhang <weixingzhang@users.noreply.github.com>
Co-authored-by: Weixing Zhang <wezhan@microsoft.com>
Co-authored-by: Sherlock <baihan.huang@gmail.com>
Co-authored-by: Sherlock Huang <bahuang@OrtTrainingDev3.af05slrtruoetgaxwwjv5nsq5e.px.internal.cloudapp.net>
Co-authored-by: Peng Wang (pengwa) <pengwa@microsoft.com>
* Added FP16 transformations
* Revert "Added CMAKE_BUILD_TYPE to make building dynamic"
This reverts commit d3e17af1af655cfdc4d2fec33f52055caa525e85.
* Added FP16 transformations for FP16 builds
* Backend logic cleanup
Cleans the backend (intel_graph.*) code in the following ways:
1. Minimize global usage: since all the IR graphs need to be
regenerated on every Infer, it is bad practice to rely on globals
for saving and using them, as multiple readers and
writers to the same global variable lead to incorrect usage or
contention. This change replaces globals with locals where possible.
It also fixes an existing bug caused by
incorrect global usage.
2. Remove all unused functions.
3. Remove all unused headers and preprocessor directives.
* removed commented out code
* Disabled default optimization for Intel EP
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Fix missed plugins.xml for python bindings
* Fixed the build after latest master changes
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Disabled unsupported ops for accelerators
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Added some more disabled ops
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Added environment variable to enable debugging
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Added more debug statements
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Fixed unsupported ops list for GPU and VPU
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Fixed unsqueeze unit tests
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Added error message to the status
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Overwrite Model proto with shape info from data
Overwrites the shape info of the ModelProto with the shapes from the
actual input data. Needed for inferring models with dynamic
shapes.
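A minimal sketch of the idea, using plain Python lists and dicts in place of the actual ModelProto shape structures (the function and argument names are illustrative, not the EP's real API):

```python
def overwrite_shapes(model_input_shapes, actual_inputs):
    """Replace symbolic dimensions in the model's declared input shapes
    with the concrete dimensions of the data actually being fed in."""
    resolved = {}
    for name, declared in model_input_shapes.items():
        actual = actual_inputs[name]
        # Keep dims that are already concrete ints; take symbolic
        # (string-valued) dims from the shape of the real input data.
        resolved[name] = [
            d if isinstance(d, int) else actual[i]
            for i, d in enumerate(declared)
        ]
    return resolved

shapes = overwrite_shapes(
    {"input": ["batch", 3, 224, 224]},  # symbolic 'batch' dim
    {"input": [8, 3, 224, 224]},        # concrete shape of the data
)
```

Once every dimension is concrete, the backend can compile the model as if it had static shapes.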
* Removed print statement and disabled where op
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Disabled Reshape with Empty initializer
* Added more debug statements for 1P
* Don't allow 1D inputs with symbol for dimension
* Disabled some 3rd phase ops
* Disabled split and added zero dimension check for OutputDefs
* Cleanup zero dimensionality check
* Added different data type check for inputs and initializers
* Added conditions for Mod, Cast and Pad
* Removed unused variable
* Disabled scan and added conditions for squeeze
* Added changes for fixing all C++ unit tests
* Implements Backend Manager class for caching
Backend Manager provides a layer of indirection between the EP interface
and the OV backend, providing caching services for models with
symbolic dims in their input shapes.
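An illustrative sketch of the caching layer described above, assuming a hypothetical compile callback (the class shape and names are not the actual C++ implementation):

```python
class BackendManager:
    """Compile a backend once per concrete input-shape combination and
    reuse it; compile_fn stands in for the real OV backend compilation."""

    def __init__(self, compile_fn):
        self._compile_fn = compile_fn
        self._cache = {}

    def get_backend(self, input_shapes):
        # Concrete input shapes form the cache key; symbolic dims in the
        # model are resolved per call from the actual inputs' shapes.
        key = tuple(tuple(s) for s in input_shapes)
        if key not in self._cache:
            self._cache[key] = self._compile_fn(input_shapes)
        return self._cache[key]

calls = []
mgr = BackendManager(lambda shapes: calls.append(shapes) or f"backend{len(calls)}")
b1 = mgr.get_backend([[1, 3, 224, 224]])
b2 = mgr.get_backend([[1, 3, 224, 224]])  # cache hit: no recompilation
```

Keying the cache on concrete shapes means a model with a symbolic batch dimension is compiled once per distinct batch size rather than on every Infer call.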
* clean up commented blocks
* clang-formatting
* Read I/O type info from ModelProto
Read the tensor element type information from ModelProto object,
as FusedNode is no longer available.
* code cleanup
* clang-formatting
* Added print statement for jenkins
* Disabled some python tests
* Changed the path of convert fp32 to fp16 hpp
* Added conditions for BatchNorm in GetCapability
* Fixed failed tests
* Revert "Added conditions for BatchNorm in GetCapability"
This reverts commit c3c28c3b00d27892c42546b35dacdd807a48ee90.
* Added Intel to onnxruntime backends
* pick up vars set by OV package setupvars.sh
* Added conditions for Identity
* remove a few cout prints
* Added conditions for GPU_FP32 unit tests
* Revert "pick up vars set by OV package setupvars.sh"
This reverts commit 8199e029c03eae21a1a7ef6bfdc93d00e5d0198b.
* Commented out fatal message for protobuf
* Might need to be removed
* Add interface class for current backend
* moved common logic to base class
* simplified cpu backend
* Removed unused headers
* use vectors to save i/o tensors for windows compatibility
* move utils fxns to backend_utils namespace
* rename ov_backend to ibackend
* Factory pattern for backend creation
* rename CPU backend to Basic backend
* renamed to vad-M and added to factory list
* Added conditions for VPU
* Added print statements
* Changed the logic for checking for symbolic shapes
* Modified logic for zero dimension check
* Removed VPU single dimension condition
* Removed comments
* Modified logic in DimensionCheck method
* Remove legacy OpenVINO EP
Remove all the legacy code for OpenVINO EP. UEP code will take its
place going forward.
This change does NOT remove OVEP files in the following areas, as
they will be reused by UEP:
1. Documentation: All .md files
2. Docker-related files
3. Python bindings
4. Java bindings
5. C# bindings
6. ORT Server
7. CI pipeline setup files
* Rename Intel EP to OpenVINO EP
* Added unique names to the subgraphs
* Removed subgraphs with only constant inputs
* Modified subgraph partitioning algorithm to remove const input subgraphs
* Apply suggestion to onnxruntime/core/providers/openvino/openvino_execution_provider.cc
* Tracking output names to fix the output order bug
* Changed output names to an unordered map
* Modified logic to check for symbolic input shapes
* Fixed a bug in Reshape check
* Added empty model path to Model constructor
* Made necessary changes to cmake to build from the binary package
* Changed INTEL_CVSDK_DIR to INTEL_OPENVINO_DIR
* Enable dyn device selection with C++ API
* Added Round operator to unsupported list
* Modified subgraph partition logic for MYRIAD
* Removed supported ops from the list
* Enable dyn dev selection in Py API's
* Add documentation for dynamic device selection
* Use MYRIAD || HDDL instead of VPU
* Removed temporary cast of Int64 to FP32
* Disabled unit Tests for CPU_FP32 and GPU_FP32
* Removed default "CPU" from unit tests to allow overriding
* Removed ops Concat, Squeeze, Unsqueeze from unsupported list
* Get the device id from info
* Removed overwriting device_id and precision
* Enabled ConvTranspose and EyeLike
* Reordered unsupported ops in alphabetical order
* Fixed syntax error
* Fixed syntax error
* Code clean-up: Handle exceptions, logs and formatting
Code formatted according to ORT coding guidelines.
* remove debug print from pybind code
* updated docs with ops and models
* formatting prints
* Added default values for c and j for openvino
* Overriding the values set for c and j to be 1
* BACKEND_OPENVINO should be empty if openvino is not in build
* Overriding c value with default for perftest
* fix VAD-M device string bug
* Add IE error details to exceptions
* Use IE specific device names in EP
* Add VAD-F (FPGA) device support
* Removed unnecessary libraries from whl package
* Code changes for Windows compatibility
* Add VAD-F option to python API
* [revert before merge] cmake changes for RC
* Enable Windows build in CMake
* Unset macro OPTIONAL for windows builds
inference_engine.hpp's include chain defines a macro 'OPTIONAL'
which conflicts with the onnx project's headers when using MSVC, so
we need to explicitly unset it for MSVC.
* Use a single copy of plugin/IE::Core
Defined as a static member in Backend manager
* Remove restriction of single subgraphs for myriad
* Passed subgraph name to Backend to enhance log statements
* Disabled zero dimension conditions
* Disabled concat to remove zero dims
* Enabled building ngraph as part of ORT
* Removed serializing and added versioning
* Fix CPU_FP32 unit tests
* Removed unnecessary condition
* add ngraph.so.0.0 to .whl
* Check for zero dimensions only for inputs and outputs
* Restrict loading only 10 subgraphs on myriad
* Build ngraph.dll within UEP. Doesn't link yet
* Rename Linux included libngraph.so to libovep_ngraph.so
Renames the locally built libngraph.so containing the ONNX importer to
libovep_ngraph.so in order to avoid linkage conflicts with the
libngraph.so supplied by the OpenVINO binary installer.
Applies only to Linux builds.
* use output_name cmake properties for lib name
* fix .so name format in lib_name.patch
* CMake code cleanup
* Rename WIN32 included ngraph.dll to ovep_ngraph.dll
To avoid conflict with ngraph.dll distributed by openvino.
* Added myriad config for networks without 4 dimensions
* Loading a maximum of 10 clusters for inference on myriad
* Refactor code and add Batching support
Encapsulate subgraph settings into context structs.
Add batching support for completely supported models.
* Disabled some broken tests
* use input_indexes to avoid batch-checking initializers
* Avoid static initialization order error on WOS
* Added candy to broken tests
* InternalCI changes for 2020.2
* Updated DLDT instructions
* Unsaved changed in install_openvino.sh
* Changes after manual check
* Remove custom ngraph onnx_import build for WOS
ONNX Importer on WOS does not have protobuf issue.
* Remove FP32ToFP16 ngraph pass
This conversion is performed implicitly within IE.
* Surround debug logic by #ifndef NDEBUG
* remove invalid TODO comments
* removed references to ngraph-ep
* clang-formatting
* remove commented code
* comment edits
* updating copyright year to that of first OpenVINO-EP release
* remove redundant log msg
* Modified operator and topology support
* Update build instructions
* doc formatting
* Fixed clip unit tests
* Revert "Remove FP32ToFP16 ngraph pass"
This reverts commit ec962ca5f315a5658ad980e740196f19de2639c1.
* Applying FP16 transformation only for GPU FP16
* Fixed GPU FP32 python tests
* automatically use full protobuf
* disable onnxrt server for now
* Disabled upsample
* update dockerfile instructions
* Removed MO paths and added ngraph path
* Remove OVEP from ORT Server docs
Will put it back in after validation
* Updated path to Ngraph lib
* Disabled Resize and some other python tests
* Removed unnecessary header files
* Use commit SHA to fetch ngraph repo
* Avoid un-needed file changes due to version update
* Fixed clip tests
* Fixed Pow, max and min onnx tests
* build.md doc typo
* Update cmake patch command for ngraph src
* remove dead cmake code for onnxruntime_USE_OPENVINO_BINARY
* use spaces instead of tab
* remove commented code
* Add info about protobuf version
* edit debug env var and enable for WIN32
* specify only version tag of 2020.2 for dockerbuilds
* remove unnecessary file changes
* Pass empty string as default argument to C# tests
* Use ${OPENVINO_VERSION} to name openvino install directory in CI builds
* Enabled unnecessarily disabled tests
* Fixed ngraph protobuf patch
* Fixed error in protobuf patch
* Revert "Use ${OPENVINO_VERSION} to name openvino install directory in CI builds"
This reverts commit 89e72adb8bf3b9712f5c81c5e13fe68c6c0df002.
* Remove unsetting OPTIONAL macro
This is no longer used in recent ONNX update onnx/onnx@da13be2,
so this unset workaround is no longer necessary.
* Use a null string default argument for C# API
* Set OpenVINO version yml files and pass to CI Docker builds
Git Tag info for DLDT as well as install directory are set
using this value.
This reverts commit 9fa9c20348ed72ae360a95c98e9b074d2f9fafc5.
* Documentation: recommendation and instructions for disabling ORT graph optimizations
* more doc updates
* Reduced the number of models according to CI time constraints
Co-authored-by: ynimmaga <yamini.nimmagadda@intel.com>
Co-authored-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
Co-authored-by: Mikhail Treskin <mikhail.treskin@intel.com>
Co-authored-by: mbencer <mateusz.bencer@intel.com>
Co-authored-by: Aravind <aravindx.gunda@intel.com>
Co-authored-by: suryasidd <48925384+suryasidd@users.noreply.github.com>
* add frontend MNIST test
* to use torch nightly with torchvision
* remove incorrect comment per reviewer's comment
* experiment torchvision import failure
* experiment install_deps.sh
* more experiment install_deps.sh
* experiment install_deps.sh with --upgrade
* Experiment with install_deps.sh.
* Experiment with install_ubuntu.sh.
* Use Ubuntu 18.04 and Python 3.6 for CI.
* Update cmake version for CI.
* Install MPI on Ubuntu 18.04 for CI.
* Increase tolerance for MNIST test.
* Go back to Ubuntu 16.04 for CI, fix installing from deadsnakes ppa.
* Clean-up.
* Update ort_trainer.py from ort_training.
* Get default Ubuntu Python ver back to 3.5.
* Add underscore to opset_version parameter name in ORTTrainer constructor.
* Move loss/model wrap before the call for sample output.
* Update expected values for MNIST test.
Co-authored-by: liqun <liqun@OrtTrainingDev4.af05slrtruoetgaxwwjv5nsq5e.px.internal.cloudapp.net>
Co-authored-by: Sergii Dymchenko <sedymche@microsoft.com>
1. Fix onnxruntime server docker file build failure. Tested with the notebook in the ONNX tutorial; it works well.
2. Delete the docker files for the other EPs, because they currently don't work and I don't have enough time to update them.
We want to implement SoftmaxCrossentropy and NegativeLogLikelihoodLoss forward training ops for opset-12, but that requires the ONNX submodule to point to the latest commit to have the latest and greatest ONNX spec!
- Reverse integrate changes from *.in.proto files in github ONNX repo.
- Regenerate csharp/test/Microsoft.ML.OnnxRuntime.Tests/OnnxMl.cs
- Disable ONNX tests that don't have op implementation for the latest opset.