* test
* [gwang] make cmake compile work
* [gwang] enable building apks
* some build update
* add simple sigmoid test android project and cmake
* add build.py
* refine and remove unused import lib
* address CR comments
* remove unnecessary files
* add README.md
* minor update
* remove
* minor change
* fix ci failure and minor update
* fix typo in project folder
* remove
* remove and minor update
* refine
* minor fix
* fix
* fix typo
* add gradle spotlessApply task to fix CI failure
* fix
* enable spotlessApply in build gradle
* revert some changes
* minor fix
* run spotlessApply for formatting
* address CR comments and fix CI version and format
* refine
* Refine
* address comments
* refine
* refine
* modify
* reformat
* resolve version conflicts
* minor update
* minor update
* address comments
* minor update
Co-authored-by: Guoyu Wang <wanggy@outlook.com>
* Install and use conda on ortmodule CI pipelines
* Update build script to install onnxruntime wheel before running unit tests
* Remove python 3.5 from install_python_deps
* Pinning deepspeed version to 0.3.15
* initial draft for kernel invoke api
* initial implementation of kernel invoker
* [eager] fix build on Mac
* [eager] increment input name in kernel invoker
* temp fix for type in eager mode
* use global default log manager
* rollback the previous commit since it breaks the linux build
* Revert "rollback the previous commit since it breaks the linux build"
This reverts commit 58c2c3423a.
* Eager Mode: fix linking on macOS
* optimizer_execution_frame: ignore unused lambda capture (model_path)
* fix link issue
* ORTInvoker: set correct input argument tensor element proto types
Do not set a type proto on output arguments to allow ORT to deduce them
* ORTInvoker: create only one logging manager
* Minor fix to set execution provider type correctly. (#7000)
Co-authored-by: Chandru Ramakrishnan <chandru-r@github.com>
* training fix
* support configuring output ML values in the execution frame, so we can use it to implement in-place updates
* Fix range loop error while building. (#7087)
Co-authored-by: Chandru Ramakrishnan <chandru-r@github.com>
* Conditionally link with nsync_cpp if not windows. (#7151)
Co-authored-by: Chandru Ramakrishnan <chandru-r@github.com>
* Fixed initialization order in ORT kernel invoker (#7342)
* Updated constructor of ort_kernel_invoker to take a logger.
* Changed linking order.
* Updated test.
* add inplace ut
* add build option
* Update include/onnxruntime/core/eager/ort_kernel_invoker.h
Co-authored-by: Derek Murray <Derek.Murray@microsoft.com>
* resolve comments in pr
* fix build break; merge from master
* fix build break
Co-authored-by: Cheng Tang <chenta@microsoft.com>
Co-authored-by: Aaron Bockover <abock@microsoft.com>
Co-authored-by: Chandru Ramakrishnan <41447659+chandru-r@users.noreply.github.com>
Co-authored-by: Chandru Ramakrishnan <chandru-r@github.com>
Co-authored-by: Derek Murray <Derek.Murray@microsoft.com>
* Include ORT format model conversion scripts and infrastructure in ORT python package.
- tweak existing script setup so it can be easily run directly and from the ORT python package
Add config file and readme for Android minimal build package
Update ORT Mobile documentation
Disable warning if 'all' optimizations are enabled but the NCHWc transformer is excluded (device-specific optimizations don't apply in this scenario, so the warning is moot).
* Address PR comments
* Pass cuda stream to thrust function to not use default stream.
In commit 299ace0, ORT was changed to not use the CUDA default stream.
* update amd_hipify.py
* remove unnecessary stream sync
Co-authored-by: Weixing Zhang <wezhan@microsoft.com>
* first attempt rocm training wheel
* modifications needed to python packaging pipeline for Rocm 4.1
* changes to not conflict with cuda
missed stage1 changes
remove package push
add option r to getopt
try again without python install
try again without python install
try again without python install
split pipelines and add back push to remote storage
try on cuda gpu pool
try again
try again
try running without az subscription set
try again on original pipeline
change pool
passing AMD Rocm whl on AMD-GPU pool
split rocm pipeline from cuda pipeline
remove comments
* try adding Rocm tests as well
* try with tests in place
* fix trailing ws
* add training data
* try again as root for tests
* use python3
* typo
* try to map video and render groups into container
* try again
* try again
* try to avoid yum error code
* make UID 1001
* try without yum downgrade
* define rocm_version=None
* remove CUDA related comments for Rocm Dockerfile
* Don't pin nightly torch/torchvision/torchtext versions as they expire (for now, nightly is required for Rocm 4.1)
* missed requirements-rocm.txt from last commit
* fix whitespace
* working on reorganizing js code for ortweb
* remove dup files
* move folder
* fix common references
* fix common es5
* add webpack to common
* split interface/impl
* use cjs for node
* add npmignore for common
* update sourcemap config for common
* update node
* adjust folder/path in CI and build
* update folder
* nit: readme
* add bundle for dev
* correct nodejs paths
* enable ORT_API_MANUAL_INIT
* set name for umd library
* correct name for commonjs export
* add priority into registerBackend()
* fix npm ci pwd
* update eslintrc
* revise code
* revert package-lock lockfileVersion 2->1
* update prebuild
* resolve comments
* update document
* revise eslint config
* update eslint for typescript rules
* revert changes by mistake in backend.ts
* add env
* resolve comments