* try to run inside 4.3.1 container
* no \ in container run command
* remove networking options
* try with adding video render groups
* add job to build docker image
* try without 1st stage
* change alpha, beta to float
* try adding service connection
* retain huggingface directory
* static video and render gid
* use runtime expression for variables
* install torch-ort
* pin sacrebleu==1.5.1
* update curves for rocm 4.3.1
* try again
* disable determinism and only check the tail of the loss curve, with a much larger threshold of 0.05
* disable RoBERTa due to high run variability on ROCm 4.3.1
* put reduction unit tests back in
* Globally enable ms-experimental ops
* change meaning of ms_experimental to mean *all* ms_experimental ops. Some experimental ops, like audio ops, will still be enabled globally without this flag.
* add cmath
* add cmath to signal_defs.cc
* move audio back into experimental, verify on mac
* remove experimental from mac builds
Co-authored-by: Sheil Kumar <sheilk@microsoft.com>
* install protobuf from source
* fix rm command in Dockerfile
* fix options on rm command
* fix cd into protobuf source directory
* try again
* remove strip step
* debug list the files
* ls on /usr
* more debug
* more debug
* adjust LD_LIBRARY_PATH
* try remove protobuf before ORT build
Add documentation for these C API functions:
RunOptionsGetRunLogSeverityLevel
RunOptionsGetRunLogVerbosityLevel
RunOptionsGetRunTag
RunOptionsSetRunLogSeverityLevel
RunOptionsSetRunLogVerbosityLevel
RunOptionsSetRunTag
Update some existing documentation.
* initial change for eager/ortmodule integration
* update to latest pytorch api
* add test model; fix torch version issue
* fix comments in pr
* fix python test break
* fix api change
* fix comments in PR
* pass device into the fw function
Co-authored-by: Chen Fu <fuchen@microsoft.com>
This bug was introduced in PR #8716.
When restricting cpuinfo to only known platforms, the compilation flag change was incomplete, which accidentally turned off hybrid core detection on ARM systems.
This PR fixes that bug.
(1) Attention fusion for the GPT-2 model from Megatron.
(2) Update symbolic shape inference of Attention to support 4D mask.
(3) Add an option in save_model_to_file to save external data in one file or not, and warn when external data already exists.
(4) Fix deprecation: logger.warn => logger.warning
(5) Add model loader to test model without external data
(6) Add an optimize_by_fusion API, and topologically sort the graph after optimization.
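The post-optimization topological sort in item (6) can be sketched with Python's stdlib `graphlib`; the node names and dependency map below are made up for illustration and are not ORT's optimizer internals.

```python
from graphlib import TopologicalSorter

# Toy node-dependency map after fusion: each node lists the nodes whose
# outputs it consumes (names are illustrative only).
deps = {
    "Attention": {"LayerNorm"},  # fused Attention consumes LayerNorm output
    "MatMul":    {"Attention"},
    "LayerNorm": set(),
}

# static_order() yields producers before consumers.
order = list(TopologicalSorter(deps).static_order())
print(order)  # ['LayerNorm', 'Attention', 'MatMul']
```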
* Update to CUDA 11.4 and TensorRT 8.0.3.4
* update trt pool, remove cudnn from setup_env_gpu.bat
* revert pool
* test gpu package pipeline on t4
* back out changes
* back out changes
Co-authored-by: George Wu <jywu@microsoft.com>
* updates for picking pnnx commit
* add tests filter to c# tests
* plus test fixes
* fix versioning for contrib ops
* fix tests
* test filter for optional ops
* more versioning related updates
* fix test
* fix layernorm spec
* more updates
* update docs
* add more test filters
* more filters
* update binary size threshold
* update docs
* draft - enable model local function
* enable model local functions in ORT
* update to latest rel onnx commit
* plus tests
* plus more updates
* plus updates
* test updates
* Fix for nested functions + shape inference
* plus bug fix and updates per review
* plus fixes per review
* plus test updates
* plus updates per review
* plus fixes
* fix a test