Commit graph

1441 commits

Author SHA1 Message Date
Jian Chen
34cb293c6b
Remove unused ADO YML pipeline template (#15857)
### Description
Remove unused ADO YML pipeline template



### Motivation and Context
Clean up and reduce our codebase.
2023-05-09 09:15:04 -07:00
Yulong Wang
0457fd0b40
upgrade emsdk to 3.1.37 (#15817)
### Description
upgrade emsdk to 3.1.37

WIP branch to debug the mystery memory issue in the WebAssembly
multi-thread build.
2023-05-08 16:49:47 -07:00
Yi Zhang
045c623415
Make Nuget workflow easy to debug (#15808)
### Description
Fix the bug in #15693 



2023-05-08 20:53:08 +08:00
Nat Kershaw (MSFT)
5e9b42326c
Fix packaging pipeline for nightly builds (#15839) 2023-05-07 20:42:38 -07:00
PeixuanZuo
41457885e0
[ROCm] add rocm5.5 to python package pipeline (#15820)
add rocm5.5 to python packaging pipeline.

https://aiinfra.visualstudio.com/Lotus/_build/results?buildId=306082&view=results

TODO:
Remove version 5.2.3, 5.3.2 and 5.4 in the next PR.
2023-05-06 10:21:15 +08:00
Nat Kershaw (MSFT)
ed31e4b737
Add nuget release version suffix to support publishing rcs to nuget.org (#15791) 2023-05-05 18:18:24 -07:00
Adrian Lizarraga
45f5c27632
[QNN EP] Update default QNN SDK to version 2.10.0 (#15818)
### Description
- Updates the default QNN SDK for CI pipelines to version 2.10.0.
- Disables convolution op tests that run on the QNN CPU backend due to a
potential bug with QNN SDK 2.10.0.

### Motivation and Context
Allows us to test the latest QNN SDK in default CI pipeline runs.
2023-05-05 13:01:21 -07:00
Guenther Schmuelling
5a43828b3d
update ort extensions to 94142d8391c9791ec71c38336436319a2d4ac7a0 (#15688)
needed to get tokenizers/decode for whisper

---------

Co-authored-by: Shalva Mist <shalvamist@microsoft.com>
2023-05-05 09:48:07 -07:00
Scott McKay
d1b2b35cd2
Various fixes to the CSharp setup (#15782)
### Description
Various fixes to the CSharp setup
- fix warnings
- fix invalid tests
- update test sdk nuget package
  - enables testing on linux
  - fixes issue with some unit tests not running in CI
- run unit tests in linux pipeline using dotnet

### Motivation and Context
Unit tests weren't breaking the CIs for Windows and Linux builds when
they should have been.
2023-05-05 14:27:30 +10:00
Yulong Wang
4712009f8a
[js/web] add target ort.webgpu.min.js (#15780)
### Description
add target ort.webgpu.min.js

WebGPU is an experimental feature, so I don't want to put WebGPU into the
ort.min.js file. This change adds 2 ways for users to access ort-web
with WebGPU:
- using script tag: by URL
`https://cdn.jsdelivr.net/npm/onnxruntime-web@1.15.0/dist/ort.webgpu.min.js`
( this URL is not ready yet )
- using `import()`: use `import { Tensor, InferenceSession } from
'onnxruntime-web/webgpu';` ('onnxruntime-web/webgpu' instead of
'onnxruntime-web')
2023-05-04 10:05:39 -07:00
Yulong Wang
33d1372729
[wasm] revert emsdk to v3.1.19 (#15793)
### Description
The latest emsdk-generated multi-thread version sometimes crashes for an
unknown reason (error: memory access out of bounds).

We don't want to break existing ort-web users, so revert emsdk back to
3.1.19 (the same version that ort v1.14.0 uses).
2023-05-04 01:15:01 -07:00
Baiju Meswani
e464588a0e
Avoid generating training documentation during packaging (#15795) 2023-05-03 19:09:07 -07:00
Changming Sun
d53324d4a7
Update cmake version in a few places (#15775)
### Description
They were missed in #15707 , because they are not in common places for Dockerfiles.

Though this commit updated tools/ci_build/github/pai/rocm-ci-pipeline-env.Dockerfile, it won't automatically take effect. The image needs to be manually generated and pushed to a place, and before doing that our CMakeLists.txt also needs to be tweaked a little bit.
2023-05-02 22:56:28 -07:00
Yulong Wang
ef1f17f3dc
[wasm/JSEP] add threaded build to artifacts (#15777)
### Description
This is the first part of creating WebAssembly artifacts for the ort-web
WebGPU EP (wasm build).

Follow-up steps will consume the artifacts in the web build.
2023-05-02 17:53:44 -07:00
Jian Chen
abdd4f518a
Update TRT Windows Cuda 11.6 to 11.8 (#15746)
### Description
Update TRT Windows cuda 11.6 to 11.8



### Motivation and Context
We are adopting a newer version of CUDA system-wide.
2023-05-02 12:23:13 -07:00
Changming Sun
328cabb194
Download protoc from Github Release instead of Nuget (#15731)
### Description
Download protoc from a GitHub release instead of NuGet to avoid having
a dependency on nuget.exe on Linux

### Motivation and Context
To avoid having a dependency on nuget.exe on Linux. Many users' build
environments do not have nuget or dotnet.
2023-05-02 12:18:59 -07:00
Ashwini Khade
0ffae8073b
Creating Nuget and Android packages for Training (#15712)
### Description
This PR creates Nuget and Android packages for Training.


### Motivation and Context
These packages are intended to be released in ORT 1.15 to enable
On-Device Training Scenarios.

## Packaging Story for Learning On The Edge Release
### Nuget Packages:
1. New Native package -> **Microsoft.ML.OnnxRuntime.Training** (Native
package will contain binaries for: win-x86, win-x64, win-arm, win-arm64,
linux-x64, linux-arm64, android)
2. C# bindings will be added to existing package ->
**Microsoft.ML.OnnxRuntime.Managed**

### Android Package published to Maven:
1. New package for training (full build) ->
**onnxruntime-training-android-full-aar**

### Python Package published to PyPi:
1. Python bindings and offline tooling will be added to the existing ort
training package -> **onnxruntime-training**
2023-05-01 12:59:56 -07:00
Changming Sun
176161348e
Revert "make nuget workflow easy to debug. (#15693)" (#15744)
This reverts commit 53ff50d19a because it
made the nuget pipeline fail.
2023-04-29 19:05:01 -07:00
Jian Chen
ec2f038c6d
Update Nuget pipeline's Linux CUDA job to cuda 11.8 (#15516)
### Description
Fixed AB#14497
2023-04-29 07:38:18 -07:00
Adrian Lizarraga
191deb4235
[QNN EP] Nuget package (#15711)
Adds pipeline for QNN NuGet package (x64 and arm64).
2023-04-28 19:33:14 -07:00
Edward Chen
c415bc725f
Add 'name' key to xcodebuild 'destination' option. (#15690) 2023-04-28 08:52:18 -07:00
Changming Sun
5b826b1bc3
Update cmake version in Linux build (#15707)
### Description
All our Windows build pipelines already use cmake 3.26 except one
pipeline: QNN ARM64.
This PR does the same for the Linux build pipelines.

### Motivation and Context
This change is related to #15704 .
2023-04-27 20:02:33 -07:00
Adrian Lizarraga
be5c582e65
[QNN EP] Update to QNN SDK 2.9.0 (#15709)
### Description
- Update to QNN SDK 2.9.0 for QNN pipelines
- Temporarily disable warnings as errors for QNN Windows x64 pipeline
- Note that this pipeline did not previously run to completion. It also
currently does not run for pull requests.

### Motivation and Context
Need to update and test the latest available version of the QNN SDK.
2023-04-27 13:44:09 -07:00
Changming Sun
d3d232b047
Rename onnxruntime-Linux-CPU-2019 machine pool (#15691)
Rename the onnxruntime-Linux-CPU-2019 machine pool to
"onnxruntime-Ubuntu2004-AMD-CPU". The old one has an internal error and
is stuck; I cannot make any changes to it. It has been like this for
more than a week, so I created a new pool with the same settings but a
different name.
Also, move some android pipelines to
"onnxruntime-Linux-CPU-For-Android-CI" which uses a standard image from
https://github.com/actions/runner-images
2023-04-27 12:46:18 -07:00
yf711
2e1f92a986
Fix EP Perf pipeline (#15507)
### Description
* Update TensorRT 8.6 lib dependencies in dockerfile of TRT EP Perf
pipeline
* Avoid using `--allow_running_as_root` and build ORT with non-root user


### Motivation and Context
To fix the build issue on EP perf pipeline

Fixed
[AB#14615]
2023-04-27 10:09:14 -07:00
Yi Zhang
8cda1ffa28
Fix error in post-merge pipeline (#15717)
### Description
Get the right drive letter on Windows

### Motivation and Context
The build directory might be on drive C
2023-04-27 10:05:15 -07:00
Yi Zhang
53ff50d19a
make nuget workflow easy to debug. (#15693)
### Description
Add parameters so that some stages can use another run's intermediate
output.

### Motivation and Context
The nuget workflow has 38 stages in 4 layers.
We had to run the whole workflow from the beginning to test one stage.
It makes life easier to run only one stage for testing,
like:

![image](https://user-images.githubusercontent.com/16190118/234453721-e6e9a4bd-5e0b-4101-a18e-d5cf60615c9f.png)

### N.B.
In this PR, Nuget_Test_Linux_CPU, Nuget_Test_LinuxGPU and
Jar_Packaging_GPU are enabled as a first step,
so I can start moving tests from the Linux host to a container
2023-04-27 14:54:14 +08:00
yf711
28985c47b7
[TensorRT EP] Unleash opset16-17 onnx model tests (#15657)
### Description
In 2021 we restricted onnx node test CI execution to the opset 14-15
range for ORT-TRT, the latest opsets that TRT EP could support at the
time.

Update this range to opset 14-17 to improve the ORT-TRT unit test
coverage, as [Nvidia announced that TRT 8.6 supports
opset 17](https://github.com/onnx/onnx-tensorrt/blob/main/docs/operators.md)
2023-04-26 11:44:19 -07:00
yf711
d701dcd027
Fix Linux MultiGPU TensorRT CI (#15697)
### Description
* Reverting default TensorRT version to 8.5 as a temporary fix
  
* Apart from that, this PR temporarily leaves this CI as a place to
validate the use of TRT 8.5 with the latest ORT

### Context
* This CI pool is equipped with 2x Tesla M60 GPUs, which are no longer
supported by TensorRT 8.6.
* Currently, other CIs use single-T4 VMs, but there is no VM with
2x T4 or another suitable dual-GPU configuration in the range.
* Once we decide which VM instance this CI should migrate to, TRT 8.6
can be enabled on it.

* According to
[Nvidia](https://docs.nvidia.com/deeplearning/tensorrt/release-notes/index.html):
* TensorRT 8.5.3 was the last release supporting NVIDIA Kepler (SM 3.x)
and NVIDIA Maxwell (SM 5.x) devices. *These devices are no longer
supported in TensorRT 8.6*. NVIDIA Pascal (SM 6.x) devices are
deprecated in TensorRT 8.6.
2023-04-26 10:01:33 -07:00
Xavier Dupré
699c9a520b
Fix TVM pipelines (#15653)
### Description
Fix TVM pipelines by adding a missing dependency of TVM (attrs).
2023-04-26 09:55:05 +02:00
Yulong Wang
b98317b907
[js/webgpu] following up for JSEP/WebGPU code cleanup (#15666)
### Description
This PR resolves part of the non-critical comments from the code review
in #14579.

- use `USE_JSEP` instead of `USE_JS` in build definition to make it less
ambiguous
- remove unused util functions from util.ts
- fix transpose.h
- other misc fixes
2023-04-25 21:20:03 -07:00
Changming Sun
b1b6e5522e
Update cuda 11.6 to 11.8 for Windows pipelines (#15684)
### Description
Update cuda 11.6 to 11.8 for Windows pipelines
This PR is just for Windows CUDA pipelines. It does not include any
changes for Linux pipelines or TensorRT pipelines.

### Motivation and Context
It is a planned feature for the upcoming ONNX Runtime release.
2023-04-25 20:23:57 -07:00
Yulong Wang
14cc02c65c
[js/web] WebGPU backend via JSEP (#14579)
### Description
This change introduces the following new components into ONNX Runtime
Web:
- JavaScript Execution Provider (JSEP)
  - Asynchronous inference execution powered by Emscripten's Asyncify
- WebGPU backend implemented in TypeScript
  - initial implementation of kernels:
    - elementwise operators (22)
    - binary operators (5)
    - tensor: Shape, Reshape, Transpose, Gemm
    - nn: Conv, {Global}Maxpool, {Global}AveragePool


Code needs to be polished; still working on it.

## Q&A
What is JSEP?
> JSEP, aka JavaScript Execution Provider, is a new ONNX Runtime
execution provider that works specifically in web environments
(browsers). JSEP allows JavaScript code to kick in at various places
when ONNX Runtime runs inference on a model.

Why JSEP?
> JSEP is a hybrid-mode EP that contains both C/C++ and
TypeScript/JavaScript implementations. There are 2 strong reasons why we
introduce JSEP:
> 1. the C/C++ part helps JSEP leverage ONNX Runtime's capabilities
as much as possible, including graph transformers, optimizers, and the
ability to fall back to the CPU EP, while TypeScript/JavaScript makes
the kernel implementations much easier to develop and debug in the
browser.
> 2. the JavaScript API's requirement of asynchronous execution (e.g.
`buffer.mapAsync()`) makes it impossible to run `OrtRun()` in a
synchronous context (see the "async problem" section below). This is
resolved by using Emscripten's Asyncify.

What is WebGPU?
> WebGPU is the new GPU API available in browsers. It's one of
only 2 APIs currently available for accessing the GPU from the browser
(the other is WebGL).
> WebGPU is designed with more advanced and stronger features compared
to WebGL and is potentially the solution that offers the best GPU
performance currently available for model inference.

What is the async problem and why do we have it?
> The "async problem" is that you cannot call an async
function from a synchronous context. Think about the following C++ code:
> ```c
> // C-style declarations (API)
> typedef void (*ON_COMPLETE)(PVOID state, DATA *data);
> void read_data_from_file(FILEHANDLE file, ON_COMPLETE on_complete);
> 
> // implementation
> DATA * my_impl_read_data_from_file_sync(FILEHANDLE file) {
>   // how to implement?
> }
> ```
> The answer is: it's impossible to implement this function. Usually we
either find a sync version of the API, or launch a thread to call the
async function and sync-wait on the main thread. Unfortunately, in a
browser environment, neither is possible.
>
> WebGPU does not offer any synchronous API for data download (GPU
to CPU). This is the only operation that MUST be async. Since `OrtRun()`
eventually calls into DataTransfer to copy data from GPU to CPU,
and `OrtRun()` is a synchronous function, this cannot be done in the
normal way.
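> The same trap can be sketched in plain JavaScript (hypothetical
function names, not ORT code): a synchronous wrapper around an async API
returns before the async result exists.
> ```js
> // Sketch of the "async problem" with stub names (not ORT code):
> // the only available API is asynchronous, like WebGPU's buffer.mapAsync().
> function readDataAsync() {
>   return new Promise((resolve) => setTimeout(() => resolve([1, 2, 3]), 0));
> }
>
> // A synchronous wrapper cannot work: the promise has not settled when
> // this function returns, and there is no way to block and wait for it.
> function readDataSync() {
>   let result = null;
>   readDataAsync().then((data) => { result = data; });
>   return result; // always null -- the callback only runs after we return
> }
> ```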

What is Emscripten? How does the Asyncify feature resolve the problem?
> Emscripten is the C/C++ compiler for WebAssembly. It's what we use to
compile ORT and generate the WebAssembly artifacts that run in
browsers.
>
> Asyncify is a [compiler
feature](https://emscripten.org/docs/porting/asyncify.html) that allows
calling async functions from a synchronous context. In short, it
generates code to unwind and rewind the call stack to emulate async
execution. With this feature, we are able to call async functions
inside the `OrtRun()` call.

## Design Overview

**Inter-op**

JSEP does pretty much the same thing as any other EP. It exposes an
interface for interop with JavaScript, which is defined in
onnxruntime/wasm/js_internal_api.js:
```js
// init JSEP
Module["jsepInit"] = function (backend, alloc, free, copy, copyAsync, createKernel, releaseKernel, run) {
    Module.jsepBackend = backend;
    Module.jsepAlloc = alloc;
    Module.jsepFree = free;
    Module.jsepCopy = copy;
    Module.jsepCopyAsync = copyAsync;
    Module.jsepCreateKernel = createKernel;
    Module.jsepReleaseKernel = releaseKernel;
    Module.jsepRun = run;
};
```
This simple JavaScript snippet defines all of the language-boundary
functions required by JSEP to implement kernels and data
transfers in JavaScript inside ONNX Runtime:
- `jsepBackend`: assign the singleton backend object to the WebAssembly
module
- `jsepAlloc` and `jsepFree`: implementations of the data transfer's
Alloc() and Free()
- `jsepCopy`: synchronous copy (GPU to GPU, CPU to GPU)
- `jsepCopyAsync`: asynchronous copy (GPU to CPU)
- `jsepCreateKernel` and `jsepReleaseKernel`: maintain a corresponding
JS object matching the lifecycle of each Kernel in ORT
- `jsepRun`: OpKernel::Compute() should call into this

The abstraction above keeps the connections and dependencies between
C/C++ and TypeScript/JavaScript to a minimum.

**Resource Management**

The lifecycle of tensor data and kernels is managed by ORT (C/C++), but
the implementation is left to JavaScript. The JavaScript code is
responsible for implementing the callbacks correctly.

For WebGPU, the GPU data is managed by JavaScript using a singleton map
(tensor_data_id => GPUBuffer). The GPU pipeline is managed as a
singleton. Shaders are managed using a singleton map
(shader_key => gpu_program), where shader_key is generated from the
cache_key (op-specific, including attributes) and the input shapes.

**Data Transfer**

`js::DataTransfer::CopyTensor` is implemented to call either the
synchronous or the asynchronous copy callback, depending on whether the
destination is GPU. Emscripten's macro `EM_ASYNC_JS` is used to wrap the
async function so it can be called in a synchronous context.
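As a rough sketch (stub callbacks and names invented for illustration,
not the actual ORT code), the dispatch might look like:
```js
// Stub copy callbacks standing in for the registered jsepCopy/jsepCopyAsync.
const jsepCopy = (src, dst) => `sync ${src}->${dst}`;             // GPU->GPU, CPU->GPU
const jsepCopyAsync = async (src, dst) => `async ${src}->${dst}`; // GPU->CPU

// CopyTensor picks the callback based on the destination device:
// downloading to CPU is the one case that must be asynchronous.
function copyTensor(src, dst, dstIsGpu) {
  return dstIsGpu ? jsepCopy(src, dst) : jsepCopyAsync(src, dst);
}
```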

**Run Kernel in JS**

The Kernel class constructor calls `jsepCreateKernel()` once, with an
optional kernel-specific serialization to pass attributes into
JavaScript.

`Compute()` is implemented so that metadata serialization is
performed in a base class, and JavaScript code can access the data using
the Emscripten-specific builtin macro `EM_ASM_*`.
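The kernel lifecycle pairing can be mimicked with a plain map on the JS
side. This is a hand-written sketch with invented names, not the actual
ort-web implementation:
```js
// JS-side kernel registry keyed by the native kernel id (sketch only).
const kernels = new Map();

// Called once from the Kernel constructor with serialized attributes.
function jsepCreateKernel(kernelId, opType, attributes) {
  kernels.set(kernelId, { opType, attributes });
}

// Called from the Kernel destructor to match the ORT kernel lifecycle.
function jsepReleaseKernel(kernelId) {
  kernels.delete(kernelId);
}

// Called from OpKernel::Compute(); dispatches to the JS implementation.
function jsepRun(kernelId, inputs) {
  const kernel = kernels.get(kernelId);
  if (!kernel) throw new Error(`unknown kernel: ${kernelId}`);
  // ... look up and run the TypeScript kernel for kernel.opType ...
  return inputs; // stub: identity
}
```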

**Disabled Features**

Memory pattern is force-disabled, because WebGPU data is not backed by a
general memory model (where a buffer can be represented by
offset + size).
Concurrent run support is disabled: WebGPU is stateful and also involves
async function calls. Supporting concurrent runs would significantly
increase complexity, and we don't get any real benefit from it.

**Prefer Channels Last**

JSEP prefers channels-last and returns `DataLayout::NHWC` from
`GetPreferredLayout()`. This lets the graph transformers preprocess the
graph into a channels-last form so that more optimized WebGPU shaders
can be used.
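To illustrate what channels-last means for tensor data, here is a small
illustrative sketch (in ORT the layout conversion is done by graph
transformers, not user code like this):
```js
// Convert an NCHW-packed tensor (flat array) to NHWC (channels last).
function nchwToNhwc(data, [n, c, h, w]) {
  const out = new Array(data.length);
  for (let i = 0; i < n; i++)
    for (let j = 0; j < c; j++)
      for (let y = 0; y < h; y++)
        for (let x = 0; x < w; x++)
          out[((i * h + y) * w + x) * c + j] = data[((i * c + j) * h + y) * w + x];
  return out;
}

// 1x2x2x2 NCHW tensor: channel 0 = [1,2,3,4], channel 1 = [5,6,7,8]
console.log(nchwToNhwc([1, 2, 3, 4, 5, 6, 7, 8], [1, 2, 2, 2]));
// -> [1, 5, 2, 6, 3, 7, 4, 8]
```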

**Testing Code**

It's impossible to test JSEP directly because JSEP itself does not
contain any kernel implementations. However, it has the kernel
registrations, which need to work together with the corresponding
JavaScript code. There are unit tests that run onnx models from the
JavaScript API.

---------

Co-authored-by: Scott McKay <skottmckay@gmail.com>
2023-04-24 15:21:18 -07:00
George Wu
8dd32fed47
[TensorRT EP] avoid excessive library load/unload overhead when running unit tests. (#15639)
TensorRT will load/unload libraries as builder objects are created and
torn down. This happens for every single unit test, which leads to
excessive test execution time due to that overhead.
This overhead has steadily increased over the past few TensorRT versions
as the library objects get bigger, leading to 8 hours to run all the
unit tests. Nvidia suggests keeping a placeholder builder object around
to avoid this.
2023-04-24 14:43:13 -07:00
Rachel Guo
2cb3fb18b5
Integrate React Native E2E test with detox framework (#15133)
### Description

Integrate react native e2e test framework with detox.
https://wix.github.io/Detox/

Good build in CI:

https://dev.azure.com/onnxruntime/onnxruntime/_build/results?buildId=946695&view=results

### Motivation and Context

Write cross-platform end-to-end tests in JavaScript. 
Resolve flaky e2e tests in react native ci pipelines.

---------

Co-authored-by: rachguo <rachguo@rachguos-Mini.attlocal.net>
Co-authored-by: rachguo <rachguo@rachguos-Mac-mini.local>
2023-04-21 09:46:26 -07:00
Adrian Lizarraga
f3d04cd1be
[QNN EP] Update Windows ARM64 pipeline to use Visual Studio 2022 (#15607)
### Description
- Updates the QNN Windows ARM64 pipeline to use a new image with Visual
Studio 2022 (updated from VS 2019)
- Creates a new gtest fixture class that skips tests for the QNN CPU
backend if we detect that the QNN CPU backend is not
available/functional. The current windows arm64 vm does not support any
QNN backend.

### Motivation and Context
Visual Studio 2022 adds support for native arm64 compilation. This
pipeline will help catch any build regressions on Windows ARM64 w/ VS
2022.
2023-04-21 09:31:10 -07:00
Yi Zhang
84746a8efe
Revert "Retry the step of Start Android simulator (#15584)" (#15620)
This reverts commit 64b63921a2.


### Motivation and Context
From
https://dev.azure.com/onnxruntime/onnxruntime/_build/results?buildId=970086&view=logs&s=28fb2bf2-39c5-5feb-1887-4904233f6193&j=de302ec2-2305-57e0-e8c6-cd89c569f2a3
It's useless to rerun the step.
2023-04-21 08:33:18 -07:00
Edward Chen
4b74cb1741
Make docker command fail if bash command fails. (#15564)
Add `set -e` so that failing bash commands will cause the containing docker command to fail.
2023-04-20 13:38:58 -07:00
Baiju Meswani
11b0a18de6
Add support for cuda 11.8 and python 3.11 for training (#15548) 2023-04-20 12:56:45 -07:00
Scott McKay
446c478fbd
Add iOS Swift Package Manager support (#15297)
### Description
Add Swift Package Manager (SPM) support for ORT based on  #14621
- uses the existing objective-c bindings
- some re-organization of the directory structure was required but the
contents of the files are unchanged, apart from adjustments due to file
movements

Add tool for updating ORT native pod used in the SPM package
Update CIs to use ORT native pod from build, and build/test using SPM



### Motivation and Context
iOS developers use SPM as much as CocoaPods, so adding SPM support means
both are catered for.
2023-04-20 16:18:35 +10:00
Yi Zhang
64b63921a2
Retry the step of Start Android simulator (#15584)
### Description
Add a retry when there's a failure in `Start Android Simulator`.

### Motivation and Context
`Start Android Simulator` isn't stable enough and the pipeline would
hang.

We could find many instances in
https://dev.azure.com/onnxruntime/onnxruntime/_pipeline/analytics/stageawareoutcome?definitionId=188&contextType=build
2023-04-20 12:06:35 +08:00
Yi Zhang
5b6f79e79b
Improve windows build cache steps (#15537)
### Description
1. Split the dependencies' compilation cache from ort's.
2. Reduce cache generation on the merge branch.

### Motivation and Context
Reduce the pipeline cache stage.
2023-04-20 09:42:22 +08:00
Yi Zhang
573e4cf95f
[Fix] Python Packaging Pipeline exception. (#15568)
### Description
Supplement to #15299.

### Motivation and Context
It has been breaking the Python Packaging Pipeline since April 12.
2023-04-19 21:57:14 +08:00
liqun Fu
919d8f2660
update with onnx main (#14929) 2023-04-18 08:42:51 -07:00
Justin Chu
a36caba073
Bump ruff in CI (#15533)
### Description

Bump the ruff version in CI and fix new lint errors.

- This change enables the flake8-implicit-str-concat rules which helps
detect unintended string concatenations:
https://beta.ruff.rs/docs/rules/#flake8-implicit-str-concat-isc
- Update gitignore to include common python files that we want to
exclude.


### Motivation and Context

Code quality
2023-04-17 10:11:44 -07:00
Yi Zhang
4e1f75810c
Add compilation cache in 2 Linux CPU pipelines and refactor the Linux build step with cache (#15484)
### Description
1. Add compilation cache to Linux CPU ARM and Linux Minimal Build.
2. Consolidate the 4 Linux CPU build steps with cache into one.
3. Install ccache from source in the Linux ARM64 image.

### Motivation and Context
1. Enable more build steps with compilation cache.
2. Make it easier to add caching.

It could save 40 more minutes of compilation time on Linux ARM64.

https://dev.azure.com/onnxruntime/onnxruntime/_build/results?buildId=959619&view=logs&j=1e0830bb-fd74-5d0a-5029-1c63b4266d7b&t=75260ed7-7566-5947-2095-566660191920
2023-04-14 23:56:59 +08:00
Changming Sun
5bed8d0285
Disable XNNPack EP's tests in Windows CI pipeline (#15406)
### Description

1. Disable XNNPack EP's tests in the Windows CI pipeline.
The EP code has a known problem (memory alignment), but the problem does
not impact the usages we ship the code for. We currently only use the
XNNPack EP in mobile apps and on the web, and we already have pipelines
covering those usages. We need to prioritize fixing the bugs found in
those pipelines, and there are no resources to put on this Windows one.
We can re-enable the tests once we reach an agreement on how to fix the
memory alignment bug.

2. Delete anybuild.yml, which was for an already-deleted pipeline.
3. Move Windows CPU pipelines to AMD CPU machine pools, which are
cheaper.
4. Disable some qdq/int8 model tests that fail if the CPU doesn't have
Intel AVX512 8-bit instructions.
2023-04-13 12:19:32 -07:00
Yulong Wang
e1e8852213
[build/npm] dump ORT_COMMON_FROM from validation (#15475)
### Description
dump ORT_COMMON_FROM from validation

This writes the environment variable ORT_COMMON_FROM for later steps in
the release pipeline to use.
2023-04-12 13:48:19 -07:00
yf711
8cd5f3ad9c
[TensorRT EP] support TensorRT 8.6-EA (#15299)
### Description

<!-- Describe your changes. -->

* Integrate TRT 8.6EA on relevant Linux/Windows/pkg pipelines
  * Update onnx-tensorrt to 8.6
  * Add new dockerfiles for TRT 8.6 and clean old ones
* Update
[CGManifest](https://github.com/microsoft/onnxruntime/tree/main/cgmanifests)
files and ort build deps version
  * yml/script update
* Enable built-in TRT parser option on TRT related pipelines by default
* Exclude the test TopKOperator.Top3ExplicitAxisInfinity from TRT EP
tests (8.6-EA has an issue with the TopK operator)
2023-04-12 11:34:59 -07:00
Numfor Tiapo
e3086b2ed8
Move DML CI Pipeline to A10 (#15468)
This change moves the DML CI pipeline to the A10 machines and fixes or
disables tests that were failing after the move.

- Max error rate threshold was increased for Image Tests
- Some failing batch tests were disabled

---------

Co-authored-by: Changming Sun <chasun@microsoft.com>
2023-04-12 10:19:40 -07:00