onnxruntime/tools/python/run_CIs_for_external_pr.py
Ashrit Shetty 4b5b5f7101
Update win-ort-main to tip main 250123 (#23473)
### Description
This PR is to update the win-ort-main branch to the tip main branch as
of 2025-01-23.

### PR List
ddf0d377a7 [QNN EP] Add LoggingManager::HasDefaultLogger() to provider
bridge API (#23467)
05fbbdf91f [QNN EP] Make QNN EP a shared library (#23120)
1336566d7f Add custom vcpkg ports (#23456)
2e1173c411 Update the compile flags for vcpkg packages (#23455)
1f628a9858 [Mobile] Add BrowserStack Android MAUI Test (#23383)
009cae0ec8 [js/webgpu] Optimize ConvTranspose (Continue) (#23429)
04a4a694cb Use onnx_protobuf.h to suppress some GCC warnings (#23453)
2e3b62b4b0 Suppress some strict-aliasing related warnings in WebGPU EP
(#23454)
b708f9b1dc Bump ruff from 0.9.1 to 0.9.2 (#23427)
c0afc66b2a [WebNN] Remove workarounds for TFLite backend (#23406)
8a821ff7f9 Bump vite from 6.0.7 to 6.0.11 in
/js/web/test/e2e/exports/testcases/vite-default (#23446)
220c1a203e Make ORT and Dawn use the same protobuf/abseil source code
(#23447)
b7b5792147 Change MacOS-13 to ubuntu on for
android-java-api-aar-test.yml. (#23444)
19d0d2a30f WIP: Dp4MatMulNBits accuracy level 4 matmul for WebGPU EP
(#23365)
95b8effbc4 [QNN EP]: Clean up QNN logging resources if an error occurs
during initialization (#23435)
626134c5b5 Bump clang-format from 19.1.6 to 19.1.7 (#23428)
0cf975301f Fix eigen external deps (#23439)
f9440aedce Moving RN_CI Android Testing to Linux (#23422)
1aa5902ff4 [QNN EP] workaround for QNN validation bug for Tanh with
uint16 quantized output (#23432)
7f5582a0e2 Seperate RN andriod and IOS into 2 separated Stages. (#23400)
73deac2e7f Implement some missing element wise Add/Sub/Mul/Div/Neg
operations for CPU and CUDA EPs (#23090)
949fe42af4 Upgrade Java version from react-native/android to Java 17
(#23066)
0892c23463 Update Qnn SDK default version to 2.30 (#23411)
94c099bcec Fix type cast build error (#23423)
d633e571d1 [WebNN EP] Fix AddInitializersToSkip issues (#23354)
e988ef00e2 [QNN EP] Fix regression for MatMul with two quantized/dynamic
uint16 inputs (#23419)
7538795f6b Update onnxruntime binary size checks ci pipeline's docker
image (#23405)
6c5ea41cad Revert "[QNN EP] Clean up correctly from a partial setup
(#23320)" (#23420)
e866804bbe Enable comprehension simplification in ruff rules (#23414)
0a5f1f392c bugfix: string_view of invalid memory (#23417)
4cc38e0277 fix crash when first input of BatchNormalization is 1-D
(#23387)
033441487f Target py310 and modernize codebase with ruff (#23401)
87341ac010 [QNN EP] Fix segfault when unregistering HTP shared memory
handles (#23402)

### Motivation and Context
This update includes the change to make QNN-EP a shared library.

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: Adrian Lizarraga <adlizarraga@microsoft.com>
Co-authored-by: Justin Chu <justinchuby@users.noreply.github.com>
Co-authored-by: Yulong Wang <7679871+fs-eire@users.noreply.github.com>
Co-authored-by: Edward Chen <18449977+edgchen1@users.noreply.github.com>
Co-authored-by: Changming Sun <chasun@microsoft.com>
Co-authored-by: Peishen Yan <peishen.yan@intel.com>
Co-authored-by: Tianlei Wu <tlwu@microsoft.com>
Co-authored-by: Hector Li <hecli@microsoft.com>
Co-authored-by: Jian Chen <cjian@microsoft.com>
Co-authored-by: Alexis Tsogias <1114095+Zyrin@users.noreply.github.com>
Co-authored-by: junchao-zhao <68935141+junchao-loongson@users.noreply.github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: sushraja-msft <44513542+sushraja-msft@users.noreply.github.com>
Co-authored-by: Wanming Lin <wanming.lin@intel.com>
Co-authored-by: Jiajia Qin <jiajiaqin@microsoft.com>
Co-authored-by: Caroline Zhu <wolfivyaura@gmail.com>
2025-01-23 09:12:03 -08:00


#!/usr/bin/env python3
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.

from __future__ import annotations

import argparse
import json
import os
import subprocess
import sys


def get_pipeline_names():
    # Current pipelines. These change semi-frequently and may need updating.
    # There is no easy way to get the list of "required" pipelines using `azp` before they are run,
    # so we need to maintain this list manually.
    # NOTE: This list is also used by run_CIs_for_branch.py
    pipelines = [
        # windows
        "Windows ARM64 QNN CI Pipeline",
        "Windows x64 QNN CI Pipeline",
        "Windows CPU CI Pipeline",
        "Windows GPU CUDA CI Pipeline",
        "Windows GPU DML CI Pipeline",
        "Windows GPU Doc Gen CI Pipeline",
        "Windows GPU TensorRT CI Pipeline",
        "ONNX Runtime Web CI Pipeline",
        # linux
        "Linux CPU CI Pipeline",
        "Linux CPU Minimal Build E2E CI Pipeline",
        "Linux GPU CI Pipeline",
        "Linux GPU TensorRT CI Pipeline",
        "Linux OpenVINO CI Pipeline",
        "Linux QNN CI Pipeline",
        # mac
        "MacOS CI Pipeline",
        # training
        "orttraining-linux-ci-pipeline",
        "orttraining-linux-gpu-ci-pipeline",
        # checks
        "onnxruntime-binary-size-checks-ci-pipeline",
        # big models
        "Big Models",
        # android
        "Linux Android Emulator QNN CI Pipeline",
        # not currently required, but running these like internal PRs.
        "Android CI Pipeline",
        "iOS CI Pipeline",
        "ONNX Runtime React Native CI Pipeline",
        "CoreML CI Pipeline",
        "Linux DNNL CI Pipeline",
        "Linux MIGraphX CI Pipeline",
        "Linux ROCm CI Pipeline",
    ]

    return pipelines


def _parse_args():
    parser = argparse.ArgumentParser(
        os.path.basename(__file__),
        formatter_class=argparse.RawDescriptionHelpFormatter,
        description="""Trigger CIs running for the specified pull request.
Requires the GitHub CLI to be installed. See https://github.com/cli/cli#installation for details.
After installation you will also need to set up an auth token to access the ONNX Runtime repository by running
`gh auth login`. The easiest way is to run that from a directory in your local copy of the repo.
""",
    )

    parser.add_argument("pr", help="Specify the pull request ID.")

    args = parser.parse_args()
    return args


def run_gh_pr_command(command: list[str], check: bool = True):
    try:
        return subprocess.run(["gh", "pr", *command], capture_output=True, text=True, check=check)
    except subprocess.CalledProcessError as cpe:
        print(cpe)
        print(cpe.stderr)
        sys.exit(-1)


def main():
    args = _parse_args()
    pr_id = args.pr

    # validate PR
    print("Checking PR is open")
    gh_out = run_gh_pr_command(["view", "--json", "state", pr_id])
    info = json.loads(gh_out.stdout)
    if "state" not in info:
        print(f"Could not get current state from `gh pr view` response of\n{gh_out.stdout}")
        sys.exit(-1)

    if info["state"] != "OPEN":
        print(f"PR {pr_id} is not OPEN. Currently in state {info['state']}.")
        sys.exit(0)

    # This will return CIs that have run previously but not passed. We filter the CIs to run based on this, so it's
    # fine for the initial response to have no info in it.
    # `gh pr checks` exits with non-zero exit code when failures in pipeline exist, so we set `check` to False.
    print("Checking for pipelines that have passed.")
    gh_out = run_gh_pr_command(["checks", pr_id, "--required"], check=False)

    # output format is a tab-separated list of columns:
    # (pipeline name) "\t" (status) "\t" (ran time) "\t" (url)
    checked_pipelines = [
        columns[0]
        for columns in (line.strip().split("\t") for line in gh_out.stdout.split("\n"))
        if len(columns) == 4 and columns[1] == "pass"
    ]

    pipelines = get_pipeline_names()

    # remove pipelines that have already run successfully
    pipelines = [p for p in pipelines if p not in checked_pipelines]

    print("Pipelines to run:")
    for p in pipelines:
        print("\t" + p)

    # azp run is limited to 10 pipelines at a time
    max_pipelines_per_comment = 10
    start = 0
    num_pipelines = len(pipelines)

    print("Adding azp run commands")
    while start < num_pipelines:
        end = min(start + max_pipelines_per_comment, num_pipelines)
        run_gh_pr_command(["comment", pr_id, "--body", f"/azp run {','.join(pipelines[start:end])}"])
        start += max_pipelines_per_comment

    print(f"Done. Check status at https://github.com/microsoft/onnxruntime/pull/{pr_id}/checks")


if __name__ == "__main__":
    main()
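The script's two core steps are parsing the tab-separated output of `gh pr checks` and batching the remaining pipelines into `/azp run` comments of at most 10. A minimal, standalone sketch of both steps (the sample check output and pipeline names below are hypothetical, not real CI data):

```python
def parse_passed_checks(gh_checks_output: str) -> list[str]:
    """Extract pipeline names whose status column reads 'pass'.

    Each line of `gh pr checks` output has four tab-separated columns:
    (pipeline name) "\t" (status) "\t" (ran time) "\t" (url)
    """
    return [
        cols[0]
        for cols in (line.strip().split("\t") for line in gh_checks_output.split("\n"))
        if len(cols) == 4 and cols[1] == "pass"
    ]


def batch_azp_commands(pipelines: list[str], batch_size: int = 10) -> list[str]:
    """Build one `/azp run` comment body per batch of at most `batch_size` pipelines."""
    return [
        "/azp run " + ",".join(pipelines[start : start + batch_size])
        for start in range(0, len(pipelines), batch_size)
    ]


# Hypothetical sample of `gh pr checks` output.
sample = (
    "Linux CPU CI Pipeline\tpass\t1m2s\thttps://example.invalid/1\n"
    "Windows CPU CI Pipeline\tfail\t2m10s\thttps://example.invalid/2\n"
)
print(parse_passed_checks(sample))  # ['Linux CPU CI Pipeline']
print(batch_azp_commands([f"Pipeline {i}" for i in range(12)]))  # two comment bodies
```

Batching matters because Azure Pipelines rejects an `/azp run` comment that names more than 10 pipelines, so a single comment cannot trigger the full list above.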