Wang, Mengni fe463d4957
Support SmoothQuant for ORT static quantization (#16288)
### Description

Support SmoothQuant for ORT static quantization via the Intel Neural
Compressor.

> **Note:** Please use `neural-compressor==2.2` to try the SmoothQuant feature.

### Motivation and Context
For large language models (LLMs) with enormous parameter counts,
systematic activation outliers make quantization of activations
difficult. As a training-free post-training quantization (PTQ) solution,
SmoothQuant migrates this difficulty offline from activations to weights
with a mathematically equivalent transformation. Integrating SmoothQuant
into ORT quantization can benefit the accuracy of INT8 LLMs.
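To make the "mathematically equivalent transformation" concrete, here is a minimal NumPy sketch of the per-channel smoothing at the heart of SmoothQuant: each input channel of an activation is divided by a factor `s`, and that factor is folded into the corresponding weight row, so the matmul result is unchanged while activation outliers shrink. The array shapes, the migration-strength value `alpha = 0.5`, and the synthetic outlier channels are illustrative assumptions, not details taken from this PR.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic activation X (tokens x channels) with per-channel outliers,
# and weight W (channels x out_features). Shapes are illustrative.
X = rng.normal(size=(8, 4)) * np.array([1.0, 10.0, 0.5, 100.0])
W = rng.normal(size=(4, 3))

alpha = 0.5  # migration strength between activations and weights
# Per-channel smoothing factor: s_j = max|X_j|^alpha / max|W_j|^(1-alpha)
s = np.abs(X).max(axis=0) ** alpha / np.abs(W).max(axis=1) ** (1 - alpha)

X_s = X / s            # smoothed activations: outlier channels are shrunk
W_s = W * s[:, None]   # the same scale is folded into the weight rows

# Equivalence: (X / s) @ (diag(s) @ W) == X @ W, so model output is unchanged
assert np.allclose(X_s @ W_s, X @ W)
```

With `alpha = 0.5` the per-channel dynamic ranges of the smoothed activations and the scaled weights become equal, which is what makes both sides easier to quantize to INT8.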

---------

Signed-off-by: Mengni Wang <mengni.wang@intel.com>
2023-07-26 18:56:45 -07:00