onnxruntime/tools/ci_build/github/linux/docker/scripts
Commit fe463d4957 by Mengni Wang
Support SmoothQuant for ORT static quantization (#16288)
### Description

Support SmoothQuant for ORT static quantization via Intel Neural
Compressor.

> Note: Please use neural-compressor==2.2 to try the SmoothQuant function.
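A sketch of how SmoothQuant might be enabled through ORT static quantization after this change. The `extra_options` keys (`SmoothQuant`, `SmoothQuantAlpha`), the model path, and the data-reader details are assumptions based on this PR's description, not a verified API reference; consult the ORT quantization documentation for the exact names:

```python
# Hypothetical usage sketch; paths and extra_options keys are assumptions.
import numpy as np
from onnxruntime.quantization import (
    CalibrationDataReader,
    QuantType,
    quantize_static,
)

class RandomDataReader(CalibrationDataReader):
    """Toy calibration reader feeding random tensors (placeholder only)."""

    def __init__(self, input_name, shape, num_batches=8):
        self._batches = iter(
            [{input_name: np.random.rand(*shape).astype(np.float32)}
             for _ in range(num_batches)]
        )

    def get_next(self):
        return next(self._batches, None)

# "model.onnx" and the input name/shape are placeholders for a real model.
quantize_static(
    model_input="model.onnx",
    model_output="model_int8.onnx",
    calibration_data_reader=RandomDataReader("input", (1, 3, 224, 224)),
    weight_type=QuantType.QInt8,
    extra_options={
        "SmoothQuant": True,       # assumed key enabling the INC-backed pass
        "SmoothQuantAlpha": 0.5,   # assumed migration-strength knob
    },
)
```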

### Motivation and Context
For large language models (LLMs) with enormous parameter counts,
systematic outliers make activation quantization difficult. As a
training-free post-training quantization (PTQ) solution, SmoothQuant
migrates this difficulty offline from activations to weights via a
mathematically equivalent transformation. Integrating SmoothQuant into
ORT quantization can improve the accuracy of INT8 LLMs.
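The equivalent transformation described above can be illustrated with a minimal NumPy sketch (toy shapes and `alpha = 0.5` are assumptions for illustration; this is not the ORT/INC implementation). For a linear layer `Y = X @ W`, per-channel scales `s` divide the activations and multiply the corresponding weight rows, leaving the output unchanged while shrinking activation outliers:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))   # activations (batch x channels)
X[:, 3] *= 50.0               # inject a systematic outlier channel
W = rng.normal(size=(8, 5))   # weights (channels x out_features)

# Per-channel smoothing scales; alpha balances how much difficulty
# is migrated from activations to weights.
alpha = 0.5
s = (np.abs(X).max(axis=0) ** alpha) / (np.abs(W).max(axis=1) ** (1 - alpha))

X_smooth = X / s              # activation outliers shrink
W_smooth = s[:, None] * W     # weights absorb the scales

Y = X @ W
Y_smooth = X_smooth @ W_smooth  # mathematically equivalent output

assert np.allclose(Y, Y_smooth)
assert np.abs(X_smooth).max() < np.abs(X).max()
```

After smoothing, both activations and weights have more moderate dynamic ranges, which is what makes subsequent INT8 quantization less lossy.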

---------

Signed-off-by: Mengni Wang <mengni.wang@intel.com>
2023-07-26 18:56:45 -07:00
manylinux/   Support SmoothQuant for ORT static quantization (#16288)   2023-07-26 18:56:45 -07:00
training/    Fix orttraining-ortmodule-distributed CI (#16569)          2023-07-03 13:18:59 +08:00
install-protobuf.sh
install_ninja.sh
install_openmpi.sh
install_os_deps.sh
install_protobuf.sh
install_python_deps.sh
install_rust.sh
install_ubuntu.sh
requirements.txt