ONNX Runtime
============

ONNX Runtime
enables high-performance evaluation of trained machine learning (ML)
models while keeping resource usage low.
Building on Microsoft's dedication to the
`Open Neural Network Exchange (ONNX) <https://onnx.ai/>`_
community, it supports traditional ML models as well
as Deep Learning algorithms in the
`ONNX-ML format <https://github.com/onnx/onnx/blob/master/docs/IR.md>`_.
Documentation is available at
`Python Bindings for ONNX Runtime <https://aka.ms/onnxruntime-python>`_.

Example
-------

The following example demonstrates an end-to-end workflow
in a very common scenario: a model is trained with *scikit-learn*
but has to run very fast in an optimized environment.
The model is converted into ONNX format, and ONNX Runtime
replaces *scikit-learn* to compute the predictions.

::

    # Train a model.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.ensemble import RandomForestClassifier
    iris = load_iris()
    X, y = iris.data, iris.target
    X_train, X_test, y_train, y_test = train_test_split(X, y)
    clr = RandomForestClassifier()
    clr.fit(X_train, y_train)

    # Convert into ONNX format with onnxmltools
    from onnxmltools import convert_sklearn
    from onnxmltools.utils import save_model
    from onnxmltools.convert.common.data_types import FloatTensorType
    # None leaves the batch dimension unconstrained, so the model
    # accepts any number of rows at inference time.
    initial_type = [('float_input', FloatTensorType([None, 4]))]
    onx = convert_sklearn(clr, initial_types=initial_type)
    save_model(onx, "rf_iris.onnx")

    # Compute the prediction with ONNX Runtime
    import onnxruntime as rt
    import numpy
    sess = rt.InferenceSession("rf_iris.onnx")
    input_name = sess.get_inputs()[0].name
    label_name = sess.get_outputs()[0].name
    pred_onx = sess.run([label_name], {input_name: X_test.astype(numpy.float32)})[0]
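Beyond retrieving a single named output, ``sess.run`` accepts ``None`` as the
output list and then returns every declared output at once. The following
sketch repeats the workflow above and checks that ONNX Runtime reproduces
the *scikit-learn* predictions; the fixed random seeds and the agreement
check are additions for illustration, not part of the original example::

    # A minimal sketch, assuming scikit-learn, onnxmltools, and
    # onnxruntime are installed; the model and file name mirror the
    # example above.
    import numpy
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.ensemble import RandomForestClassifier
    from onnxmltools import convert_sklearn
    from onnxmltools.utils import save_model
    from onnxmltools.convert.common.data_types import FloatTensorType
    import onnxruntime as rt

    # Train the same kind of model, with fixed seeds so the
    # comparison is reproducible.
    iris = load_iris()
    X_train, X_test, y_train, y_test = train_test_split(
        iris.data, iris.target, random_state=0)
    clr = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Convert and save; None leaves the batch dimension unconstrained.
    initial_type = [('float_input', FloatTensorType([None, 4]))]
    onx = convert_sklearn(clr, initial_types=initial_type)
    save_model(onx, "rf_iris.onnx")

    # Passing None as the output list retrieves every declared output:
    # here the predicted label followed by the per-class probabilities.
    sess = rt.InferenceSession("rf_iris.onnx")
    input_name = sess.get_inputs()[0].name
    label, probabilities = sess.run(
        None, {input_name: X_test.astype(numpy.float32)})

    # The ONNX predictions should match scikit-learn's.
    agreement = float((label == clr.predict(X_test)).mean())
    print("agreement:", agreement)

For classifiers, the converter exposes the predicted label and the class
probabilities as two separate outputs, which is why the single-output call
in the example above selects ``label_name`` explicitly.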

Changes
-------

0.1.5
^^^^^

GA release as part of open sourcing onnxruntime (patch to 0.1.4).

0.1.4
^^^^^

GA release as part of open sourcing onnxruntime.

0.1.3
^^^^^

Fixes a crash on machines which do not support AVX instructions.

0.1.2
^^^^^

First release on Ubuntu 16.04 for CPU and GPU with CUDA 9.1 and cuDNN 7.0.
Supports inference for deep learning model architectures such as AlexNet,
ResNet, Xception, VGG, Inception, and DenseNet, as well as standard linear
learners, standard ensemble learners, and transforms such as scaler and
imputer.