.. onnxruntime/docs/python/training/content.rst
.. Commit 867804bea1 (Thiago Crepaldi, 2021-03-22): Add auto doc gen for ORTModule API during CI build (#7046).
.. In addition to ORTModule auto documentation during packaging, this PR also updates golden numbers to fix CI.

This document describes the ORTModule PyTorch frontend API for the ONNX Runtime (aka ORT) training accelerator.

What is new
===========

Version 0.1
-----------

#. Initial version

Overview
========
The aim of ORTModule is to provide a drop-in replacement for one or more torch.nn.Module objects in a user's PyTorch program,
and to execute the forward and backward passes of those modules using ORT.
As a result, users can accelerate their training scripts incrementally with ORT,
without having to modify their training loops.
Users can apply standard PyTorch debugging techniques to convergence issues,
e.g. by probing the computed gradients on the model's parameters.
The following code example illustrates how ORTModule would be used in a user's training script,
in the simple case where the entire model can be offloaded to ONNX Runtime:
.. code-block:: python

    # Original PyTorch model
    class NeuralNet(torch.nn.Module):
        def __init__(self, input_size, hidden_size, num_classes):
            ...

        def forward(self, x):
            ...

    model = NeuralNet(input_size=784, hidden_size=500, num_classes=10)
    model = ORTModule(model)  # Only change to original PyTorch script
    criterion = torch.nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)

    # Training loop is unchanged
    for data, target in data_loader:
        optimizer.zero_grad()
        output = model(data)
        loss = criterion(output, target)
        loss.backward()
        optimizer.step()
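The ``...`` bodies above are intentionally elided. As a concrete illustration, here is one hypothetical way the model could be written out and trained for a single step in plain PyTorch. The two-layer architecture and the random batch are assumptions for illustration, not part of the original; the ``ORTModule`` wrapping line is omitted here so the sketch runs even where onnxruntime is not installed.

```python
import torch

# Hypothetical fleshed-out version of the NeuralNet sketched above.
class NeuralNet(torch.nn.Module):
    def __init__(self, input_size, hidden_size, num_classes):
        super().__init__()
        self.fc1 = torch.nn.Linear(input_size, hidden_size)
        self.relu = torch.nn.ReLU()
        self.fc2 = torch.nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        return self.fc2(self.relu(self.fc1(x)))

model = NeuralNet(input_size=784, hidden_size=500, num_classes=10)
# In an ORT-accelerated script, this is where `model = ORTModule(model)` would go.

criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)

# One training step on a random batch (stand-in for a real data_loader).
data = torch.randn(32, 784)
target = torch.randint(0, 10, (32,))
optimizer.zero_grad()
loss = criterion(model(data), target)
loss.backward()
optimizer.step()
```

Because ORTModule preserves the ``torch.nn.Module`` interface, the same step runs unchanged whether or not the wrapping line is present.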
API
===

.. automodule:: onnxruntime.training.ortmodule
    :members:
    :show-inheritance:
    :member-order: bysource

.. :inherited-members:
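For the ``automodule`` directive above to render, the Sphinx build must have the autodoc extension enabled and ``onnxruntime`` importable at build time. A minimal ``conf.py`` fragment might look like this (the exact extension list in the project's real configuration may differ):

```python
# conf.py (fragment, illustrative)
# sphinx.ext.autodoc pulls docstrings from onnxruntime.training.ortmodule,
# so the package must be importable when the docs are built.
extensions = [
    "sphinx.ext.autodoc",
]
```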