Fix to modules.rst: indent line with activation functions (#139667)

At line 205, I believe the code `x = self.activations[act](x)` should be indented so that it is in the body of the for loop. Otherwise, applying the four linear modules in sequence has the same effect as applying a single linear module: a composition of linear maps is itself just a linear map, so there is no point in having four of them. In other words, each layer of this network should have a nonlinearity.
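The collapse is easy to verify numerically. Here is a minimal sketch (my own illustration, not code from the docs) that folds two 4x4 `nn.Linear` modules into one equivalent layer and checks that the outputs match:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
f1, f2 = nn.Linear(4, 4), nn.Linear(4, 4)

# Fold the two layers into one equivalent linear map by hand:
# f2(f1(x)) = W2 @ (W1 @ x + b1) + b2 = (W2 @ W1) @ x + (W2 @ b1 + b2)
combined = nn.Linear(4, 4)
with torch.no_grad():
    combined.weight.copy_(f2.weight @ f1.weight)
    combined.bias.copy_(f2.weight @ f1.bias + f2.bias)

x = torch.randn(8, 4)
print(torch.allclose(f2(f1(x)), combined(x), atol=1e-6))  # prints True
```

Inserting a nonlinearity between the two layers breaks this equivalence, which is exactly what the indentation fix restores.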

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139667
Approved by: https://github.com/malfet
Author: John MacCormick
Date: 2024-11-08 01:12:52 +00:00
Committed by: PyTorch MergeBot
Parent: 103cbd7231
Commit: 81d077cca2

@@ -202,7 +202,7 @@ register submodules from a list or dict:
     def forward(self, x, act):
       for linear in self.linears:
         x = linear(x)
-      x = self.activations[act](x)
+        x = self.activations[act](x)
       x = self.final(x)
       return x
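For context, here is a self-contained version of the corrected example. The class name `DynamicNet`, the 4-unit layer widths, and the `relu`/`lrelu` activation keys follow my reading of the surrounding `ModuleList`/`ModuleDict` illustration in modules.rst, so treat this as a paraphrase rather than a verbatim excerpt:

```python
import torch
import torch.nn as nn

class DynamicNet(nn.Module):
    def __init__(self, num_layers):
        super().__init__()
        self.linears = nn.ModuleList(
            [nn.Linear(4, 4) for _ in range(num_layers)])
        self.activations = nn.ModuleDict({
            'relu': nn.ReLU(),
            'lrelu': nn.LeakyReLU(),
        })
        self.final = nn.Linear(4, 1)

    def forward(self, x, act):
        for linear in self.linears:
            x = linear(x)
            # Now inside the loop: one nonlinearity per linear layer.
            x = self.activations[act](x)
        x = self.final(x)
        return x

net = DynamicNet(num_layers=4)
out = net(torch.randn(8, 4), act='relu')
```

With the activation inside the loop, each of the `num_layers` linear layers is followed by its own nonlinearity, so the added depth actually buys expressive power.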