mirror of
https://github.com/saymrwulf/pytorch.git
synced 2026-05-14 20:57:59 +00:00
Fix to modules.rst: indent line with activation functions (#139667)
At line 205, I believe the code `x = self.activations[act](x)` should be indented so that it is in the body of the for loop. Otherwise, applying the four linear modules has the same effect as applying a single linear module: the composition of linear maps is itself linear, so there is no point in having four of them. In other words, each layer of this network should have a nonlinearity.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139667 Approved by: https://github.com/malfet
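The collapse the message describes can be checked numerically. A minimal NumPy sketch (plain linear algebra, not the PyTorch example itself) showing that two stacked affine layers without an intervening nonlinearity fold into a single affine map:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "layers" with no nonlinearity between them.
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)

x = rng.normal(size=3)

# Applying the layers in sequence...
y_stacked = W2 @ (W1 @ x + b1) + b2

# ...equals one affine map with folded weight and bias:
# W2(W1 x + b1) + b2 = (W2 W1) x + (W2 b1 + b2).
W, b = W2 @ W1, W2 @ b1 + b2
y_single = W @ x + b

assert np.allclose(y_stacked, y_single)
```

This is exactly why the activation must sit inside the loop: without it, the four `Linear` modules express nothing a single `Linear` could not.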
This commit is contained in:
parent
103cbd7231
commit
81d077cca2
1 changed file with 1 addition and 1 deletion
@@ -202,7 +202,7 @@ register submodules from a list or dict:

     def forward(self, x, act):
         for linear in self.linears:
             x = linear(x)
-        x = self.activations[act](x)
+            x = self.activations[act](x)
         x = self.final(x)
         return x
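For context, a runnable sketch of the corrected module from modules.rst. The layer sizes, layer count, and activation names here are assumptions for illustration, not necessarily the exact values in the docs:

```python
import torch
import torch.nn as nn

class MyModule(nn.Module):
    def __init__(self):
        super().__init__()
        # Dimensions and count are illustrative; modules.rst may differ.
        self.linears = nn.ModuleList([nn.Linear(10, 10) for _ in range(4)])
        self.activations = nn.ModuleDict({
            'relu': nn.ReLU(),
            'lrelu': nn.LeakyReLU(),
        })
        self.final = nn.Linear(10, 10)

    def forward(self, x, act):
        for linear in self.linears:
            x = linear(x)
            x = self.activations[act](x)  # inside the loop, per the fix
        x = self.final(x)
        return x

m = MyModule()
out = m(torch.randn(2, 10), 'relu')
```

With the activation inside the loop, each of the four linear layers is followed by a nonlinearity, so the stack is genuinely deeper than a single linear map.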