pytorch/torch/nn/modules
Latest commit 8af39b7668 by George Qi: AdaptiveLogSoftmaxWithLoss no_batch_dim support (#69054)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/69054

Test Plan: Imported from OSS

Reviewed By: jbschlosser

Differential Revision: D33200166

Pulled By: george-qi

fbshipit-source-id: 9d953744351a25f372418d2a64e8402356d1e9b7
Committed: 2021-12-29 10:25:26 -08:00
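In practice, "no_batch_dim support" means the module now accepts a single unbatched sample in addition to the usual batched input. Below is a minimal sketch of both calling conventions for nn.AdaptiveLogSoftmaxWithLoss; the unbatched shapes are an assumption read off the commit title and the convention the other no-batch-dim changes in this directory follow.

```python
import torch
import torch.nn as nn

# Cutoffs split the 100 classes into frequency-ordered clusters.
asm = nn.AdaptiveLogSoftmaxWithLoss(in_features=16, n_classes=100, cutoffs=[10, 50])

# Batched call: input (N, in_features), target (N,).
x = torch.randn(4, 16)
y = torch.randint(0, 100, (4,))
out = asm(x, y)      # out.output: (4,) target log-probs; out.loss: scalar mean NLL

# Unbatched call (what #69054 adds, assuming the usual no-batch-dim convention):
# input (in_features,), 0-dim target.
x1 = torch.randn(16)
y1 = torch.tensor(3)
out1 = asm(x1, y1)   # out1.output: 0-dim log-prob; out1.loss: scalar
```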
File                Last commit                                                Date
__init__.py
_functions.py
activation.py       [nn] mha : no-batch-dim support (python) (#67176)          2021-12-14 13:21:21 -08:00
adaptive.py         AdaptiveLogSoftmaxWithLoss no_batch_dim support (#69054)   2021-12-29 10:25:26 -08:00
batchnorm.py
channelshuffle.py
container.py        Fixed wrong return type in ModuleList getitem (#69083)     2021-12-22 11:38:17 -08:00
conv.py
distance.py
dropout.py
flatten.py
fold.py
instancenorm.py     [nn] InstanceNorm : no batch dim for modules (#65323)      2021-12-22 18:00:36 -08:00
lazy.py
linear.py           Bilinear no_batch_dim (#69539)                             2021-12-20 09:44:07 -08:00
loss.py             GaussianNLLLoss no_batch_dim docs and testing (#69783)     2021-12-23 09:27:53 -08:00
module.py           Fix docs rendering for nn.Module.named_modules() (#70491)  2021-12-29 10:08:53 -08:00
normalization.py
padding.py          add extra_repr for nn.ZeroPad2d (#69206)                   2021-12-01 13:53:19 -08:00
pixelshuffle.py
pooling.py          FractionalMaxPool3d with no_batch_dim support (#69732)     2021-12-22 14:30:32 -08:00
rnn.py              [rnn,gru,lstm]cell : no batch dim (#70236)                 2021-12-29 09:27:32 -08:00
sparse.py
transformer.py      Transformer{DecoderLayer} : no batch dim (#70322)          2021-12-23 10:13:31 -08:00
upsampling.py
utils.py
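Several entries above land the same unbatched-input convention in other modules (mha in activation.py, Bilinear, InstanceNorm, GaussianNLLLoss, FractionalMaxPool3d, the RNN/GRU/LSTM cells, TransformerDecoderLayer). A short sketch of that pattern for a few of them, assuming the shapes match the ones current PyTorch documentation lists for unbatched inputs:

```python
import torch
import torch.nn as nn

# Bilinear (#69539): unbatched inputs (in1_features,) and (in2_features,).
bil = nn.Bilinear(in1_features=5, in2_features=7, out_features=3)
z = bil(torch.randn(5), torch.randn(7))          # shape (3,)

# LSTMCell (#70236): unbatched input (input_size,), hidden states (hidden_size,).
cell = nn.LSTMCell(input_size=10, hidden_size=20)
h, c = cell(torch.randn(10), (torch.randn(20), torch.randn(20)))  # each (20,)

# InstanceNorm1d (#65323): unbatched input (C, L) instead of (N, C, L).
inorm = nn.InstanceNorm1d(num_features=4)
z = inorm(torch.randn(4, 32))                    # shape (4, 32)
```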