pytorch/caffe2/python/layers
Kittipat Virochsiri 25b1221579 Allow scalar output in functional layer
Summary: Some operators, e.g., SoftmaxWithLoss, return scalar-typed tensors. This change allows us to use those ops through the functional layer without having to write a layer manually (see the usage sketch after the commit metadata).

Reviewed By: xianjiec, kennyhorror

Differential Revision: D4703982

fbshipit-source-id: f33969971c57fc037c9b44adb37af1caba4084b6
2017-03-14 15:32:47 -07:00
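The commit summary above motivates scalar outputs in the functional layer; below is a minimal, hedged sketch of how an operator with a scalar output might be invoked through it. It assumes caffe2's layer framework dispatch, where calling model.<OperatorName>(input_record, num_outputs) on a LayerModelHelper routes through functional.py; the model name, schema fields, and shapes are hypothetical and not taken from this diff.

```python
# A minimal sketch, assuming caffe2's LayerModelHelper / schema APIs and the
# model.<OperatorName>(input_record, num_outputs) dispatch into the functional
# layer.  Field names, shapes, and the model name are illustrative assumptions.
import numpy as np

from caffe2.python import schema
from caffe2.python.layer_model_helper import LayerModelHelper

# Hypothetical model whose input record carries per-example logits and labels.
model = LayerModelHelper(
    name='scalar_output_example',
    input_feature_schema=schema.Struct(
        ('logits', schema.Scalar((np.float32, (10, )))),
        ('labels', schema.Scalar(np.int32)),
    ),
    trainer_extra_schema=schema.Struct(),
)

# SoftmaxWithLoss produces two outputs: the softmax probabilities and the
# averaged loss, which is a scalar-typed tensor.  With scalar outputs allowed,
# the functional layer can expose both without a hand-written layer.
outputs = model.SoftmaxWithLoss(
    schema.Tuple(
        model.input_feature_schema.logits,
        model.input_feature_schema.labels,
    ),
    2,  # number of outputs requested from the wrapped operator
)
# `outputs` is assumed to be a record with two fields; the second (the loss)
# is the scalar-shaped output that this commit makes representable.
```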
__init__.py fbsync. TODO: check if build files need update. 2016-11-15 00:00:46 -08:00
batch_lr_loss.py NextScopedBlob with well-defined behavior and respect namescope 2017-02-16 17:16:36 -08:00
concat.py small change to concat layer to make tensor board vis nicer 2017-03-12 23:01:18 -07:00
dot_product.py NextScopedBlob with well-defined behavior and respect namescope 2017-02-16 17:16:36 -08:00
expand_dims.py NextScopedBlob with well-defined behavior and respect namescope 2017-02-16 17:16:36 -08:00
fc.py model and preprocessor can handle empty dense inputs 2017-02-22 11:19:15 -08:00
functional.py Allow scalar output in functional layer 2017-03-14 15:32:47 -07:00
layers.py Add SparseNN workflow for feed. 2017-03-01 11:02:38 -08:00
simple_operator_layers.py Fix random issues with some of the layers going missing from the registry. 2017-01-10 15:14:31 -08:00
sparse_lookup.py clean old unit test, add sum processor and sqrt pooling 2017-03-08 23:04:19 -08:00
sparse_to_dense.py NextScopedBlob with well-defined behavior and respect namescope 2017-02-16 17:16:36 -08:00
split.py NextScopedBlob with well-defined behavior and respect namescope 2017-02-16 17:16:36 -08:00
tags.py fbsync. TODO: check if build files need update. 2016-11-15 00:00:46 -08:00