pytorch/caffe2/python/layers
Andrey Malevich ec51f887bf Create only one instance of SigridTransform in DPerExample.
Summary:
Until now, the DPer example has been creating multiple copies of the transform
config in the net definition. As a result, I hit the ProtoBuf size limit (64 MB)
for certain Task requests (especially visible because of the ValidationPipeline
that I was adding).

After this diff, SigridTransforms are stored as one instance per machine for
training (or one instance per reading).

The plans for a simple SparseNN model differ in size by ~30 MB (even though the second model has a validation plan as well).

TODO: Apply the same logic to NNPreProc as well (its config is also pretty large).
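The core idea of the diff can be sketched as a config registry: instead of embedding a full copy of the transform config into every net that uses it, register the serialized config once and have all consumers reference the single shared instance. The class and blob names below are hypothetical illustrations, not the actual DPer/SigridTransforms API:

```python
class TransformRegistry:
    """Hypothetical sketch: deduplicate large transform configs so each
    distinct config is materialized once per machine instead of being
    copied into every net definition (which can blow past protobuf's
    64 MB message limit)."""

    def __init__(self):
        # serialized config bytes -> shared instance (blob) name
        self._instances = {}

    def get_or_create(self, config_bytes):
        # Reuse the existing shared instance if this exact config
        # was already registered; otherwise create a new name for it.
        if config_bytes not in self._instances:
            name = "shared_transform_%d" % len(self._instances)
            self._instances[config_bytes] = name
        return self._instances[config_bytes]


registry = TransformRegistry()
a = registry.get_or_create(b"big-serialized-config")
b = registry.get_or_create(b"big-serialized-config")  # same instance reused
assert a == b
```

With this scheme, N nets referencing the same config cost one copy of the config plus N small name references, rather than N full copies in the serialized plan.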

Reviewed By: dzhulgakov

Differential Revision: D4441441

fbshipit-source-id: 4452dd86a4dc49b2c7f5b7642f443aed5720b047
2017-01-22 19:29:16 -08:00
__init__.py fbsync. TODO: check if build files need update. 2016-11-15 00:00:46 -08:00
batch_lr_loss.py fbsync at f5a877 2016-11-18 15:41:06 -08:00
concat.py fbsync at f5a877 2016-11-18 15:41:06 -08:00
dot_product.py implement sparse nn using layers 2016-11-29 15:18:38 -08:00
expand_dims.py implement sparse nn using layers 2016-11-29 15:18:38 -08:00
fc.py fbsync at f5a877 2016-11-18 15:41:06 -08:00
layers.py Create only one instance of SigridTransform in DPerExample. 2017-01-22 19:29:16 -08:00
simple_operator_layers.py Fix random issues with some of the layers getting missing from registry. 2017-01-10 15:14:31 -08:00
sparse_lookup.py implement user-only metadata for input_record 2016-12-15 12:01:29 -08:00
sparse_to_dense.py implement user-only metadata for input_record 2016-12-15 12:01:29 -08:00
split.py implement sparse nn using layers 2016-11-29 15:18:38 -08:00
tags.py fbsync. TODO: check if build files need update. 2016-11-15 00:00:46 -08:00