Summary:
Generalize SpatialBatchNorm CPU Op to compute Spatial batch normalization for
1D, 2D & 3D input tensors.
Reviewed By: dutran
Differential Revision: D5043563
fbshipit-source-id: 7fcb933a628dd47f13aa622f63601a87382f09cd
Summary:
Added several features to the ImageInputOp:
- bounding box (per image as well as default for the operator). For per-image, it
only works in Caffe2 format and is passed as the third tensor in the form
(ymin, xmin, height, width). For the operator, pass bounding_xmin, bounding_ymin,
bounding_width and bounding_height as parameters.
- per-channel mean/std. You can use the usual mean/std to pass a single
value to be used for all channels or also pass mean_per_channel and std_per_channel
to specify different values per channel. Order of channels is BGR.
- A minimum size parameter that can be specified instead of the scale parameter.
The minsize parameter will only scale the image if it is smaller than required.
This differs from scale which will scale up as well as down. You can only specify
one of scale or minsize.
Added a test case to test some of the features
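The scale-vs-minsize distinction can be sketched in pure Python; this is an illustrative model of the resize policy described above, not the actual ImageInputOp code:

```python
# Illustrative sketch: `scale` always resizes the shorter side to the
# target (up or down), while `minsize` only scales up images whose
# shorter side is below the minimum. Only one of the two may be given.
def target_size(height, width, scale=None, minsize=None):
    assert (scale is None) != (minsize is None), \
        "specify exactly one of scale or minsize"
    short = min(height, width)
    if scale is not None:
        factor = float(scale) / short
    else:
        factor = float(minsize) / short if short < minsize else 1.0
    return int(round(height * factor)), int(round(width * factor))
```

For a 100x200 image, `scale=50` shrinks it, while `minsize=50` leaves it untouched because the shorter side is already large enough.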
Differential Revision: D4874988
fbshipit-source-id: 437191052a46e9916defe8b100d7cc7864373f61
Summary:
In the Dper utility, add a function `load_parameters_from_model_init_options` to
allow initializing parameters from pretrained models.
Reviewed By: xianjiec
Differential Revision: D4926075
fbshipit-source-id: 5ab563140b5b072c9ed076bbba1aca43e71c6ac5
Summary: Relax requirement on token uniqueness since a few use cases broke after the uniqueness requirement was added in a previous diff.
Reviewed By: kittipatv
Differential Revision: D5034132
fbshipit-source-id: 327eb065923e6ea152a360324316f81b7fb9564b
Summary: We can avoid this extra Reshape.
Reviewed By: jamesr66a
Differential Revision: D5032874
fbshipit-source-id: 92bd568bc6bec53d7f81a64cfa96d2c610823f8c
Summary:
In transfer learning, parameters initialized from a pretrained model might require
a different learning rate than parameters initialized from scratch. To this end, we
implement a Python solution where `base_learning_rate` is scaled by `scale`,
which is in turn set by `scale_learning_rate`. Alternatively, we could achieve the
same effect by rewriting the LearningRate operator in C++.
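As a hedged sketch of the intended effect (the function and parameter names here are illustrative, not the Dper API):

```python
# Illustrative: each parameter's effective learning rate is
# base_learning_rate scaled by its scale_learning_rate; pretrained
# parameters typically get a scale < 1 so they train more slowly.
def build_lr_schedule(base_learning_rate, param_scales):
    return {name: base_learning_rate * scale
            for name, scale in param_scales.items()}
```

For example, `build_lr_schedule(0.1, {"pretrained_w": 0.01, "new_w": 1.0})` gives the pretrained weight a 100x smaller rate than the freshly initialized one.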
Reviewed By: kennyhorror
Differential Revision: D4992827
fbshipit-source-id: 8d7e87a61c95b3eb8ef733ec436f4060e865c0ac
Summary:
Adds a parameter cost estimation step before the actual training starts. The costs are later used in order to better shard the parameters across instances of the parameter server.
Things I needed to modify:
- A few changes to make ModelLayerHelper picklable
- Add support for stopping a distributed job after a number of stats reporting steps.
- Refactored run_dist_job to support collocating the reader with the trainer even when PS are present.
- Option to disable dense updates (when num_dense_servers=0).
Currently there's a huge overhead posed by having to launch a child workflow. I'll try to address that in a subsequent diff.
This is WIP because the other workflows need to be migrated as well.
I can break this down into smaller diffs if reviewers would prefer it.
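How estimated costs might drive sharding can be sketched with a greedy least-loaded assignment (an illustrative sketch under assumed names, not the actual sharding algorithm):

```python
# Illustrative: assign each parameter to the currently least-loaded
# parameter-server shard, processing the most expensive parameters first.
def shard_by_cost(param_costs, num_shards):
    shards = [[] for _ in range(num_shards)]
    loads = [0.0] * num_shards
    for name, cost in sorted(param_costs.items(),
                             key=lambda kv: kv[1], reverse=True):
        i = loads.index(min(loads))  # least-loaded shard so far
        shards[i].append(name)
        loads[i] += cost
    return shards
```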
Reviewed By: kennyhorror
Differential Revision: D4974752
fbshipit-source-id: 04c336acb2945f8f11324a221ffc6967818c0672
Summary: For distributed jobs, we were relying on the order the PythonOps were registered, which was very fragile.
Reviewed By: dzhulgakov
Differential Revision: D5016847
fbshipit-source-id: f5601467c5b0569d5e8a0efdd76abad0d703c5f5
Summary:
cuDNN versions of dropout and LRN (for native fp16 support), port of Caffe's max pooling algo that uses an explicit mask to store locations (also supports fp16 storage)
Closes https://github.com/caffe2/caffe2/pull/396
Reviewed By: akyrola
Differential Revision: D4990880
Pulled By: asaadaldien
fbshipit-source-id: a716acffb656843e9b31e3e6808bd2d8aa959d03
Summary:
Incorporating a definition of the cell's output and illustrating its usage by adding dropout to all types of cells.
I think we should try to get rid of aliases in RecurrentNetwork, so the output of applied_over_sequence is also always (state_1_all, state_2_all, ...). This way we can merge get_output_from_single_step, get_output_from_sequence and get_outputs_with_grads into a single method.
Let me know what you think!
Reviewed By: jhcross
Differential Revision: D4992913
fbshipit-source-id: 737939be336ad145f84e8733cd255d4f7188ef70
Summary: decoder_hidden_encoder_outputs_sum_tmp is tiny after D5010109, no need to recompute it.
Reviewed By: akyrola
Differential Revision: D5014335
fbshipit-source-id: cc9e8f91372889d10bd99c79366018cb3943a435
Summary:
Segment-based ops require increasing segment ids without gaps. Lengths-based ops do not
have this requirement.
Other pooling methods, e.g., LogExpMean, do not have lengths-based ops available yet.
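The difference in invariants can be illustrated in pure Python (the checks below are a sketch, not the actual op validation code):

```python
# Segment-based ops: ids must start at 0, never decrease, and never
# skip a value (no gaps).
def valid_segment_ids(seg_ids):
    if not seg_ids:
        return True
    if seg_ids[0] != 0:
        return False
    for prev, cur in zip(seg_ids, seg_ids[1:]):
        if cur < prev or cur > prev + 1:
            return False
    return True

# Lengths-based ops carry no such requirement; a lengths vector always
# converts to a valid, gap-free segment-id vector.
def lengths_to_segment_ids(lengths):
    out = []
    for seg, n in enumerate(lengths):
        out.extend([seg] * n)
    return out
```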
Differential Revision: D5019165
fbshipit-source-id: ab01a220e10d4ed9fa2162939579d346607f905e
Summary:
Specialized implementation of ResizeNearest for width_scale=2 and height_scale=2. This implementation doesn't use divides or calls to std::min, and is unrolled 2x over the width dimension. Also add a correctness test.
About 6x faster.
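The specialization can be illustrated in pure Python; the real op works on NCHW tensors in C++, but the 2x copy pattern is the same:

```python
# Illustrative sketch of the 2x nearest-neighbor upsample: each source
# pixel is copied into a 2x2 output block. The inner loop is unrolled
# 2x over the width, so no divides or min() calls are needed.
def resize_nearest_2x(img):
    """img: a list of rows; returns the 2x-upsampled image."""
    out = []
    for row in img:
        wide = []
        for px in row:
            wide.append(px)
            wide.append(px)  # unrolled 2x over the width dimension
        out.append(wide)
        out.append(list(wide))  # duplicate the row for height_scale=2
    return out
```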
Reviewed By: ajtulloch
Differential Revision: D4928579
fbshipit-source-id: 5cc92a52bd688690fee907b4333d9c84b666f9c9
Summary: External inputs must be computed before updating the _ops_output structure, otherwise if the net to be appended outputs the external input, it is not added correctly
Differential Revision: D5013496
fbshipit-source-id: 6a83d0a6f1c63ef8ae7bec4d862c0ac2a690d47b
Summary: Adding a simple video data layer which allows reading video data from frames or videos and outputs a 5D tensor. It also supports multiple labels. The current implementation is based on ffmpeg.
Differential Revision: D4801798
fbshipit-source-id: 46448e9c65fb055c2d71855447383a33ade0e444
Summary:
This diff creates a generalized AttentionCell class, which will allow us to construct attention decoders out of arbitrary RNNCell components (with a particular view to using stacked, multi-layer RNNs).
In order to do this, we introduce a new optional input for RNNCell._apply which allows us to provide an additional input that is not processed by prepare_input(). Note that this is an argument only to _apply, not apply, since it is only meant to be used for additional recurrent connections to "embedded" cells, not for standalone RNNs.
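The apply/_apply distinction can be sketched as follows (class and method bodies are illustrative, not the actual rnn_cell code):

```python
# Illustrative: apply() always runs prepare_input() first, while _apply()
# can additionally accept an input that bypasses preprocessing, for extra
# recurrent connections to "embedded" cells.
class RNNCellSketch(object):
    def prepare_input(self, x):
        return ("prepared", x)

    def apply(self, x):
        # public entry point: no extra-input parameter here
        return self._apply(self.prepare_input(x))

    def _apply(self, x, extra_input=None):
        return {"input": x, "extra": extra_input}
```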
Reviewed By: urikz
Differential Revision: D4998465
fbshipit-source-id: 473009ea4917e86e365f9d23aa2f11a46a94fd65
Summary: It is good practice to provide __dir__ whenever __getattr__ is defined so that tooling will work intelligently. In particular, it is hard to explore the available methods in iPython without tab completion.
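A minimal illustration of the practice (Python 3; the class and attribute names are made up for the example):

```python
# Pairing __getattr__ with __dir__ so that dir() -- and therefore IPython
# tab completion -- can discover the dynamically resolved attributes.
class OpNamespace(object):
    _ops = ("Relu", "Softmax")

    def __getattr__(self, name):
        if name in self._ops:
            return "op:" + name
        raise AttributeError(name)

    def __dir__(self):
        # include the dynamic names alongside the regular attributes
        return sorted(set(object.__dir__(self)) | set(self._ops))
```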
Reviewed By: dzhulgakov
Differential Revision: D5006545
fbshipit-source-id: 1a150d91d54637d80b292764513943ff70d971b4
Summary:
Script caffe2/caffe2/python/examples/resnet50_trainer.py can be used to train a ResNet-50 model with Imagenet data (or similar).
However, currently the script does not actually save the model, so it is kind of useless.
Task 1: After each epoch, save the model in a file "<filename>_X.mdl" where X is the epoch number and <filename> is given as a command line parameter. By default, use "resnet50_model" as the filename.
Task 2: Add a functionality to restore the model from a previous file:
- add a command line parameter "load_model", which user can use to specify a filename.
- if this parameter is set, load the model parameters from the previous file
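The Task 1 naming convention could look like this (an illustrative helper, not the exact script code):

```python
# Illustrative: build the per-epoch checkpoint file name
# "<filename>_X.mdl" described in Task 1.
def checkpoint_path(epoch, filename="resnet50_model"):
    return "%s_%d.mdl" % (filename, epoch)
```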
Reviewed By: prigoyal
Differential Revision: D4984340
fbshipit-source-id: 333e92679ba52a7effe9917fdfc2d55d652b868f
Summary:
As part of the project to move all gradient-accumulation business into ops in RecurrentNetworkGradientOp, this diff makes the accumulateInputGradients ops.
Also added a way to mark operators private so they don't appear in docs.
Reviewed By: salexspb
Differential Revision: D5006698
fbshipit-source-id: 226d7afb473290c8d0f936d2cc87640be3e06615
Summary:
Added the possibility to provide 'tiles' and 'axis' as inputs,
as opposed to arguments, for the Tile operator. If provided, the input
values override the argument values. Now with proper CUDA code.
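The input-overrides-argument behavior can be sketched for the 1-D case (illustrative only, not the CUDA implementation):

```python
# Illustrative: 'tiles' comes from the operator argument unless an input
# blob supplies it, in which case the input value wins.
def tile_1d(data, tiles_arg, tiles_input=None):
    tiles = tiles_input if tiles_input is not None else tiles_arg
    return data * tiles
```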
Differential Revision: D4930347
fbshipit-source-id: b44b032b327c7d7bddfce63abf4e3289d7e74bfb
Summary: Layer for LastNWindowCollector op. We need this since it's an in-place operator.
Reviewed By: chocjy
Differential Revision: D4981772
fbshipit-source-id: ec85dbf247d0944db422ad396771fa9308650883
Summary:
Use the rnn_cell's multi-cell for the LSTM benchmark. While doing this, I had not changed the initial_states and got an inconsistent result from rnn_cell, so I added an assertion to check that the initial states length is 2 * num_layers.
+ fix division by zero error
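The added check amounts to the following (an illustrative sketch: a stacked LSTM needs one hidden and one cell state per layer):

```python
# Illustrative: each LSTM layer contributes a hidden state and a cell
# state, so a multi-layer LSTM needs exactly 2 * num_layers entries.
def check_initial_states(initial_states, num_layers):
    assert len(initial_states) == 2 * num_layers, (
        "expected %d initial states, got %d"
        % (2 * num_layers, len(initial_states)))
```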
Reviewed By: salexspb
Differential Revision: D5003177
fbshipit-source-id: a8250b825394c352428a0f067098dfcd7516ab2a
Summary: Use `CopyItems` so that it accepts any type of tensor. Also, move the cursor to input blob so that it's checkpoint friendly. Output is now also part of input so that inference can work correctly.
Reviewed By: xianjiec
Differential Revision: D4920987
fbshipit-source-id: da532736225ec27f409ff763ff69a0629235151c
Summary:
Add a parameter dont_rebatch to data_workers. This disables re-batching of the fetcher's input into equal-size batches, which is not desired with RNNs, where we might want smaller batches for longer sequence lengths.
For some reason the graceful-shutdown test interfered with other tests, so I removed it.
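The behavioral difference can be sketched as (illustrative, not the data_workers implementation):

```python
# Illustrative: by default, fetched chunks are flattened and re-cut into
# fixed-size batches; with dont_rebatch=True the fetcher's chunks pass
# through unchanged (useful for variable-length RNN inputs).
def rebatch(chunks, batch_size, dont_rebatch=False):
    if dont_rebatch:
        return list(chunks)
    flat = [item for chunk in chunks for item in chunk]
    return [flat[i:i + batch_size] for i in range(0, len(flat), batch_size)]
```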
Reviewed By: jay-mahadeokar
Differential Revision: D4988549
fbshipit-source-id: cbab46d77c948f2e293e79e6eb538dde17d800ee
Summary:
- Adding ScatterWeightedSumOp for CUDA.
- This version does not support input weight (weight0). In other words, the input weight has to be 1.0, otherwise the op exits.
- To check the value of weight0, we copy its value from device to host at: https://github.com/caffe2/caffe2/pull/443/files#diff-2a77f80797072e8443f4867cb709fb40R244
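The op's semantics, with the CUDA port's restriction, can be sketched in pure Python (illustrative, not the actual kernel):

```python
# Illustrative ScatterWeightedSum on a flat list: out[idx] is updated to
# out[idx] * weight0 + x1 * weight1 for each scattered index. The CUDA
# version described above requires weight0 == 1.0.
def scatter_weighted_sum(x0, weight0, indices, x1, weight1):
    assert weight0 == 1.0, "CUDA version only supports weight0 == 1.0"
    out = list(x0)
    for idx, v in zip(indices, x1):
        out[idx] = out[idx] * weight0 + v * weight1
    return out
```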
Closes https://github.com/caffe2/caffe2/pull/443
Reviewed By: akyrola
Differential Revision: D4971910
Pulled By: asaadaldien
fbshipit-source-id: 2282e968f95364f0b3b8126502b053fe7a32ba20
Summary: Add Python support for arbitrary (unidirectional) recurrent networks with MultiRNNCell abstraction. Since the combined step net for all layers is created at one time (in method _apply), this may be optimizable as-is. LSTM() function is extended to accept a list of numbers of units for the dim_out argument, producing a multi-layer LSTM in that case.
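The extended dim_out handling can be sketched as (an illustrative helper, not the actual rnn_cell code):

```python
# Illustrative: dim_out may be an int (single layer) or a list of ints
# (one layer per entry); each layer's input dim is the previous layer's
# output dim, chained from the initial dim_in.
def layer_dims(dim_in, dim_out):
    dims = [dim_out] if isinstance(dim_out, int) else list(dim_out)
    ins = [dim_in] + dims[:-1]
    return list(zip(ins, dims))
```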
Reviewed By: salexspb
Differential Revision: D4965001
fbshipit-source-id: 39c069468d5b40bf803503cf62046a479ca83cbb
Summary: The code snippet in the added unit test is invalid, but it may or may not cause an exception. Disable the syntax so people don't accidentally use it.
Reviewed By: dzhulgakov
Differential Revision: D4985030
fbshipit-source-id: ffa2b26f7b29128b196aba1b1001a97c87e381cf
Summary:
We need a warm-up stage because otherwise the first iteration
spends too much time doing all the allocations.
Reviewed By: akyrola
Differential Revision: D4986201
fbshipit-source-id: f60a75520988ff3f1540bb157cdc69634f307db4
Summary:
Layer to allow a model to follow different paths for each instantiation context and join later. Together with a tagging-system cleanup (a separate issue), this should reduce the need to write a layer just to differentiate between contexts.
Re: tagging system cleanup, we should make exclusion more explicit: EXCLUDE_FROM_<CONTEXT>. This would simplify the instantiation code. TRAIN_ONLY should become the set of all EXCLUDE_FROM_*, except EXCLUDE_FROM_TRAIN.
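The proposed tag scheme could be sketched as follows; the context names TRAIN/EVAL/PREDICTION are assumptions for illustration, not a definitive list:

```python
# Illustrative: derive one EXCLUDE_FROM_* tag per assumed context, and
# define TRAIN_ONLY as all exclusion tags except EXCLUDE_FROM_TRAIN.
CONTEXTS = ["TRAIN", "EVAL", "PREDICTION"]  # assumed context names
EXCLUDE_FROM = {c: "EXCLUDE_FROM_" + c for c in CONTEXTS}
TRAIN_ONLY = {tag for c, tag in EXCLUDE_FROM.items() if c != "TRAIN"}
```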
Reviewed By: kennyhorror
Differential Revision: D4964949
fbshipit-source-id: ba6453b0deb92d1989404efb9d86e1ed25297202
Summary: Make NCCL optional in data_parallel_model due to continuing reliability (deadlock) issues.
Reviewed By: pietern
Differential Revision: D4988950
fbshipit-source-id: 8a2192f01b5f3c0e847137cd37aefc69e553a56f
Summary:
RFC. This is a naive implementation of a Rebatching Queue for the MultiTask
effort. Full disclaimer: I'm very new to Caffe/machine learning and I'm doing
dodgy science here (under Dmytro's supervision), so please be extra tough on
this review so I can learn best practices :)
Differential Revision: D4871970
fbshipit-source-id: 924820ef0fce45b5e2bdabeec9885cbafa23a880
Summary: I ran into this earlier and the debug messages were not helpful enough.
Reviewed By: kennyhorror
Differential Revision: D4985754
fbshipit-source-id: b3d12b5e2cfa1b54fca9126768c84c902664ef28
Summary:
When appending net A to net B, an external input of net A should not be added as
an external input of net B if net B is outputting that blob.
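The rule can be sketched as (helper and argument names are illustrative, not the actual net-appending code):

```python
# Illustrative: when appending a net, a blob becomes an external input of
# the combined net only if the base net does not already output it.
def append_external_inputs(base_outputs, base_ext_inputs, appended_ext_inputs):
    combined = list(base_ext_inputs)
    for blob in appended_ext_inputs:
        if blob not in base_outputs and blob not in combined:
            combined.append(blob)
    return combined
```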
Reviewed By: dzhulgakov
Differential Revision: D4975921
fbshipit-source-id: a5c0ada7b96d851e57d345244d322dd93c7be8e4