Summary: Mutex is only supported on CPU, so we need to make sure the mutex and the following AtomicIter op are both placed on CPU. This is critical for GPU SparseNN training.
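For illustration, the intended device placement looks something like this (a sketch assuming `init_net` and `net` are existing core.Net objects; blob names are made up):

    from caffe2.python import core
    from caffe2.proto import caffe2_pb2

    # Pin both the mutex creation and the iteration update to CPU explicitly,
    # even when the rest of the net runs on GPU.
    with core.DeviceScope(core.DeviceOption(caffe2_pb2.CPU)):
        mutex = init_net.CreateMutex([], "iteration_mutex")
        net.AtomicIter([mutex, "iter"], ["iter"])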
Differential Revision: D5093184
fbshipit-source-id: 021e6ba699a3208449fa4761cad6b0ec4544957e
Summary:
deprecate CNNModelHelper in the python/operator_test dir
BTW, I found that there are two mkl_speed_test files. I am confused...
Reviewed By: salexspb
Differential Revision: D5094122
fbshipit-source-id: f6526f4de334f2245eb4c1f204a8ec9f23750d78
Summary: We will start our API migration process. Before that, I want to make sure people don't add new CNNModelHelper instances to our open-source code, so I am putting a deprecation warning here in advance.
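The warning amounts to something like the following (an illustrative sketch, not the exact code in the diff):

    import warnings

    warnings.warn(
        "CNNModelHelper is deprecated; use ModelHelper with brew instead.",
        DeprecationWarning,
    )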
Reviewed By: salexspb
Differential Revision: D5093556
fbshipit-source-id: 74bf4a7782c2d882f72f202d48c72255d152b68a
Summary:
Dilated convolution support was added in cuDNN v6, but cuDNN v6 does not support NHWC for dilated convolution.
Fix conv_test.py so that it does not test cuDNN for dilated convolution in NHWC format.
Closes https://github.com/caffe2/caffe2/pull/598
Reviewed By: akyrola
Differential Revision: D5084835
Pulled By: asaadaldien
fbshipit-source-id: 3c0c5ed02c5d9232fca567e387ab6260d71e5aaf
Summary: I noticed that Sigmoid was taking an inordinate amount of time in our NMT benchmark, so I looked at the implementation and it didn't seem optimal. I replaced the implementation with an Eigen version so that when the Eigen update goes through, we will get proper AVX(2) vectorization.
Differential Revision: D5082464
fbshipit-source-id: aa951f7d730fc05198f7dd04076ec58d471b74c8
Summary: Added L1Distance Operator for CUDA, as well as tests.
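Illustrative usage (blob names made up; with a CUDA device scope this now runs on GPU, assuming `model` is an existing ModelHelper):

    # Row-wise L1 distance between two equally-shaped inputs.
    dist = model.net.L1Distance(["X", "Y"], "l1_dist")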
Reviewed By: bwasti
Differential Revision: D5071966
fbshipit-source-id: 4c3d862605e9123d955bf091efa67d0731bd816a
Summary:
Incorporate arbitrary dropout for encoder and decoder layers in Caffe2 NMT models using the current configuration. This involves separate output processing (_prepare_output() and _prepare_output_sequence()) for the final layer in a MultiRNNCell.
Switching to the newly introduced forward_only switch for RNN cells revealed an unrelated bug in our NetGradientChecker test, which urikz is investigating.
Reviewed By: salexspb
Differential Revision: D5031964
fbshipit-source-id: 19b49607d551aa3e2140041ef4e585f128c8f178
Summary: Add a RandomFailureOp, and add handling of its status code to the elastic data parallel model.
Reviewed By: andrewwdye
Differential Revision: D5065936
fbshipit-source-id: 24224f9ea414ee535c9e90cc28add5189354b0ef
Summary:
Migrate the experiments folder to the fb/sparse folder. Keep FunHashOp and SparseFunHashOp because they are now assumed to be default ops in Dper. What I did:
# Migrated FunHashOp and SparseFunHashOp and their unit tests to core Caffe2, and made sure the tests pass.
# Migrated the other ops in the experiments folder to the fb/sparse folder and wrote new TARGETS files for them. Made sure the tests pass.
# Made sure all related tests pass.
# Also fixed the MKL definition along the way, making sure that FC_Sparse is not compiled when there is no MKL support.
Reviewed By: salexspb
Differential Revision: D4952993
fbshipit-source-id: 86c03676ab4e47f04d2d0dd438a4a1c849bbbff0
Summary:
Residual connections for the multilayer RNN encoder/decoder in the Caffe2 NMT model. Only supporting 'add' connections (the standard approach, which ves's TF experiments concluded was at least as good as other approaches), and only implementing them for residual_level >= 1 (which also fits our use case).
It is the responsibility of the config to ensure dimension compatibility: each level at and beyond residual_level (in both the encoder and decoder) should have the same number of units, with the exception that a bidirectional initial encoder layer should have half the number of units of the succeeding layer if that next layer is a residual layer.
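A hedged sketch of the 'add' connection between stacked layers (names like layer_input, layer_output, layer_index are illustrative, not the diff's exact code):

    # For layers at or beyond residual_level, the layer output becomes
    # input + output, which is why the dimensions above must match.
    if layer_index >= residual_level:
        layer_output = model.net.Sum(
            [layer_input, layer_output], "residual_sum_%d" % layer_index
        )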
Differential Revision: D5023160
fbshipit-source-id: f38c1b140638fee78cf3ef7d6b4602dd462484ee
Summary:
Update the rnn_cell.py and char_rnn.py examples with the new `brew` model.
- Deprecate CNNModelHelper
- Replace all helper functions with brew helper functions
- Use the `model.net.<SingleOp>` format to create bare-bones operators for better clarity.
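A hedged before/after sketch of the migration pattern (dimensions and blob names made up):

    from caffe2.python import brew, model_helper

    model = model_helper.ModelHelper(name="char_rnn")
    # Old: CNNModelHelper helper method; new: the brew helper with explicit dims.
    fc = brew.fc(model, "data", "fc1", dim_in=100, dim_out=100)
    # Bare-bones operator, created directly on the net for clarity.
    softmax = model.net.Softmax(fc, "softmax")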
Reviewed By: salexspb
Differential Revision: D5062963
fbshipit-source-id: 254f7b9059a29621027d2b09e932f3f81db2e0ce
Summary:
the FC ModelLayer needs an optimizer; also, it seems the catch-all
that sets a default for missing optimizers had a bug
Reviewed By: xianjiec
Differential Revision: D5048302
fbshipit-source-id: cbbf641fb9ee4f4f89c5dbb132f7837ecdbe37a5
Summary: new ResNet building with brew
Reviewed By: akyrola
Differential Revision: D4945418
fbshipit-source-id: d90463834cbba2c35d625053ba8812e192df0adf
Summary:
A single-machine, multi-GPU version of the BMUF algorithm. BMUF is a modification to
model averaging where the update to the global model is implemented as a filter:
param_t = param_(t-1) + delta_t
delta_t = \beta * delta_(t-1) + \alpha * (average(param_t) - param_(t-1))
where average(param_t) is the average of the per-GPU local models.
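In numpy terms the update amounts to something like this (a sketch; function and variable names are illustrative, not the diff's API):

    import numpy as np

    def bmuf_update(param_prev, delta_prev, replica_params, alpha, beta):
        # Block gradient: how far the averaged per-GPU replicas moved
        # away from the previous global model.
        avg = np.mean(replica_params, axis=0)
        delta = beta * delta_prev + alpha * (avg - param_prev)
        return param_prev + delta, delta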
Reviewed By: akyrola
Differential Revision: D4995057
fbshipit-source-id: 48176ba66d67eaf3fa4dee16d50d9589825ddba4
Summary: based on our discussion, we want an arg_map in ModelHelper, with brew creating the arg_scope for that model. This diff implements that.
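For example (a sketch of the intended usage; argument values are made up):

    from caffe2.python import brew, model_helper

    # arg_scope defaults are stored on the model and picked up by brew helpers.
    model = model_helper.ModelHelper(name="test", arg_scope={"order": "NCHW"})
    conv = brew.conv(model, "data", "conv1", dim_in=3, dim_out=16, kernel=5)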
Reviewed By: salexspb
Differential Revision: D5042983
fbshipit-source-id: ddd2c7e9bca1be2f08a32f7252b44d3b60a57996
Summary:
Generalize the SpatialBatchNorm CPU op to compute spatial batch normalization for
1D, 2D & 3D input tensors.
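Illustrative inference-mode usage, assuming "1D/2D/3D" refers to the spatial dims (so 3D/4D/5D tensors); blob names are made up:

    # With this change the same op handles NCW, NCHW and NCTHW inputs;
    # scale/bias/mean/var are all of size C.
    Y = model.net.SpatialBN(
        ["X", "scale", "bias", "mean", "var"], "Y",
        is_test=True, epsilon=1e-5,
    )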
Reviewed By: dutran
Differential Revision: D5043563
fbshipit-source-id: 7fcb933a628dd47f13aa622f63601a87382f09cd
Summary:
Added several features to the ImageInputOp:
- Bounding box (per image, as well as a default for the operator). The per-image
box only works in Caffe2 format and is passed as the third tensor in the form
(ymin, xmin, height, width). For the operator-wide default, pass bounding_xmin,
bounding_ymin, bounding_width and bounding_height as parameters.
- Per-channel mean/std. You can use the usual mean/std to pass a single
value to be used for all channels, or pass mean_per_channel and std_per_channel
to specify different values per channel. The order of channels is BGR.
- A minimum-size parameter, minsize, that can be specified instead of the scale
parameter. minsize will only scale the image if it is smaller than required,
whereas scale scales both up and down. You can only specify one of scale or minsize.
Added a test case exercising some of the features; an illustrative invocation is sketched below.
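A hedged sketch of an invocation using the new arguments (values and blob names are made up; other required args may apply depending on the data format):

    data, label = model.net.ImageInput(
        ["db_reader"], ["data", "label"],
        batch_size=32, color=3, crop=224,
        # minsize replaces scale: only scales up if the image is too small.
        minsize=256,
        # Per-channel statistics, BGR order.
        mean_per_channel=[104.0, 117.0, 123.0],
        std_per_channel=[57.1, 57.4, 58.4],
        # Operator-wide default bounding box.
        bounding_ymin=0, bounding_xmin=0,
        bounding_height=224, bounding_width=224,
    )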
Differential Revision: D4874988
fbshipit-source-id: 437191052a46e9916defe8b100d7cc7864373f61
Summary:
In the Dper utility, add a function `load_parameters_from_model_init_options` to
allow initializing parameters from pretrained models.
Reviewed By: xianjiec
Differential Revision: D4926075
fbshipit-source-id: 5ab563140b5b072c9ed076bbba1aca43e71c6ac5
Summary: Relax requirement on token uniqueness since a few use cases broke after the uniqueness requirement was added in a previous diff.
Reviewed By: kittipatv
Differential Revision: D5034132
fbshipit-source-id: 327eb065923e6ea152a360324316f81b7fb9564b
Summary: We can avoid this extra Reshape.
Reviewed By: jamesr66a
Differential Revision: D5032874
fbshipit-source-id: 92bd568bc6bec53d7f81a64cfa96d2c610823f8c
Summary:
In transfer learning, parameters initialized from a pretrained model might require
a different learning rate than parameters initialized otherwise. To this end, we
implement a Python solution where `base_learning_rate` is scaled by `scale`,
which is in turn set by `scale_learning_rate`. Alternatively, we could achieve the
same effect by rewriting the LearningRate operator in C++.
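The intended semantics, as a tiny sketch (the function and its names are illustrative):

    def effective_lr(base_learning_rate, scale_learning_rate, from_pretrained):
        # Parameters loaded from a pretrained model step with a scaled rate;
        # everything else keeps the base rate.
        scale = scale_learning_rate if from_pretrained else 1.0
        return base_learning_rate * scale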
Reviewed By: kennyhorror
Differential Revision: D4992827
fbshipit-source-id: 8d7e87a61c95b3eb8ef733ec436f4060e865c0ac
Summary:
Adds a parameter cost estimation step before the actual training starts. The costs are later used to better shard the parameters across instances of the parameter server.
Things I needed to modify:
- A few changes to make ModelLayerHelper picklable.
- Added support for stopping a distributed job after a number of stats-reporting steps.
- Refactored run_dist_job to support collocating the reader with the trainer even when parameter servers are present.
- Added an option to disable dense updates (when num_dense_servers=0).
Currently there is a huge overhead from having to launch a child workflow; I'll address that in a subsequent diff.
This is WIP because the other workflows need to be migrated as well.
I can break this down into smaller diffs if reviewers would prefer it.
Reviewed By: kennyhorror
Differential Revision: D4974752
fbshipit-source-id: 04c336acb2945f8f11324a221ffc6967818c0672
Summary: For distributed jobs, we were relying on the order in which the PythonOps were registered, which was very fragile.
Reviewed By: dzhulgakov
Differential Revision: D5016847
fbshipit-source-id: f5601467c5b0569d5e8a0efdd76abad0d703c5f5
Summary:
cuDNN versions of dropout and LRN (for native fp16 support), plus a port of Caffe's max-pooling algorithm that uses an explicit mask to store locations (and also supports fp16 storage).
Closes https://github.com/caffe2/caffe2/pull/396
Reviewed By: akyrola
Differential Revision: D4990880
Pulled By: asaadaldien
fbshipit-source-id: a716acffb656843e9b31e3e6808bd2d8aa959d03
Summary:
Incorporating a definition of a cell's output and illustrating its usage by adding dropout to all types of cells.
I think that we should try to get rid of aliases in RecurrentNetwork, so that the output of applied_over_sequence is also always (state_1_all, state_2_all, ...). This way we can merge get_output_from_single_step, get_output_from_sequence and get_outputs_with_grads into a single method.
Let me know what you think!
Reviewed By: jhcross
Differential Revision: D4992913
fbshipit-source-id: 737939be336ad145f84e8733cd255d4f7188ef70
Summary: decoder_hidden_encoder_outputs_sum_tmp is tiny after D5010109, no need to recompute it.
Reviewed By: akyrola
Differential Revision: D5014335
fbshipit-source-id: cc9e8f91372889d10bd99c79366018cb3943a435
Summary:
Segment-based ops require increasing segment ids without gaps; lengths-based ops do not
have this requirement.
Other pooling methods, e.g. LogExpMean, do not have lengths-based ops available yet.
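Concretely, the same grouping of five rows under the two encodings (an illustration):

    # SortedSegmentSum takes segment ids, which must be non-decreasing and gap-free:
    segment_ids = [0, 0, 1, 1, 2]
    # LengthsSum takes per-group counts instead, so no id constraints arise:
    lengths = [2, 2, 1]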
Differential Revision: D5019165
fbshipit-source-id: ab01a220e10d4ed9fa2162939579d346607f905e
Summary:
Specialized implementation of ResizeNearest for width_scale=2 and height_scale=2. This implementation doesn't use divides or calls to std::min, and is unrolled 2x over the width dimension. Also add a correctness test.
About 6x faster.
Reviewed By: ajtulloch
Differential Revision: D4928579
fbshipit-source-id: 5cc92a52bd688690fee907b4333d9c84b666f9c9
Summary: External inputs must be computed before updating the _ops_output structure; otherwise, if the net being appended outputs the external input, it is not added correctly.
Differential Revision: D5013496
fbshipit-source-id: 6a83d0a6f1c63ef8ae7bec4d862c0ac2a690d47b
Summary: Adding a simple video data layer that reads video data from frames or video files and outputs a 5D tensor. It also allows multiple labels. The current implementation is based on ffmpeg.
Differential Revision: D4801798
fbshipit-source-id: 46448e9c65fb055c2d71855447383a33ade0e444
Summary:
This diff creates a generalized AttentionCell class, which will allow us to construct attention decoders out of arbitrary RNNCell components (with a particular view to using stacked, multi-layer RNNs).
In order to do this, we introduce a new optional input for RNNCell._apply which allows us to provide an additional input that is not processed by prepare_input(). Note that this is an argument only to _apply, not apply, since it is only meant to be used for additional recurrent connections to "embedded" cells, not for standalone RNNs.
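A hedged sketch of the shape of the change (the signature details here are illustrative, not the diff's exact API):

    from caffe2.python.rnn_cell import RNNCell

    class AttentionCell(RNNCell):
        def _apply(self, model, input_t, seq_lengths, states, timestep,
                   extra_inputs=None):
            # extra_inputs bypasses prepare_input(), letting an enclosing cell
            # feed additional recurrent connections (e.g. attention context)
            # straight into the embedded decoder cell.
            ...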
Reviewed By: urikz
Differential Revision: D4998465
fbshipit-source-id: 473009ea4917e86e365f9d23aa2f11a46a94fd65
Summary: It is good practice to provide __dir__ whenever __getattr__ is defined so that tooling will work intelligently. In particular, it is hard to explore the available methods in IPython without tab completion.
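The pattern, as a generic hedged sketch (class and registry names are made up, not the diff's code):

    class OpDispatcher(object):
        """Creates operators on the fly via attribute access."""

        _known_op_types = ["FC", "Relu", "Softmax"]  # hypothetical registry

        def __getattr__(self, op_type):
            # Called only when normal attribute lookup fails.
            if op_type not in self._known_op_types:
                raise AttributeError(op_type)
            return lambda *args, **kwargs: ("op", op_type, args, kwargs)

        def __dir__(self):
            # Mirror __getattr__ so IPython tab completion can discover ops.
            return sorted(set(dir(type(self)) + self._known_op_types))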
Reviewed By: dzhulgakov
Differential Revision: D5006545
fbshipit-source-id: 1a150d91d54637d80b292764513943ff70d971b4
Summary:
The script caffe2/caffe2/python/examples/resnet50_trainer.py can be used to train a ResNet-50 model with ImageNet data (or similar).
However, the script currently does not actually save the model, so it is kind of useless.
Task 1: After each epoch, save the model in a file "<filename>_X.mdl", where X is the epoch number and <filename> is given as a command-line parameter. By default, use "resnet50_model" as the filename.
Task 2: Add functionality to restore the model from a previous file:
- add a command-line parameter "load_model", which the user can use to specify a filename.
- if this parameter is set, load the model parameters from that file.
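A hedged sketch of the per-epoch save/restore step (helper names and the pickle format are illustrative, not the script's exact code):

    import pickle
    from caffe2.python import workspace

    def save_model(model, epoch, file_name="resnet50_model"):
        # Snapshot every parameter blob from the workspace.
        params = {str(p): workspace.FetchBlob(p) for p in model.GetParams()}
        with open("%s_%d.mdl" % (file_name, epoch), "wb") as f:
            pickle.dump(params, f)

    def load_model(load_path):
        # Restore parameters saved by save_model.
        with open(load_path, "rb") as f:
            for name, value in pickle.load(f).items():
                workspace.FeedBlob(name, value)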
Reviewed By: prigoyal
Differential Revision: D4984340
fbshipit-source-id: 333e92679ba52a7effe9917fdfc2d55d652b868f
Summary:
As part of the project to move all gradient accumulation work into RecurrentNetworkGradientOp, this diff creates the accumulateInputGradients ops.
Also added a way to mark operators private so they don't appear in the docs.
Reviewed By: salexspb
Differential Revision: D5006698
fbshipit-source-id: 226d7afb473290c8d0f936d2cc87640be3e06615