Summary: Super rough implementation of recurrent attention. Planning to factor out the common code between the two functions, as well as train and eval. I want to get this out and get eyes on it sooner rather than later.
Differential Revision: D4647837
fbshipit-source-id: 54bc4e8ed0df6f04c86c425926decbe89f73b068
Summary: In the case of a distributed task, load_from_db() loads into the wrong workspace (when used inside a Python op). Pass the workspace to use explicitly, so that the data loads into the workspace the Python op is being run in.
Reviewed By: kennyhorror
Differential Revision: D4653692
fbshipit-source-id: 94585c012b05ee38b9ce5e8ef0efdd50aa41dd2b
Summary: The evaluation part of the two-tower workflow is missing. This diff completes it. Some of the newly added functions can be reused in other workflows, e.g., feed. Since the eval parts of different workflows overlap, a generic eval workflow will be added in a separate diff.
Reviewed By: kennyhorror
Differential Revision: D4646880
fbshipit-source-id: 4d6eb35df10f6f613533d442f2a04dc0332386f8
Summary: Add gradient support for the Caffe2 operator SumElements (for use in the Translation RNN training pipeline).
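A hedged sketch of how the new gradient might be verified numerically; the stepsize/threshold values are illustrative:

    from caffe2.python import core, gradient_checker
    import numpy as np

    # Compare the analytic SumElements gradient against a numeric estimate.
    op = core.CreateOperator("SumElements", ["X"], ["y"])
    X = np.random.rand(4, 3).astype(np.float32)
    checker = gradient_checker.GradientChecker(stepsize=0.05, threshold=0.05)
    ok, grad, grad_estimate = checker.CheckSimple(op, [X], 0, [0])
    assert ok, "analytic gradient does not match the numeric estimate"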
Differential Revision: D4669036
fbshipit-source-id: 502760a2a624b20b3241e83a2f208f450b6ff36f
Summary:
The current optimizer code in c2/python has the following issues:
(1) the optimizers in sgd.py cannot be configured per param blob;
(2) sgd.py is a bad file name; optimizer.py is a better one;
(3) layer_model_helper.py has another set of optimizer code (which does support per-param-blob optimizers).
This diff does the following (see the usage sketch below):
(1) creates optimizer objects, so that we can configure an optimizer per param blob, while staying compatible with the existing optimizer code;
(2) makes the new optimizer code much more modular;
(3) moves the optimizer code to a file with a better name (optimizer.py);
(4) replaces the optimizer imports in the existing code.
To do in next diffs:
(1) optimizers with structured parameters for dper2;
(2) get rid of the optimizer code in layer_model_helper.py.
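A minimal usage sketch; build_sgd is the kind of builder function the new optimizer.py exposes, and the model setup around it is hypothetical:

    from caffe2.python import model_helper
    from caffe2.python.optimizer import build_sgd

    # Hypothetical model; in practice it would have layers and parameters.
    model = model_helper.ModelHelper(name="example")
    # One call wires SGD into the training net for every parameter; the
    # optimizer objects themselves are what enable per-param-blob tuning.
    build_sgd(model, base_learning_rate=0.01)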
Reviewed By: salexspb
Differential Revision: D4609013
fbshipit-source-id: 2e2d6dfa8685d10498f89069157453d9feca3f27
Summary:
1. Allow the EnsureDense op to either pass through in place or make a copy (see the sketch below).
2. In MTML, add an EnsureDense op before gather.
3. Change the unittest values (adding another operator changes the random seed, which changes the model initialization as well).
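A hedged sketch of the two modes; the blob names are made up:

    from caffe2.python import core

    # In-place: output reuses the input blob, so dense inputs pass straight through.
    op_inplace = core.CreateOperator("EnsureDense", ["grad"], ["grad"])
    # Copy: output goes to a separate blob.
    op_copy = core.CreateOperator("EnsureDense", ["grad"], ["grad_dense"])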
Reviewed By: xianjiec
Differential Revision: D4625219
fbshipit-source-id: b3c748c3651d1dedd75420912a9698b7e46187c5
Summary: This diff is migrating existing DPER workflows to use new metric abstractions in training.
Reviewed By: xianjiec
Differential Revision: D4656576
fbshipit-source-id: 1b3b16b390fc0757480e41df1c4214c11cd76e8a
Summary: Renamed ElementwisePower to Pow for better discoverability. Added a CUDA version and a gradient, plus tests.
Reviewed By: kennyhorror
Differential Revision: D4665550
fbshipit-source-id: dd33d8ad3917d71504e363ab397af50d38a63b1f
Summary: Add a simple op to sum the elements of a tensor, with optional averaging. It is basically a copy of AverageLossOp, which we should alias to this one, and it could be developed into a generic norm op later.
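A hedged usage sketch, assuming the op is registered as SumElements with an average argument:

    from caffe2.python import core, workspace
    import numpy as np

    workspace.FeedBlob("X", np.array([[1., 2.], [3., 4.]], dtype=np.float32))
    net = core.Net("sum_elements_example")
    net.SumElements(["X"], ["total"])               # sum of all elements -> 10.0
    net.SumElements(["X"], ["mean"], average=True)  # optional averaging -> 2.5
    workspace.RunNetOnce(net)
    print(workspace.FetchBlob("total"), workspace.FetchBlob("mean"))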
Reviewed By: jhcross
Differential Revision: D4664591
fbshipit-source-id: 0e0c0efe9e415e2ad2feecfa42b03db2c83bee70
Summary: Due to popular demand, added an op to compute the element-wise square, plus its gradient (just for the fun of it).
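A minimal sketch, assuming the op is registered under the name Sqr:

    from caffe2.python import core, workspace
    import numpy as np

    workspace.FeedBlob("X", np.array([1., -2., 3.], dtype=np.float32))
    workspace.RunOperatorOnce(core.CreateOperator("Sqr", ["X"], ["Y"]))
    print(workspace.FetchBlob("Y"))  # [1. 4. 9.]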
Reviewed By: Yangqing
Differential Revision: D4664797
fbshipit-source-id: 0a29c7c249fdc72f51412bebd6ae352a7801cf05
Summary: Simple element-wise Max implementation for CUDA. Given N inputs, it does N-1 pairwise maxes. I am not sure whether it would be much better to iterate through all the inputs in the kernel, since this approach has better locality. We can also optimize later.
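A hedged usage sketch on GPU; the blob names and device setup are assumed:

    from caffe2.python import core
    from caffe2.proto import caffe2_pb2

    # Element-wise max over three same-shaped inputs; on CUDA this runs
    # as two pairwise maxes.
    do = caffe2_pb2.DeviceOption(device_type=caffe2_pb2.CUDA)
    op = core.CreateOperator("Max", ["A", "B", "C"], ["M"], device_option=do)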
Reviewed By: Yangqing
Differential Revision: D4659953
fbshipit-source-id: 3a23b7fb3dbdf1d43bf3134ece03af4a791844dd
Summary:
This diff changes the way we specify metrics: instead of a reporter that must know in advance all the blobs it should access, the reporter is now connected through schema.
It also reports an arbitrary number of learning curves to Flow and provides a really flexible way to specify all the metrics we care about.
TODO: Modify the model helper to allow providing intermediate results for reporting.
TODO: Add an evaluation net (instead of a prediction net).
TODO: Move all other places in DPER 2.0 to use these abstractions instead.
TODO: Get rid of LogScoreEstimator in favor of a metric that really suits our needs.
Reviewed By: azzolini, dzhulgakov, kittipatv
Differential Revision: D4577548
fbshipit-source-id: 3515bd41e0f92263ff90ce2f7207abf65d01b1f7
Summary: so that the utils can be used by a wider audience.
Reviewed By: xianjiec
Differential Revision: D4637462
fbshipit-source-id: f0695f430902aef26360efa511069b3755eaf52a
Summary:
To avoid the NumPy warning: "using a non-integer number instead of an integer will result in an error in the future".
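The class of fix, illustratively (not the exact code from this PR): make sure sizes and indices passed to NumPy are integers.

    import numpy as np

    scale = 1.5
    width = 20
    # Deprecated: a float size triggers "using a non-integer number
    # instead of an integer will result in an error in the future"
    # np.zeros(width * scale)
    np.zeros(int(width * scale))  # the cast keeps sizes/indices integral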
Closes https://github.com/caffe2/caffe2/pull/64
Differential Revision: D4658348
Pulled By: Yangqing
fbshipit-source-id: 3a1b33cbb27849bc167b08147d078e8d487567f4
Summary: Added validation to the load op when doing load_all, by refactoring the validation logic used for loading specific blobs.
Reviewed By: kennyhorror
Differential Revision: D4641986
fbshipit-source-id: e0075a12188ca09d7628add72c143b40d5d9f382
Summary:
- Replaces the strip_regex implementation in SaveOp with strip_prefix, which deletes the prefix of blob names up to a given substring.
- Adds the same functionality to LoadOp; needed for loading checkpoints that were stored using the strip_prefix feature.
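A hedged sketch of the argument; the db path and blob names are made up:

    from caffe2.python import core

    # Strip everything up to (and including) "gpu_0/" from the stored blob names.
    save_op = core.CreateOperator(
        "Save", ["gpu_0/fc_w", "gpu_0/fc_b"], [],
        db="/tmp/ckpt", db_type="minidb", strip_prefix="gpu_0/")
    # LoadOp now accepts the same strip_prefix argument for reading back
    # checkpoints written this way.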
Closes https://github.com/caffe2/caffe2/pull/129
Differential Revision: D4512234
Pulled By: Yangqing
fbshipit-source-id: d926c1c5adcc7a711365cede11f21421bb7d4138
Summary: Fix a check for whether the net is a net_dict.
Reviewed By: kennyhorror
Differential Revision: D4647493
fbshipit-source-id: e0a62fc5847c99c85857c5635b4e39d59c66d5ce
Summary:
The existing code uses vector<T> to store the given tensor and then copies it to the output.
If T=bool, vector<bool> stores the data as bits, so the copy does not work.
We use a TensorCPU to store it instead.
Also adds a unittest.
Reviewed By: kennyhorror
Differential Revision: D4622325
fbshipit-source-id: 95c27b5d1cfbc836d2419d01cacde5a3172f4d7e
Summary:
Verify shape and type inference in op unittests via assertReferenceChecks(). For now, catch exceptions from InferShapeAndTypes() and log a warning.
TBD: Determine whether there are existing inference/output mismatches, and if so, change the test asserts to warnings until they are resolved.
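Roughly what a test exercising this looks like; the operator and reference function are illustrative:

    from caffe2.python import core
    import caffe2.python.hypothesis_test_util as hu
    import numpy as np

    class TestRelu(hu.HypothesisTestCase):
        def test_relu(self):
            op = core.CreateOperator("Relu", ["X"], ["Y"])
            X = np.random.randn(3, 4).astype(np.float32)
            # Besides checking outputs against the reference, this now also
            # runs shape/type inference and warns on mismatch.
            self.assertReferenceChecks(
                device_option=hu.cpu_do,
                op=op,
                inputs=[X],
                reference=lambda X: (np.maximum(X, 0),))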
Differential Revision: D4639343
fbshipit-source-id: 605e72f53198e1a100fe7ba18b72c34c9ddbb727
Summary:
- Do not set a default for cudnn_ws; use the default set by the cuDNN ops.
- Do not use cudnn_ws for the MLP.
- Do not run the benchmark if the required args are not set. Previously it tried to run and errored out.
Closes https://github.com/caffe2/caffe2/pull/177
Differential Revision: D4633143
Pulled By: Yangqing
fbshipit-source-id: e89a7d01984e599d92a330d0ee4ba106feba65b8
Summary:
Update the cuDNN RNN interface (mostly fixing the ordering of arguments). Set a seed so that the test passes consistently.
Closes https://github.com/caffe2/caffe2/pull/62
Reviewed By: Yangqing
Differential Revision: D4348966
fbshipit-source-id: f9b56be37739e5bffabec130e3407492b2aef656
Summary: The shape inference did not check for spatial mode.
Reviewed By: andrewwdye
Differential Revision: D4638218
fbshipit-source-id: f15419738587013dea39e04a3da086890938c4e2
Summary:
At the moment LocalSession creates a new workspace if none is provided. As a
result, anything that has been executed in a local session is not available to
the external caller, i.e. everything that is using SingleRunner can only
observe side effects and not actually access intermediate blobs.
This diff modifies LocalSession to run in the current workspace instead
(unless this has some really weird effects; since we rely on the privateness
of the workspace, it should work).
Differential Revision: D4634743
fbshipit-source-id: 975bed154c7ca215dc3fc0d60f05a7c092711482
Summary: vigneshr has been randomly experiencing that the process does not exit in the end. We don't know what causes this, so this will help in two ways: (1) putting timeout_guard.EuthanizeIfNecessary(600) at the end of the operator ensures that the process is killed within 10 minutes, allowing for a retry; (2) the killing causes Python stack traces to be dumped, helping debug the real issue.
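A hedged sketch of the pattern; the surrounding operator body is made up:

    from caffe2.python import timeout_guard

    def flow_operator_main():
        run_training()  # hypothetical workload
        # If the process is still alive 600 s after this call, it is killed,
        # dumping Python stack traces so the hang can be debugged and the
        # scheduler can retry.
        timeout_guard.EuthanizeIfNecessary(600)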
Differential Revision: D4635781
fbshipit-source-id: b558418c80671c00effdd514e4ddc01e935c95df
Summary: Add a SparseNN workflow for feed. I haven't fully thought through the changes needed for ads, as I added a property called 'preproc_output_schema' to LayerModelHelper.
Reviewed By: xianjiec
Differential Revision: D4585796
fbshipit-source-id: 060d08f4beb928e7e7863f2e563f612c358951fb
Summary: See http://bugs.python.org/issue6721. Since everstore loaders use ProcessPoolExecutor, which is based on forks, and there was perhaps an update of the numpy library or some unrelated library, we started getting subprocesses stuck in np.random.randint(). Also changed logging to prints, since logging is known to have issues with multiprocessing. See https://www.prod.facebook.com/groups/fbpython/permalink/1438647216176641/
Differential Revision: D4633725
fbshipit-source-id: ae948a1827c71a3a2119d6a3248706728984df31
Summary:
A bit too much stuff in one diff, sorry:
1. Add inference for gradient types by using the fact that x_grad is the gradient of x and must have the same shape. It's kind of awkward to use string matching, but in addition I rely on the operator actually being a gradient op.
2. dzhulgakov was right, a scalar's shape is () and not (1). Sorry, my earlier claim was #fakenews.
3. Added inference functions for MakeTwoClass, MomentumSGDUpdate and the cross-entropy ops.
Reviewed By: dzhulgakov
Differential Revision: D4569758
fbshipit-source-id: 0db13f33819777fdddefe21d4b1ebf906fcaf98c
Summary: Just generate some random data and put it through an LSTM (Caffe2 RNN based), using its own output as the gradient value, for benchmark purposes. With the default parameters it fits my dev GPU's memory; with the defaults provided in this diff I got 300k entries per second. The entries are split into blocks of seq_length * block_size; each entry is of size hidden_dim, and the LSTM takes a hidden_dim-sized input and produces output of the same size.
Reviewed By: salexspb
Differential Revision: D4605815
fbshipit-source-id: dd529302a0a93e8711784c67e4c777c8d6a8cdf4
Summary:
Add cuDNN v6 support, including testing support for dilated convolution.
Add a check to ensure that the versions of cuDNN used to compile Caffe2 and to run it are compatible.
Closes https://github.com/caffe2/caffe2/pull/85
Reviewed By: bwasti
Differential Revision: D4387690
Pulled By: Yangqing
fbshipit-source-id: 312960134398dd4afe6ee0c01cdc160046c904e8
Summary:
Previously the fp16 type was supported only in the SparseLengthsSum operator; now it works in all the other segment operators as well.
Reviewed By: dzhulgakov
Differential Revision: D4624312
fbshipit-source-id: c9d72110e3762167270bb088405eaf9c56e88493
Summary:
This diff tries to address one of the concerns Xianjie has had: the requirement to create a layer for every operator and to pass shapes and other info around.
The basic idea of the diff:
1. Try to create a layer with the given name, but if it's not available, fall back to an operator with that name (which is expected to have no parameters).
2. For all operators added through this functional style of creation, try to use the C2 shape/type inference logic to get the output type. If that fails, just return an untyped record and expect the user to annotate it when it's really needed.
Reviewed By: xianjiec
Differential Revision: D4408771
fbshipit-source-id: aced7487571940d726424269970df0eb62670c39
Summary:
If init_params is False, the parameters should not be initialized.
This is particularly important when testing a model that provides values for these BN parameters.
Closes https://github.com/caffe2/caffe2/pull/174
Differential Revision: D4621791
Pulled By: Yangqing
fbshipit-source-id: 518443925990a12c1d5729b0971ebe19ba5d8998
Summary: It is better for the workers to share the Python-side queue, since I saw a case where the workers assigned to one GPU were lagging behind the others. Also reduced logging, as requested by rpenggithub.
Differential Revision: D4620487
fbshipit-source-id: 73353f9570b07788c8cd71c9fec9308cd93a44dd
Summary: Inference function for the Im2ColOp: caffe2/caffe2/operators/im2col_op.cc.
Differential Revision: D4608663
fbshipit-source-id: d26ffb403c2acb7a5ead5f58f044ee3340c8311a
Summary:
Mysterious deadlocks after an epoch has finished have occurred randomly but quite frequently recently for myself, vigneshr and others. Looking at a stack trace of vigneshr's job (P57129798), I noticed a couple of threads were calling BlobsQueue.blockingWrite (or something like that). That call blocks when the caffe2/C++ side queue is at capacity (we use a capacity of 4 with data workers). So when that call was made just as the script was about to terminate, the thread did not close and the whole process did not exit either (I'm not completely sure why, since the thread is a daemon thread; this might be a flow-related issue, since we run inside a flow container).
This is quite easy to fix: just call CloseBlobsQueue() when terminating the process. I modified coordinator.stop() and wait_for_finish() to return a status code based on whether the joined threads actually closed within the 1.0 s timeout. This allowed creating a unit test for this issue; before my change, the unit test failed.
Reviewed By: pietern
Differential Revision: D4619638
fbshipit-source-id: d96314ca783977517274fc7aadf8db4ee5636bdf
Summary:
Reduce the test input size for the instance norm gradient check. The larger size is currently timing out on stress tests,
e.g.: failed: Timeout: Ran out of time before finding a satisfying example for test_instance_norm_gradients. Only found 2 examples in 125.39s.
Reviewed By: Yangqing
Differential Revision: D4608828
fbshipit-source-id: ce17a3ad28752d808efcbf79f1ea4238e63fb005
Summary:
For the code in the layer model helper and in layers, it is intentional not to have a NameScope by default.
This looks like another place that may need a default NameScope:
https://fburl.com/wdwtxp0m
Reviewed By: kennyhorror
Differential Revision: D4606971
fbshipit-source-id: b560bf59d3242e3f9443cd5aeda5c7e2e4e89079
Summary: D4348953 added support for accuracy with top_k > 1, which is only supported on CPU, requiring data to be copied from CUDA. But that diff did not take into account that we have a top_k=1 version of AccuracyOp for CUDA. This diff ensures we use the CUDA version when top_k=1.
Differential Revision: D4607767
fbshipit-source-id: 8becda23890343043eb79ad04e4c6196e9010f0c
Summary: As title: add a limit on the number of examples for group collect, and add an option to enable sum loss in BatchLRLoss.
Reviewed By: xianjiec
Differential Revision: D4602311
fbshipit-source-id: 5b2a244f1f0e9f1ab0f4590e94828fd18d018d8d
Summary: curandGenerateNormal can only generate arrays whose lengths are multiples of 2. The MSRAFill and GaussianFill operators use the RandGaussian utility method, which in turn uses curandGenerateNormal. This is a test that runs the operators on both devices to generate odd-sized random arrays.
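A hedged sketch of what the test exercises; the exact test structure differs:

    from caffe2.python import core, workspace
    from caffe2.proto import caffe2_pb2

    # Odd-length (7) Gaussian fill on both devices; the second pass
    # assumes a CUDA build.
    for dev_type in (caffe2_pb2.CPU, caffe2_pb2.CUDA):
        do = caffe2_pb2.DeviceOption(device_type=dev_type)
        op = core.CreateOperator(
            "GaussianFill", [], ["out"],
            shape=[7], mean=0.0, std=1.0, device_option=do)
        workspace.RunOperatorOnce(op)
        assert workspace.FetchBlob("out").shape == (7,)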
Differential Revision: D4602819
fbshipit-source-id: e65f5c731e925886cfa14afff482f7053bd020a0