Summary:
Previously the fp16 type was supported only in the SparseLengthsSum operator; now it works in all the other segment operators as well.
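A minimal sketch (not from this diff) of exercising one of the segment operators with fp16 inputs through the Python operator interface; blob names and sizes are arbitrary:
```
import numpy as np
from caffe2.python import core, workspace

# fp16 embedding table, plus indices/lengths describing two segments
workspace.FeedBlob("data", np.random.randn(10, 4).astype(np.float16))
workspace.FeedBlob("indices", np.array([0, 2, 3, 5], dtype=np.int64))
workspace.FeedBlob("lengths", np.array([2, 2], dtype=np.int32))

op = core.CreateOperator(
    "SparseLengthsSum", ["data", "indices", "lengths"], ["out"])
workspace.RunOperatorOnce(op)
print(workspace.FetchBlob("out").dtype)  # expected to stay float16
```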
Reviewed By: dzhulgakov
Differential Revision: D4624312
fbshipit-source-id: c9d72110e3762167270bb088405eaf9c56e88493
Summary: Inference function for the Im2ColOp: caffe2/caffe2/operators/im2col_op.cc.
Differential Revision: D4608663
fbshipit-source-id: d26ffb403c2acb7a5ead5f58f044ee3340c8311a
Summary:
Reduce the test input size for the instance norm gradient check. The larger size is currently timing out on stress tests.
e.g. failure: Timeout: Ran out of time before finding a satisfying example for test_instance_norm_gradients. Only found 2 examples in 125.39s.
Reviewed By: Yangqing
Differential Revision: D4608828
fbshipit-source-id: ce17a3ad28752d808efcbf79f1ea4238e63fb005
Summary: curandGenerateNormal can only generate arrays whose lengths are multiples of 2. The MSRAFill and GaussianFill operators use the RandGaussian utility method, which in turn uses curandGenerateNormal. This adds a test which runs the operators on both devices to generate odd-sized random arrays.
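A hedged sketch of the kind of check described, assuming a CUDA device is available; the shape is chosen only so the element count is odd:
```
import numpy as np
from caffe2.python import core, workspace
from caffe2.proto import caffe2_pb2

for device_type in (caffe2_pb2.CPU, caffe2_pb2.CUDA):
    op = core.CreateOperator(
        "GaussianFill", [], ["out"],
        shape=[7, 3],  # 21 elements, deliberately odd
        mean=0.0, std=1.0,
        device_option=caffe2_pb2.DeviceOption(device_type=device_type))
    workspace.RunOperatorOnce(op)
    assert workspace.FetchBlob("out").shape == (7, 3)
```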
Differential Revision: D4602819
fbshipit-source-id: e65f5c731e925886cfa14afff482f7053bd020a0
Summary:
Implementation of ##LSTMWithAttention##.
Still TBD:
1. There are problems with backpropagation, because the gradient is not implemented for ops with broadcasting.
2. initial_recurrent_state needs to be of shape [dim] rather than [1, batch_size, dim], so that one doesn't need to provide batch_size to LSTMWithAttention.
Differential Revision: D4298735
fbshipit-source-id: 8903fcff4d6a66647ee6d45a6ef28803fc3091e5
Summary:
Running in-place is a ~30% speedup, but it needs a change to torch2caffe
or a graph rewrite on the client.
Differential Revision: D4577582
fbshipit-source-id: c31bf8ba97f4fa4cedf355cf2475eb7bab48b304
Summary: Another part of making DPER compatible with half-floats. This diff adds support for fp16 to the segment reduction operators used in DPER.
Reviewed By: dzhulgakov
Differential Revision: D4587560
fbshipit-source-id: 0ae10648a7286a820bffaee802464dd9464584bc
Summary: This fixes a bug in the Eigen implementation that calculates cross-entropy.
Reviewed By: salexspb
Differential Revision: D4582078
fbshipit-source-id: 4c92047e9dbbe219fcbef618a45c584c2fbfaad5
Summary:
- Key-value store for counters.
- Counters are updated via macros that also export USDT probes.
- Counter values can be exported using caffe2 operators.
- Snapshot mechanism for tracking time-window counter values.
Reviewed By: dzhulgakov, pietern
Differential Revision: D4553761
fbshipit-source-id: 25a1a91a3168dcff2159c6fba7b357d3fd3aa9bf
Summary:
Remove the use of `NextName` in the layer model helper, so that the same function returns a `model_helper` that constructs an identical `Net` when under the same NameScope.
`NextScopedBlob` should only take effect when there is a real name conflict; otherwise it returns a ScopedBlobReference.
This is critical for parameter blobs. In the long run, we need to be able to specify parameter blobs more explicitly (kennyhorror is working on this). This solution works in the short term, e.g., for two-tower sparse NN models.
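A small illustration (my sketch, not from the diff) of the intended behavior: under the same NameScope, referring to the same parameter name should yield the same blob rather than a freshly numbered one:
```
from caffe2.python import core

with core.NameScope("tower"):
    w1 = core.ScopedBlobReference("fc_w")
    w2 = core.ScopedBlobReference("fc_w")

# Without NextName-style renaming, both references point to the same blob.
assert str(w1) == str(w2) == "tower/fc_w"
```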
Reviewed By: kennyhorror
Differential Revision: D4555423
fbshipit-source-id: 2c4b99a61392e5d51aa878f7346466a8f14be187
Summary:
Pass through the h-value recurrent output unchanged at each LSTM step beyond the valid part of a sequence (computed based on seqLengths, allowing batching of sequences of different lengths). This enables using the final-step output of each sequence as the output when one vector is desired for the entire sequence. The gradient is also passed back unchanged.
Also made some cosmetic changes to recurrent_network_test.py (seq_lengths offset corrected; it should be in [1, T] rather than [0, T-1]).
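A numpy sketch of what the pass-through gives us (illustrative values, not the operator code): once h is held constant past each sequence's last valid step, reading the final time step of the packed [T, N, D] output is the same as gathering each sequence's own last valid step:
```
import numpy as np

T, N, D = 5, 2, 3
seq_lengths = np.array([3, 5])
h = np.random.randn(T, N, D)

# Simulate the new behavior: hold h constant beyond each sequence's valid part.
for b, length in enumerate(seq_lengths):
    h[length:, b, :] = h[length - 1, b, :]

per_sequence_final = np.stack(
    [h[length - 1, b, :] for b, length in enumerate(seq_lengths)])
assert np.allclose(h[-1], per_sequence_final)
```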
Reviewed By: urikz
Differential Revision: D4540307
fbshipit-source-id: 73a9f6326069d713dcb0cdc8d17869317c6dbe96
Summary: This diff adds shape inference for the SoftmaxWithLoss Operator
Differential Revision: D4565835
fbshipit-source-id: 1c2db398524c765977ec4d8a22c9b986bf9faf82
Summary:
The reason I need a gradient for CopyOp can be found in this post: https://fb.facebook.com/groups/1405155842844877/permalink/1639683782725414/
The gradient for CopyOp is trivial when the device is the same (CPU, or the same GPU), but gets a little harder when the copy is made across two different GPUs.
I introduce a new operator, CopyOnDeviceLike, which has an additional second input. The op copies the first input to the same device as the second one. The default implementation is exactly the same as CopyOp, but I specialize it for CUDAContext.
Please let me know if I'm doing anything wrong here! This is my first caffe2 diff related to operator definitions.
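A minimal sketch of how the new op is wired up, as I understand the description (blob names are made up): the first input is the tensor to copy, the second only supplies the target device:
```
from caffe2.python import core

copy_op = core.CreateOperator(
    "CopyOnDeviceLike",
    ["activations_gpu0",   # tensor to copy
     "reference_gpu1"],    # any blob living on the destination device
    ["activations_gpu1"])
```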
Reviewed By: Yangqing
Differential Revision: D4557258
fbshipit-source-id: 9494be589cc1e5696bbbfe25b7622aaa4c9efe4a
Summary: As in headline. I had missed these originally.
Reviewed By: kennyhorror
Differential Revision: D4560255
fbshipit-source-id: e69458e8a2574b981e40e915d87c8e16dadee7d6
Summary:
(Caffe2) Modified the RecurrentNetworkGradient operator so that training is possible with any of the output blob(s) receiving gradient during the backward pass. This is realized through a new argument for the RecurrentNetwork op, outputs_with_grads, which takes a list of the indices of the output blobs that will receive gradient. The previous behavior (only receiving gradient from the first output blob) remains the default.
A new unit test covers the case where outputs_with_grads = [1, 2], using the Python LSTM wrapper.
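A hedged sketch of driving the new argument through the Python LSTM wrapper used in the unit test; the wrapper signature here is approximate and the blob names are placeholders:
```
from caffe2.python import cnn, recurrent

model = cnn.CNNModelHelper(name="lstm_grad_test")
lstm_outputs = recurrent.LSTM(
    model, "input", "seq_lengths", ("hidden_init", "cell_init"),
    dim_in=8, dim_out=16, scope="lstm",
    outputs_with_grads=[1, 2])  # indices of outputs that receive gradient
```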
Reviewed By: urikz
Differential Revision: D4518516
fbshipit-source-id: 5c531582b20f3cf727d1aa91239b4d5a2b8a7c1f
Summary:
The existing op transforms the input in a general way: it needs M transform mappings to transform an NxM input tensor.
But for binary predictions X (an Nx2 tensor), we know that X[:, 0] = 1 - X[:, 1].
So we just need one mapping for X[:, 1]; after it is transformed, we can compute X[:, 0].
This diff handles that case.
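A tiny numpy illustration of the identity this relies on; `calibrate` stands in for whatever learned mapping the op applies and is not part of the diff:
```
import numpy as np

def calibrate(p):
    # Stand-in for the single learned transform applied to X[:, 1].
    return p ** 0.9

X = np.array([[0.8, 0.2],
              [0.3, 0.7]])  # N x 2 binary predictions, rows sum to 1

pos = calibrate(X[:, 1])                     # one mapping, applied to X[:, 1]
X_out = np.stack([1.0 - pos, pos], axis=1)   # X[:, 0] recovered as 1 - X[:, 1]
```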
Differential Revision: D4550441
fbshipit-source-id: 42d8c6e88d830c97628ee930b543740a32acf904
Summary: This is like `UniformIntFill` but guaranteed to return unique elements in the output, excluding the optional elements to avoid.
Reviewed By: xianjiec
Differential Revision: D4511814
fbshipit-source-id: 5dc98ee580616e60e46ee74ebb3f5ddd29a09965
Summary: These operators update the state of the instance and therefore should have the instance in the output list.
Reviewed By: xianjiec
Differential Revision: D4554773
fbshipit-source-id: 556d484fcf58878308aa6b0f7cd7ea2446d3f29e
Summary:
Shape inference allows Caffe2 to compute shapes of blobs without running a model. Update InferShapesAndTypes() to accept an optional blob:dimensions map so that external input blobs do not need to be part of the workspace.
InferShapesAndTypes() in workspace.py conditionally calls the ...from_workspace or ...from_map bindings. Note I favored a small amount of code duplication here for the sake of readability. InferShapesAndTypes() in operator.cc has been refactored into mirrored entry points, invoking a common helper.
Other minor changes to address linter warnings.
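A sketch of the new entry point from Python, assuming the keyword is named blob_dimensions as described above (the exact name may differ in code):
```
from caffe2.python import core, workspace

net = core.Net("shape_example")
net.Relu("X", "Y")

# No need to feed "X" into the workspace; just describe its dimensions.
shapes, types = workspace.InferShapesAndTypes(
    [net], blob_dimensions={"X": [16, 128]})
print(shapes["Y"])  # expected: [16, 128]
```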
Reviewed By: dzhulgakov
Differential Revision: D4524873
fbshipit-source-id: 56f863b759c016d7f23523f06fda3aa5bba22357
Summary:
This is a rather large diff, sorry about that. It includes basic shape and type inference functionality, based on YQ's Schema scaffolding. I added some helper functions to make it easier to write simple translations.
A bigger refactoring was needed for ConvPoolBase so that we could use the shape inference already there in the schema.
I annotated enough operators to be able to infer forward-pass shapes for a basic convnet, and added a test for that. I intend to bootcamp some annotations and annotate enough to handle ResNets fully. Need to think about gradients and whether they could be annotated in an easier way.
Only shapes are exposed to Python for now; types will follow later. Also, the inference is not yet called anywhere except the unit test.
I am also not sure if everything is in the best location in the code, but it shouldn't be hard to move stuff around.
Reviewed By: dzhulgakov
Differential Revision: D4436818
fbshipit-source-id: eebee5937ccc9ac09c245465302388a1fae6933c
Summary: This allows saving the previous value of the counter and sending it upstream without losing counts.
Reviewed By: kennyhorror
Differential Revision: D4497854
fbshipit-source-id: 28a7ad0ff1020bde26f78b1f59614b094d1e1881
Summary:
I had forgotten to remove this one. The rest of the switch to indexing
instead of string names is coming after D4446813 lands, as scratches
aren't inputs or outputs and thus can't be indexed.
Reviewed By: urikz
Differential Revision: D4465748
fbshipit-source-id: 2ccbedfb35541ef4a2231d1480eef59025bd5290
Summary: On some inputs TestWarden was failing
Reviewed By: Yangqing
Differential Revision: D4487293
fbshipit-source-id: 3da4b310a619c2b57f033b2dd7727f71403bfd68
Summary: Looks like we don't do a good job with initial recurrent input gradients yet. Here is a partial fix; the gradient still doesn't check, but the shape is correct now.
Reviewed By: salexspb
Differential Revision: D4475447
fbshipit-source-id: 280f1f59f19e487fd0dce0d440609c50ddce294a
Summary: This diff uses stack workspaces in RecurrentNetwork, which allows us to simplify the implementation and get rid of scratches.
Reviewed By: salexspb
Differential Revision: D4446813
fbshipit-source-id: 514eec7e4300bdf492a9cb192b40cf4f89acf656
Summary:
We get flaky LSTM tests on a numerical gradient check. I
would like to improve the accuracy of the latter, but first need an
example. After landing this, TestWarden will find a bad input for me.
Reviewed By: urikz
Differential Revision: D4467223
fbshipit-source-id: 68d4bf22af11190f39fa28332c6d99efbb192132
Summary: Fixes segfaults that occur in Eigen and im2col/sgemm backends.
Reviewed By: Yangqing
Differential Revision: D4451772
fbshipit-source-id: 3cf21e5afb2fe300db4228933a82063db5f7091f
Summary: Remove usage of recurrent_sizes, so the recurrent states' sizes can depend on the input (as in the case of the attention matrix for the beam decoder). I removed recurrent_sizes from both the forward and backward steps.
Reviewed By: salexspb
Differential Revision: D4427688
fbshipit-source-id: 580420a294d309c86ec5cb4e677058623b7228e1
Summary:
In this diff I stop passing parameters by name and also remove the hardcoded output ids which were there specifically to make LSTM work. It also allows us to avoid using recurrent_sizes in the backward pass (for the forward pass this is done in D4427688).
Using a similar technique it should be simple enough to eliminate blob-name passing entirely. Then we can fix scoping. These can be done in a follow-up diff.
Reviewed By: urikz
Differential Revision: D4444614
fbshipit-source-id: 3580a76365502b9f2f09e3d8b7e78084ca739f00
Summary:
Let's have a test for this so we don't break existing use cases
while iterating on RecurrentOp's code.
Reviewed By: urikz
Differential Revision: D4456404
fbshipit-source-id: 79f2b88c1eed16106adf5b793b4c74441c7146c6
Summary:
Spatial softmax allows specifying locations that are not counted toward the loss. If none of the locations are counted, this resulted in NaNs and headaches. This diff fixes that by explicitly handling these cases.
Also added an assertion for the label blob's dimension(0).
Created a new test as well.
Differential Revision: D4442939
fbshipit-source-id: 8641bfad2a994e517ca3eda39345380a6ca1ba50
Summary:
In this diff:
[1] Change the output from generating all paths from root to labels to a TreeProto. The TreeProto itself is required by inference, and we can use hsm_util to get the paths from the TreeProto.
[2] Fix the hsm_util index assignment.
Differential Revision: D4416731
fbshipit-source-id: 657d8b9b4df6fa30c9f92d391cf7e07b5c5db1f8
Summary: Change the label indices to be in the range [0, num_classes).
Differential Revision: D4416685
fbshipit-source-id: b16ca8539fd538ad62bf1298dbad3f1553956241
Summary: DivOp was missing a gradient for CUDA, so this implements it. Also added an operator test.
Differential Revision: D4396638
fbshipit-source-id: 9949e47aa3735bb418a0db003e2b2f4896056a71
Summary:
Essentially, when the number of pairs is around 1000, the positive samples in the list get a massive boost from all the negative examples. This diff normalizes the gradient and the loss by the number of pairs.
This diff also adds protection against NaN and more logging to help debug.
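A back-of-the-envelope sketch of the normalization (not the operator code): both the loss and the gradient scale are divided by the number of pairs so long lists do not dominate:
```
import numpy as np

pair_losses = np.random.rand(1000)       # per-pair ranking losses
num_pairs = max(len(pair_losses), 1)     # guard against empty lists / NaN
loss = pair_losses.sum() / num_pairs     # normalized loss
grad_scale = 1.0 / num_pairs             # the same scaling applies to gradients
```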
Reviewed By: kdub0
Differential Revision: D4359782
fbshipit-source-id: 7240344ddb1f2f670d1eec1b03e7f6e413f3dfcc
Summary:
It used to be that only the cudnn engine supported convolution without bias; now it should be fully supported by any conv engine.
To ignore the bias, simply use a convolution op that has two inputs instead of three. The gradient operator will automatically figure out that it does not need to compute the bias gradient.
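A minimal sketch of the convention described, using made-up blob names: a Conv op with two inputs (X, W) instead of three (X, W, b) simply runs without bias:
```
from caffe2.python import core

conv_no_bias = core.CreateOperator(
    "Conv", ["X", "W"], ["Y"],   # no bias input
    kernel=3, stride=1, pad=1, order="NCHW")
```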
Reviewed By: prigoyal
Differential Revision: D4354183
fbshipit-source-id: cf71b6289a254d15a6a663a85df63fbbaec3702b
Summary:
This is a first step in improving our RNN story. It provides a wrapper around the current RecurrentNetworkOp implementation which infers most of the redundant parameters and makes the API much simpler.
Also, in order to support general step nets, I added an extra argument to RecurrentNetworkOp.
Future work:
1. Infer step-net output and internal blob (scratch) sizes and types.
2. Avoid accessing blobs by names in the C++ part.
3. Remove the requirement for 1:1 input/output correspondence in the step net.
4. Make the Python API support networks with operators like Sum on the border of the cell net (currently there is an issue with such networks where gradient blobs on the side are not explicitly created).
Differential Revision: D4268503
fbshipit-source-id: f8a66491c2b55daa730caeed7e9f2b3921541b49