Summary:
Layer to allow the model to follow different paths for each instantiation context and join them later. Together with the tagging system cleanup (this is a separate issue), this should reduce the need to write a layer just to differentiate between contexts.
Re: tagging system cleanup, we should make exclusion more explicit: EXCLUDE_FROM_<CONTEXT>. This would simplify instantiation code. TRAIN_ONLY should become the set of all EXCLUDE_FROM_*, except EXCLUDE_FROM_TRAIN.
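A minimal sketch of the proposed tag sets, with hypothetical contexts; the tag values below are illustrative, not existing constants:
```python
# One exclusion tag per instantiation context.
EXCLUDE_FROM_TRAIN = 'exclude_from_train'
EXCLUDE_FROM_EVAL = 'exclude_from_eval'
EXCLUDE_FROM_PREDICTION = 'exclude_from_prediction'

ALL_EXCLUSIONS = {EXCLUDE_FROM_TRAIN, EXCLUDE_FROM_EVAL, EXCLUDE_FROM_PREDICTION}

# Excluded from every context except training.
TRAIN_ONLY = ALL_EXCLUSIONS - {EXCLUDE_FROM_TRAIN}
```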
Reviewed By: kennyhorror
Differential Revision: D4964949
fbshipit-source-id: ba6453b0deb92d1989404efb9d86e1ed25297202
Summary: Previously, the code below would go out of bounds.
Reviewed By: xianjiec
Differential Revision: D4968037
fbshipit-source-id: 3760e2cddc919c45d85ac644ac3fabf72dbaf666
Summary: Current eval nets contain loss operators (see example: https://fburl.com/6otbe0n7), which are unnecessary. This diff removes them from the eval net.
Differential Revision: D4934589
fbshipit-source-id: 1ba96c20a3a7ef720414acb4124002fb54cabfc7
Summary: A layer that takes raw ids as inputs and outputs indices that can be used as labels. The mapping will be stored with the model.
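A plain-Python sketch of the mapping idea; the real layer does this with operators and persists the mapping as a model blob:
```python
class IdToIndexMap(object):
    def __init__(self):
        self._map = {}

    def lookup(self, raw_ids):
        # Assign each previously unseen raw id the next dense index.
        for raw_id in raw_ids:
            if raw_id not in self._map:
                self._map[raw_id] = len(self._map)
        return [self._map[raw_id] for raw_id in raw_ids]
```
For example, lookup([42, 7, 42]) returns [0, 1, 0].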
Reviewed By: kittipatv
Differential Revision: D4902556
fbshipit-source-id: 647db47b0362142cdba997effa2ef7a5294c84ee
Summary: Added a new context to layers.py.
Reviewed By: kennyhorror
Differential Revision: D4817124
fbshipit-source-id: 36f08964b86092e81df24c1b9d4b167293a7ffb8
Summary:
Currently, the functional layer infers the output types and shapes by running the operator once.
But in cases where special input data are needed to run the operator, the inference may fail.
This diff allows the caller to manually specify the output types and shapes when auto inference may fail.
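A sketch of the intended usage, assuming the functional-layer wrapper accepts an explicit output schema; the record below is illustrative:
```python
import numpy as np
from caffe2.python import schema

# Instead of letting the wrapper run the operator once to infer types and
# shapes, the caller declares the output record up front and passes it to
# the functional layer call in place of inference.
manual_output_schema = schema.Struct(
    ('probabilities', schema.Scalar((np.float32, (10,)))),
)
```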
Reviewed By: kennyhorror
Differential Revision: D4864003
fbshipit-source-id: ba242586ea384f76d745b29a450497135717bdcc
Summary: Having to pack the input into a schema doesn't make much sense, since the structure is not recognized by operators anyway.
Differential Revision: D4895686
fbshipit-source-id: df78884ed331f7bd0c69db4f86c682c52829ec76
Summary: Perform gather on the whole record. This will be used for negative random sampling.
Reviewed By: kennyhorror
Differential Revision: D4882430
fbshipit-source-id: 19e20f7307064755dc4140afb5ba47a699260289
Summary:
The basic idea of bucket-based calibration:
1. given a model and a calibration data set,
2. apply the model to the calibration data set and sort the prediction scores,
3. bucketize the prediction scores,
4. for the samples in each bucket, compute the proportion of positive samples,
5. build a set of piecewise linear functions that map from the bucket range to the proportion,
6. append a piecewise linear transform operator to the prediction net, which calibrates the raw predictions,
7. to support calibration in realtime training, create a new type of Net -- the bucket calibration net. This needs a new Context for add_calibration_ops(), plus support to export and load the new Net.
This includes a series of diffs.
This diff implements a layer that adds different operators for train/calibration/eval for bucket-based calibration.
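An offline sketch of steps 2-5 in plain numpy, assuming equal-frequency buckets (the real layer builds this with Caffe2 operators):
```python
import numpy as np

def fit_bucket_calibration(scores, labels, num_buckets=20):
    scores, labels = np.asarray(scores), np.asarray(labels)
    # Step 2: sort the prediction scores (labels follow the same order).
    order = np.argsort(scores)
    scores, labels = scores[order], labels[order]
    # Step 3: bucketize into equal-frequency buckets.
    buckets = np.array_split(np.arange(len(scores)), num_buckets)
    # Steps 4-5: the per-bucket positive proportion becomes the knot value of
    # a piecewise linear map from score range to calibrated probability.
    bounds = [(scores[b[0]], scores[b[-1]]) for b in buckets]
    positive_rate = [labels[b].mean() for b in buckets]
    return bounds, positive_rate
```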
Reviewed By: dragonxlwang
Differential Revision: D4817119
fbshipit-source-id: 44f8fcad2a94f40f7439cc1ad47e7bae5e17397d
Summary: Somehow, feed non-ranking training data usually has this type of column. Add an option to support it.
Reviewed By: xianjiec, kennyhorror
Differential Revision: D4773960
fbshipit-source-id: 5a7ef4618a070e04f3cd8ddfcbf2b7441c00d92d
Summary:
multiple places broken, blocking the push :(
- fix the weighted training for ads and feeds
- fix the publishing if no exporter model is selected
- fix the feeds retrieval evaluation
- add the default config for retrieval workflows; plan to use it for flow tests (in the next diff)
- clean up unused code
- use a smaller hash size for faster canary tests
Reviewed By: chocjy
Differential Revision: D4817829
fbshipit-source-id: e3d407314268b6487c22b1ee91f158532dda8807
Summary:
This diff does the following:
1. Add optimization options to the model options in the UI for all workflows.
2. Allow different parameters to use different optimizers (or the same optimizer with different settings, e.g., learning rate); see the sketch after this list.
3. Remove the default value for the `sparseDedupAggregator` field in the thrift file, as the default value for that should just be `None` instead of 'sum'.
4. Deprecate `fb/dper/layer_models/mlp_sparse.py`.
5. Add calibration to two-tower workflows.
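A sketch of per-parameter optimizers (item 2). The optimizer classes are from caffe2.python.optimizer; the mapping-by-name config surface is an assumption, not the exact DPer2 API:
```python
from caffe2.python.optimizer import AdagradOptimizer, SgdOptimizer

# Hypothetical per-parameter mapping: dense weights use plain SGD, while the
# sparse embedding table gets Adagrad with its own learning rate.
param_optimizers = {
    'dense/fc_w': SgdOptimizer(base_learning_rate=0.01),
    'sparse_lookup/w': AdagradOptimizer(alpha=0.05, epsilon=1e-4),
}
```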
Reviewed By: kittipatv
Differential Revision: D4767004
fbshipit-source-id: de92ea63fb0ff33f8581b1693479b723a68cd2d1
Summary:
Add distributed training to dper2 and keep dper1 working.
* Created a ModelDelegator to wrap ModelHelper and LayerModelHelper and mitigate the difference between them; see the sketch after this list.
* To get the average length for sparse features, I extracted some information in feature_processor. There should be a better way to do it after we have the new compute_meta.
* Metrics right now only run on the first trainer.
* The model is saved correctly for evaluation, but I'm still not sure how to handle the weights for adagrad.
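An illustrative sketch of the delegator idea; the real ModelDelegator wraps more than attribute access, this just shows the shape of it:
```python
class ModelDelegator(object):
    """Forwards calls to whichever helper is in use, so calling code does
    not need to branch on ModelHelper vs. LayerModelHelper."""

    def __init__(self, model):
        self._model = model  # a ModelHelper or LayerModelHelper instance

    def __getattr__(self, name):
        # Fall back to the wrapped helper for anything not overridden here.
        return getattr(self._model, name)
```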
Reviewed By: kennyhorror
Differential Revision: D4767745
fbshipit-source-id: 0559d264827a7fd9327071e8367d1e84a936bea9
Summary:
Adding support for multilabel in the multiclass workflow. `input_feature_schema` and `trainer_extra_schema` are now functions that take in the preprocessor option and output the schema. This allows dynamic schema definition based on the option (see the sketch below).
Changing the default value will be in the next diff.
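A sketch of the schema-as-a-function idea; the field name and the multilabel option attribute are illustrative:
```python
import numpy as np
from caffe2.python import schema

def input_feature_schema(preproc_options):
    # Multilabel uses a variable-length list of label ids; single-label
    # keeps one class id per example.
    labels = (
        schema.List(schema.Scalar(np.int64))
        if preproc_options.multilabel
        else schema.Scalar(np.int64)
    )
    return schema.Struct(('labels', labels))
```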
Reviewed By: xianjiec
Differential Revision: D4750064
fbshipit-source-id: 896143f432e963bc1723c0153749efeb39a83bec
Summary: This layer will be used to sample negative labels for sampled softmax.
Differential Revision: D4773444
fbshipit-source-id: 605a979c09d07531293dd9472da9d2fa7439c619
Summary:
This diff is adding eval nets to the layer model helper. It should be useful for
the cases when train/eval nets need some extra input (usually some supervision),
for example various sampled layers.
Differential Revision: D4769453
fbshipit-source-id: 7a8ec7024051eab73b8869ec21e20b5f10fd9acb
Summary:
`SamplingTrain` layer is a wrapper around another layer subclassing `SamplingTrainableMixin`. When instantiated in the training context, `SamplingTrain` produces the sparse output of the wrapped layer; that output can be paired with `indices` to create a Map schema. When instantiated in the prediction context, the full output of the wrapped layer is produced.
This is like the SampledFC function in the model helper, https://fburl.com/gi9g1awh, with the ability to be instantiated in both training and prediction contexts.
I'd like to get consensus on whether we should introduce the `SamplingTrain` layer and the accompanying mixin. This can probably be accomplished in some other way, but I think this is not too bad.
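An illustrative sketch of the wrapper contract; the method names below are assumptions, not the exact mixin API:
```python
class SamplingTrainableMixin(object):
    # Subclasses implement both paths; the wrapper picks one per context.
    def add_train_ops(self, net, indices):
        raise NotImplementedError  # sparse output for the sampled indices only

    def add_full_ops(self, net):
        raise NotImplementedError  # full output for prediction

class SamplingTrain(object):
    def __init__(self, wrapped_layer, is_training):
        self.wrapped = wrapped_layer
        self.is_training = is_training

    def add_ops(self, net, indices=None):
        if self.is_training:
            self.wrapped.add_train_ops(net, indices)  # pair with indices -> Map
        else:
            self.wrapped.add_full_ops(net)
```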
Reviewed By: xianjiec
Differential Revision: D4689887
fbshipit-source-id: 7be8a52d82f3a09a053378146262df1047ab26a8
Summary:
currently the output schema and blobs are named "field_i", which is
bad for debugging. This diff allows us to specify output names.
Reviewed By: kennyhorror
Differential Revision: D4744949
fbshipit-source-id: 8ac4d3c75cacbb4c9b5f55793ac969fe1cf20467
Summary: Created `BatchDistillLRLoss` layer and added support for it in DPer2.
Differential Revision: D4718333
fbshipit-source-id: b873954ea704daafed94ac65fef47a20d56858e2
Summary:
1. migrate the basic mtml model to dper 2
2. test dper 2 mtml model
3. test all optimizers
Reviewed By: kittipatv
Differential Revision: D4680215
fbshipit-source-id: 7aac5c59bdac22fcad8ed869b98e9e62dca1d337
Summary: A layer that takes a (label, prediction) pair and outputs the L2 loss.
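A numpy sketch of the computed quantity; whether the layer averages or sums over the batch is an assumption here (averaging shown):
```python
import numpy as np

def l2_loss(labels, predictions):
    # Mean squared difference between predictions and labels.
    return np.mean((labels - predictions) ** 2)
```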
Reviewed By: kittipatv
Differential Revision: D4702111
fbshipit-source-id: 09f2ede44d1b548e61096de741f1b2aa0b66bbcb
Summary: For some embedding tasks, we don't want to include a bias term in the embedding computation.
Reviewed By: xianjiec
Differential Revision: D4689620
fbshipit-source-id: 4168584681d30c0eaa1d17ceaf68edda11924644
Summary: Some operators, e.g., SoftmaxWithLoss, return scalar-typed tensors. This would allow us to use those ops without having to write a layer manually.
Reviewed By: xianjiec, kennyhorror
Differential Revision: D4703982
fbshipit-source-id: f33969971c57fc037c9b44adb37af1caba4084b6
Summary:
Otherwise the blob will be in a different namescope, e.g., `_nested`: https://fburl.com/ntlsaezv.
This makes TensorBoard ugly.
Reviewed By: dzhulgakov
Differential Revision: D4696946
fbshipit-source-id: 73627feccd7c4896964e6c549b7241bcce4f49a7
Summary: The sum processor and sqrt pooling are added to mimic the DoubleHelix model.
Differential Revision: D4678413
fbshipit-source-id: fc1ccfe3c92c540ce5914dfd8ff1a040805c48db
Summary: Add a SparseNN workflow for feed. I haven't fully thought about the changes needed for ads, as I added a property called 'preproc_output_schema' to LayerModelHelper.
Reviewed By: xianjiec
Differential Revision: D4585796
fbshipit-source-id: 060d08f4beb928e7e7863f2e563f612c358951fb
Summary:
Previously, the fp16 type was supported only in the SparseLengthsSum operator; now it
works in all the other segment operators as well.
Reviewed By: dzhulgakov
Differential Revision: D4624312
fbshipit-source-id: c9d72110e3762167270bb088405eaf9c56e88493
Summary:
This diff is trying to address one of the concerns that Xianjie has had: the requirement to create a layer for every operator and to pass shapes and other info around.
The basic idea of the diff:
1. Try to create a layer with the given name; if none is available, fall back on an operator with that name (which is expected to have no parameters).
2. For all operators added through this functional style of creation, try to use the C2 shape/type inference logic to get the output type. If that fails, return an untyped record and expect the user to annotate it when it's really needed.
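A self-contained sketch of the fallback in step 1; the registries below stand in for the real layer/operator lookups:
```python
# Hypothetical registries mapping names to factories.
LAYER_REGISTRY = {}     # name -> layer factory
OPERATOR_REGISTRY = {}  # name -> parameter-free operator factory

def create_layer_or_op(name, *args, **kwargs):
    if name in LAYER_REGISTRY:
        # Preferred path: a real layer is registered under this name.
        return LAYER_REGISTRY[name](*args, **kwargs)
    if name in OPERATOR_REGISTRY:
        # Fallback: wrap the parameter-free operator; shape/type inference
        # types the output, or the record stays untyped for the user to
        # annotate later.
        return OPERATOR_REGISTRY[name](*args, **kwargs)
    raise KeyError('Unknown layer or operator: %s' % name)
```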
Reviewed By: xianjiec
Differential Revision: D4408771
fbshipit-source-id: aced7487571940d726424269970df0eb62670c39
Summary: We may not need dense feature inputs in some models (e.g., double helix).
Reviewed By: dzhulgakov
Differential Revision: D4568755
fbshipit-source-id: 6850508f86fafb53f81783b2a2a38776be5455d7
Summary: Another part of making DPER compatible with half-floats. This diff adds support for fp16 to the segment reduction operators used in DPER.
Reviewed By: dzhulgakov
Differential Revision: D4587560
fbshipit-source-id: 0ae10648a7286a820bffaee802464dd9464584bc
Summary:
First part of adding half-float support to DPER 2.0. Let's add an option, use_half_floats, to enable converting some weights of the model from fp32 to fp16 before saving them to the predictor model parts. For now it covers the SparseLookup layer's embeddings. All conversion is done after training is finished, and the saved models are ready to be used on remote predictors as-is (they will be stored compacted in memory). The new fp16 blobs are saved to the model in place of the original ones, under the same names, so we don't modify the MetaNetDef at all. A sketch of the conversion follows the list of next steps.
Next steps:
1) support on delivery side -- operators working with these blobs should support both float and float16 input types
2) benchmark performance to make sure there is no regression
a) of serialization
b) of delivery
3) support realtime training (I'm thinking about adding a new pre-publishing net that will be executed each time the realtime trainer stops to publish a new snapshot)
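A minimal sketch of the post-training conversion. workspace.FetchBlob/FeedBlob are real Caffe2 calls; the list of blob names to convert is a hypothetical input:
```python
import numpy as np
from caffe2.python import workspace

def compact_blobs_to_fp16(blob_names):
    for name in blob_names:
        original = workspace.FetchBlob(name)
        # Re-feed under the same name, so the MetaNetDef needs no changes.
        workspace.FeedBlob(name, original.astype(np.float16))
```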
Depends on D4567304
Reviewed By: kennyhorror
Differential Revision: D4571710
fbshipit-source-id: 19967a17d3bd84878d66e8c0ed8c5342bf38d979
Summary: Do I understand correctly? It must be of size 1 for sigrid
Reviewed By: kennyhorror
Differential Revision: D4576541
fbshipit-source-id: 92fa8dc62e36ff095e14cceeb80b03c0028f5695
Summary:
Remove the use of `NextName` in the layer model helper, so that the same function returns a `model_helper` that constructs an identical `Net` when run under the same NameScope.
`NextScopedBlob` should only take effect when there is a real name conflict; otherwise it returns a ScopedBlobReference.
This is critical for parameter blobs. In the long run, we need to be able to specify parameter blobs more explicitly (kennyhorror is working on this). This solution works in the short term, e.g., for two-tower sparse nn models.
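A sketch of the intended naming rule (an illustrative helper, not the actual `NextScopedBlob` implementation):
```python
def next_scoped_name(used_names, name):
    # Deterministic: reuse the plain name unless it is actually taken, so
    # identical construction code yields identical blob names.
    if name not in used_names:
        used_names.add(name)
        return name
    suffix = 1
    while '%s_%d' % (name, suffix) in used_names:
        suffix += 1
    resolved = '%s_%d' % (name, suffix)
    used_names.add(resolved)
    return resolved
```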
Reviewed By: kennyhorror
Differential Revision: D4555423
fbshipit-source-id: 2c4b99a61392e5d51aa878f7346466a8f14be187
Summary:
Until this moment, the DPer examples have been creating multiple copies of the transform config in the net definition, which made me hit the ProtoBuf limit (64 MB) for certain Task requests (especially visible because of the ValidationPipeline that I was adding).
After this diff we're going to store SigridTransforms in one instance per machine for training (or one instance per reader).
The difference in plan sizes for a simple SparseNN model is ~30 MB (even including the fact that the second model has a validation plan as well).
TODO: Do similar logic for NNPreProc as well (it's also pretty large).
Reviewed By: dzhulgakov
Differential Revision: D4441441
fbshipit-source-id: 4452dd86a4dc49b2c7f5b7642f443aed5720b047
Summary:
It looks like for the types that are created directly through a type(...)
function call, we don't store strong references anywhere. As a result,
a GC pass in Python might or might not clean up these classes, depending on the
phase of the moon and other random things. This means that in some
cases simple layers such as Relu might disappear.
cat_shame
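A sketch of the fix: keep a strong, module-level reference to every dynamically created class (the registry here is hypothetical):
```python
_LAYER_CLASS_REGISTRY = {}

def make_layer_class(name, bases, attrs):
    cls = type(name, bases, attrs)
    # Without this entry nothing holds a strong reference to cls, so a
    # garbage-collection pass may silently drop the layer class.
    _LAYER_CLASS_REGISTRY[name] = cls
    return cls
```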
Reviewed By: xianjiec
Differential Revision: D4396289
fbshipit-source-id: ba4e9b7ef54ee43349853b0acc3d3f40c74e4d73
Summary: As title. We want to have a request_only net that runs on user_only sparse features. Submitting to get early feedback.
Reviewed By: dzhulgakov
Differential Revision: D4282783
fbshipit-source-id: 71241bf5444550075884c788c2da4783659bc1e0
Summary:
We want to implement a request-only net, and to do this we decided to split the work into two parts. The first part will propagate the required metadata, and the second part will cut the nets properly.
This diff propagates the request_only metadata across the layers.
A few notes about the implementation:
- Each layer contains a field request_only, which can be set based on the input_record. If all the scalars from the input_record are marked request_only, we mark the layer as request_only (see the sketch after this list);
- The Sparse-To-Dense layer sets the request_only metadata;
- The SigridTransformation and SparseLookup layers propagate request_only status;
- For now we join request_only and other sparse features together in the input_record, but ideally we may want to separate them, because request_only features should be served separately;
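A minimal sketch of the propagation rule from the first note, assuming each scalar carries a metadata object with a request_only flag (attribute names are illustrative):
```python
def infer_request_only(input_record):
    scalars = input_record.all_scalars()
    # A layer is request_only only when every input scalar is marked so.
    return bool(scalars) and all(
        s.metadata is not None and getattr(s.metadata, 'request_only', False)
        for s in scalars
    )
```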
Reviewed By: xianjiec
Differential Revision: D4259505
fbshipit-source-id: db8a30ef92cba84f1a843981b9dde3a8b9633608