pytorch/caffe2/python/layers
sf-wind 602a09dde7 Update caffe2 from facebook 4f527ef46abf (#2234)
* [GanH]: two_task_discriminator

as titled

and adding label smoothing

* [Dper2] Simplified UI options needed for blob magnitude visualization

* [GanH]: fix tags

as titled

* Added type and shape inference for GatherRange operator

This helps with type/shape inference when using this operator in layers.
Also a nice-to-have in general.

* Demonstrate Caffe2 exception handling with StoreHandlerTimeoutError in Python

We'd like to catch and recover from certain Caffe2 net exceptions. This diff demonstrates a pattern of registering a pybind exception mapping and catching it in Python, using caffe2::StoreHandlerTimeoutException.
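A minimal sketch of the recovery pattern this enables. The class below is a pure-Python stand-in for the pybind-registered exception (the real type is mapped from the C++ caffe2::StoreHandlerTimeoutException), and `run_net_with_retry` is an illustrative helper, not an actual Caffe2 API:

```python
class StoreHandlerTimeoutError(RuntimeError):
    """Stand-in for the C++ exception mapped into Python via pybind."""

def run_net_with_retry(run_once, max_attempts=3):
    # Retry the net run on store-handler timeouts (e.g. a peer is slow
    # to join the rendezvous) instead of crashing the whole trainer.
    for attempt in range(1, max_attempts + 1):
        try:
            return run_once()
        except StoreHandlerTimeoutError:
            if attempt == max_attempts:
                raise
```

The point of the diff is that the C++ exception surfaces as a catchable Python type at all; what the caller does in the `except` block (retry, re-rendezvous, abort) is application policy.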

* Bind Gloo IoException to IoError in Python

Allow peer failure handling and recovery using an exception-based mechanism. This diff registers gloo::IoException with pybind.

* [GanH]: add label smoothing to softmax with loss

as titled
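Label smoothing in its standard form, as a sketch (not the operator's exact implementation): blend the one-hot target with a uniform distribution over the K classes before computing the softmax cross-entropy.

```python
import numpy as np

def smooth_labels(one_hot, epsilon):
    # y_smooth = (1 - eps) * y + eps / K, so each row still sums to 1
    # but no class gets probability exactly 0 or 1.
    k = one_hot.shape[-1]
    return one_hot * (1.0 - epsilon) + epsilon / k
```

Smoothed targets keep the cross-entropy gradient bounded and discourage the model from becoming over-confident on the training labels.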

* [C2] Enable LARS in Adagrad and hook it to DPER

* [DPER] Don't pass LayerModelHelper in create_trainer_nodes

Since we're planning to get rid of it eventually, and I want access to a
NetDef-only interface ASAP, I'm removing all references to LMH where we
don't really need them.

* fix bugs in LambdaRankNdcgOp

The loss and gradient in LambdaRankNdcgOp are incorrect: the loss should be the negative log of the probabilities, not the log.
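A sketch of the fix using the RankNet-style pairwise probability (names and the `delta_ndcg` weighting here are illustrative, not the operator's exact signature):

```python
import math

def pairwise_lambda_loss(score_hi, score_lo, delta_ndcg=1.0):
    # Probability that the higher-relevance doc outranks the lower one.
    p = 1.0 / (1.0 + math.exp(-(score_hi - score_lo)))
    # The corrected loss is -log(p): log(p) alone is <= 0 and, when
    # minimized, would reward rankings that make p small.
    return -delta_ndcg * math.log(p)
```

With the sign fixed, the loss is nonnegative and shrinks as the score margin between the correctly ordered pair grows.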

* Restrict thread pool on iOS to only big cores

Historically, iPhones exposed only one type of core, and the Caffe2 thread pool used all of them.
However, iPhone 8/iPhone X expose 2 big + 4 LITTLE cores. As our thread pool doesn't support work stealing or other forms of load balancing, the fast cores end up waiting for the slow ones, so it may be better to restrict execution to the 2 fast cores only, as we do on Android.

* Remove SparseLength Sum/WeightedSum/Mean operators with fp16 engine


* make clang happy and get fewer warnings


* [Personalization] Support add_output_schema() in layer_model_helper

Problem:
Currently the output_schema of sparse_nn can only be set once: https://fburl.com/efth5zer

Solution:
For flexibility, we want to add fields to output_schema incrementally.

Plan:
Wrap changes to `model._output_schema` in a new function, `add_output_schema()`, for adding additional fields to the output_schema.

Callsite:
`add_output_schema()` should be called instead at https://fburl.com/efth5zer

Reference:
The newly added `add_output_schema()` will be similar to `add_loss()` in https://fburl.com/t2ii8njh
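A minimal sketch of the plan in plain Python (the real output_schema is a caffe2 `schema` record on LayerModelHelper; the class and field names below are stand-ins):

```python
from collections import OrderedDict

class LayerModelHelperSketch:
    def __init__(self):
        # Accumulated output fields instead of a set-once attribute.
        self._output_schema = OrderedDict()

    def add_output_schema(self, name, record):
        # Mirrors add_loss(): extend the output schema incrementally,
        # refusing to silently overwrite an existing field.
        if name in self._output_schema:
            raise ValueError("output_schema field %r already set" % name)
        self._output_schema[name] = record
```

Callers can then register outputs from several places during model construction, rather than assembling the whole schema at one call site.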
2018-03-12 12:22:59 -07:00
__init__.py Re-license to Apache 2017-09-28 16:22:00 -07:00
add_bias.py Re-license to Apache 2017-09-28 16:22:00 -07:00
arc_cosine_feature_map.py Re-license to Apache 2017-09-28 16:22:00 -07:00
batch_distill_lr_loss.py Distill loss with SigmoidCrossEntropyWithLogits 2017-10-26 15:18:34 -07:00
batch_lr_loss.py Update caffe2 from facebook 4f527ef46abf (#2234) 2018-03-12 12:22:59 -07:00
batch_mse_loss.py Support regression with output transform in MTML for feed 2017-12-11 17:20:20 -08:00
batch_normalization.py Re-license to Apache 2017-09-28 16:22:00 -07:00
batch_sigmoid_cross_entropy_loss.py Re-license to Apache 2017-09-28 16:22:00 -07:00
batch_softmax_loss.py Update caffe2 from facebook 4f527ef46abf (#2234) 2018-03-12 12:22:59 -07:00
build_index.py Re-license to Apache 2017-09-28 16:22:00 -07:00
concat.py testPairwiseDotProduct 2018-01-26 11:33:08 -08:00
conv.py Re-license to Apache 2017-09-28 16:22:00 -07:00
dropout.py Re-license to Apache 2017-09-28 16:22:00 -07:00
fc.py add error msg in fc input_record 2018-01-23 14:48:15 -08:00
fc_without_bias.py Re-license to Apache 2017-09-28 16:22:00 -07:00
feature_sparse_to_dense.py Use EMBEDDING feature type instead of FLOAT_TENSOR 2017-10-11 13:50:03 -07:00
functional.py Make TypeInference work for HalfToFloat & FloatToHalf. 2018-02-08 15:33:43 -08:00
gather_record.py Re-license to Apache 2017-09-28 16:22:00 -07:00
last_n_window_collector.py Re-license to Apache 2017-09-28 16:22:00 -07:00
layers.py add error msg in get_key 2018-01-23 11:04:05 -08:00
margin_rank_loss.py Add rank loss for retrieval models with random negative sample 2017-10-25 16:19:41 -07:00
merge_id_lists.py Re-license to Apache 2017-09-28 16:22:00 -07:00
pairwise_dot_product.py testPairwiseDotProduct 2018-01-26 11:33:08 -08:00
position_weighted.py Revert D6026557: [caffe2][PR] Fix "No handlers could be found for logger" 2017-10-12 20:21:52 -07:00
random_fourier_features.py Re-license to Apache 2017-09-28 16:22:00 -07:00
reservoir_sampling.py Re-license to Apache 2017-09-28 16:22:00 -07:00
sampling_train.py Re-license to Apache 2017-09-28 16:22:00 -07:00
sampling_trainable_mixin.py Re-license to Apache 2017-09-28 16:22:00 -07:00
select_record_by_context.py Update caffe2 from facebook 4f527ef46abf (#2234) 2018-03-12 12:22:59 -07:00
semi_random_features.py Re-license to Apache 2017-09-28 16:22:00 -07:00
sparse_feature_hash.py [c2] update SparseFeatureHash layer 2018-02-26 10:26:25 -08:00
sparse_lookup.py Update caffe2 from facebook 4f527ef46abf (#2234) 2018-03-12 12:22:59 -07:00
split.py Re-license to Apache 2017-09-28 16:22:00 -07:00
tags.py Allow custom component tagging in DeviceOptions.node_name 2018-02-13 11:14:41 -08:00
uniform_sampling.py Re-license to Apache 2017-09-28 16:22:00 -07:00