pytorch/caffe2/python/layers
Jiyan Yang 714344a976 Specify to use Float16UniformFill if necessary in sparse lookup layer (#18499)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18499

If the init op is not fp16 compatible, the sparse lookup layer should throw.
However, in the special case where the original init op is UniformFill,
we replace it with Float16UniformFill.
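
A minimal sketch of the substitution rule described above; the helper name and
the string-based op representation are illustrative assumptions, not the actual
caffe2 internals in sparse_lookup.py:

    # Hypothetical sketch of the fp16 init-op handling (names are illustrative).
    FP16_COMPATIBLE_INIT_OPS = {"Float16UniformFill"}

    def adjust_init_op_for_fp16(init_op_type):
        """Return an fp16-compatible init op type, or raise if none exists."""
        if init_op_type in FP16_COMPATIBLE_INIT_OPS:
            return init_op_type
        if init_op_type == "UniformFill":
            # Special case: UniformFill has a direct fp16 counterpart.
            return "Float16UniformFill"
        raise ValueError(
            "Init op {} is not fp16 compatible".format(init_op_type))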

Reviewed By: kennyhorror

Differential Revision: D14627209

fbshipit-source-id: eb427772874a732ca8b3a25d06670d119ce8ac14
2019-04-23 10:14:08 -07:00
__init__.py
adaptive_weight.py
add_bias.py
arc_cosine_feature_map.py
batch_distill_lr_loss.py
batch_lr_loss.py try to enable uncertainty for lr loss (#17236) 2019-04-11 07:35:19 -07:00
batch_mse_loss.py
batch_normalization.py
batch_sigmoid_cross_entropy_loss.py
batch_softmax_loss.py
blob_weighted_sum.py
bucket_weighted.py Implement bucket-based attention pooling for IdScoreList features (#13004) 2018-10-25 18:04:08 -07:00
build_index.py
concat.py
constant_weight.py
conv.py
dropout.py add dropout during eval (#17549) 2019-02-28 23:21:29 -08:00
fc.py fc layer accept axis argument (#13822) 2018-11-11 13:44:57 -08:00
fc_without_bias.py
feature_sparse_to_dense.py Revert D13551909: [fbcode] logdevice for generic feature type 2019-01-25 00:33:06 -08:00
functional.py
gather_record.py
homotopy_weight.py
label_smooth.py
last_n_window_collector.py
layer_normalization.py
layers.py
margin_rank_loss.py
merge_id_lists.py
pairwise_similarity.py
position_weighted.py
random_fourier_features.py
reservoir_sampling.py
sampling_train.py
sampling_trainable_mixin.py
select_record_by_context.py
semi_random_features.py
sparse_feature_hash.py
sparse_lookup.py Specify to use Float16UniformFill if necessary in sparse lookup layer (#18499) 2019-04-23 10:14:08 -07:00
split.py
tags.py
uniform_sampling.py