pytorch/caffe2/python/layers
Benny Chen d23d62cb1e Fix unaries to export fp16 instead of fp32 when rest of the model export to int8
Summary: Accelerators currently have no concept of fp32; they only understand fp16 and int8 as data input. To fix this, we make sure unaries are converted to fp16 when the int8 exporter is turned on.
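
For illustration, a minimal sketch of the idea, assuming a hypothetical helper and an int8_export_enabled flag (this is not the literal code in this diff):

    from caffe2.python import core

    def export_unary_output(net, blob_in, blob_out, int8_export_enabled):
        """Cast a unary layer's output for accelerator export (illustrative)."""
        if int8_export_enabled:
            # Accelerators accept only fp16 or int8 inputs, so the unary
            # output is emitted as fp16 rather than fp32.
            return net.Cast(blob_in, blob_out, to=core.DataType.FLOAT16)
        # Default export path keeps the fp32 output.
        return net.Cast(blob_in, blob_out, to=core.DataType.FLOAT)

    # Example: build a net and cast a unary's output for int8 export.
    net = core.Net("unary_export")
    export_unary_output(net, "relu_out_fp32", "relu_out", int8_export_enabled=True)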

Reviewed By: kennyhorror

Differential Revision: D17743791

fbshipit-source-id: 7322d23eb12ac3f813b525fc0ddd066f95c8ca85
2019-10-14 10:51:17 -07:00
__init__.py
adaptive_weight.py
add_bias.py
arc_cosine_feature_map.py
batch_huber_loss.py
batch_lr_loss.py Exponential decay of the weight of task loss (#27508) 2019-10-08 09:15:41 -07:00
batch_mse_loss.py
batch_normalization.py
batch_sigmoid_cross_entropy_loss.py
batch_softmax_loss.py
blob_weighted_sum.py
bpr_loss.py Add BPR loss to TTSN (#24439) 2019-08-15 23:20:15 -07:00
bucket_weighted.py add feature name into module and update position weighted to match dper2 2019-10-14 08:06:19 -07:00
build_index.py
concat.py
constant_weight.py
conv.py
dropout.py
fc.py Integrate FC fp16 exporter into Dper2 (#26582) 2019-09-29 10:19:28 -07:00
fc_without_bias.py
feature_sparse_to_dense.py Return list of AccessedFeatures from get_accessed_features (#23983) 2019-08-14 10:50:27 -07:00
functional.py
gather_record.py
homotopy_weight.py
label_smooth.py
last_n_window_collector.py
layer_normalization.py
layers.py Return list of AccessedFeatures from get_accessed_features (#23983) 2019-08-14 10:50:27 -07:00
margin_rank_loss.py
merge_id_lists.py
pairwise_similarity.py
position_weighted.py
random_fourier_features.py
reservoir_sampling.py
sampling_train.py
sampling_trainable_mixin.py
select_record_by_context.py
semi_random_features.py
sparse_dropout_with_replacement.py hook up dropout sparse with replacement operator 2019-07-23 14:34:25 -07:00
sparse_feature_hash.py Refactor and expose metadata of tum_history layer for online prediction 2019-08-15 00:27:11 -07:00
sparse_lookup.py Fix unaries to export fp16 instead of fp32 when rest of the model export to int8 2019-10-14 10:51:17 -07:00
split.py Enable variable size embedding (#25782) 2019-09-09 22:08:32 -07:00
tags.py
uniform_sampling.py