Mirror of https://github.com/saymrwulf/pytorch.git, synced 2026-05-15 21:00:47 +00:00
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45551

The FP16 version of the SparseNormalize op in Caffe2 is missing. This diff adds FP16 support to unblock the MC process of adding FP16 to Dper3 (see https://fb.quip.com/L0T2AXGwUY3n#EReACAeifk3). One open question is whether a pure-FP16 SparseNormalize op will hurt accuracy; it may be better to perform the normalization in the FP32 domain.

ghstack-source-id: 114184398

Test Plan:
```
buck run mode/opt //caffe2/caffe2/python/operator_test:sparse_normalize_test
```
```
buck run mode/opt -c python.package_style=inplace mode/no-gpu //caffe2/caffe2/python/benchmarks:sparse_normalize_benchmark -- --fp16
```

Reviewed By: jspark1105

Differential Revision: D24005618

fbshipit-source-id: 8b918ec4063fdaafa444779b95206ba2b7b38537
Files in this directory:

- fused_rowwise_nbit_conversion_bench.py
- sparse_lengths_sum_nbit_benchmark.py
- sparse_normalize_benchmark.py
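The open question in the summary — whether doing SparseNormalize purely in FP16 hurts accuracy versus computing in the FP32 domain — can be probed with a small NumPy sketch. This is an illustration only, not the Caffe2 kernel; the function names are hypothetical, and a real kernel would operate on sparse row indices rather than a dense matrix.

```python
import numpy as np

def normalize_fp16(rows):
    # Pure-FP16 variant: the squares and their sum are all kept in
    # FP16, mimicking a kernel that never leaves the FP16 domain.
    x = rows.astype(np.float16)
    norm = np.sqrt(np.sum(x * x, axis=1, keepdims=True, dtype=np.float16))
    return x / np.maximum(norm, np.float16(1e-4))

def normalize_fp32_domain(rows):
    # FP32-domain variant: accumulate the norm and divide in FP32,
    # then cast the result back to FP16 for storage.
    x32 = rows.astype(np.float32)
    norm = np.sqrt(np.sum(x32 * x32, axis=1, keepdims=True))
    return (x32 / np.maximum(norm, np.float32(1e-12))).astype(np.float16)

rng = np.random.default_rng(0)
rows = rng.standard_normal((4, 4096)).astype(np.float32)

# Compare both against a full-FP32 reference normalization.
ref = rows / np.linalg.norm(rows, axis=1, keepdims=True)
err16 = np.abs(normalize_fp16(rows).astype(np.float32) - ref).max()
err32 = np.abs(normalize_fp32_domain(rows).astype(np.float32) - ref).max()
print(err16, err32)
```

Summing thousands of FP16 squares loses low-order bits once the running sum grows large, which is why computing the norm in the FP32 domain and casting back is the usual compromise; the sketch prints the max element-wise error of each variant against the FP32 reference.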