pytorch/caffe2/operators/sparse_lp_regularizer_op_gpu.cu
Jamie King 7f1a96d43c Adding sparse Lp regularization operator to Caffe2 (#38574)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38574

Adding a sparse L1 and L2 regularization operator to Caffe2. This doesn't work using run_on_loss, only run_after_optimize. Applying it via run_after_optimize rather than run_on_loss was easier to implement, particularly for the L1 norm, which is preferable in some cases but is non-differentiable at zero.

Test Plan: Wrote and ran unit tests in operator_test:sparse_lp_regularizer_test.

Differential Revision: D21003029

fbshipit-source-id: 81070a621752560ce03e320d065ce27807a5d278
2020-06-01 15:21:19 -07:00


#include "caffe2/core/context_gpu.h"
#include "caffe2/operators/operator_fallback_gpu.h"
#include "caffe2/operators/sparse_lp_regularizer_op.h"

namespace caffe2 {

// No dedicated CUDA kernel yet: GPUFallbackOp copies the inputs to the host,
// runs the CPU implementation of SparseLpRegularizer, and copies the outputs
// back to the device.
REGISTER_CUDA_OPERATOR(SparseLpRegularizer, GPUFallbackOp);

} // namespace caffe2