Mirror of https://github.com/saymrwulf/pytorch.git, synced 2026-05-14 20:57:59 +00:00
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38574

Adds a sparse L1/L2 regularization operator to Caffe2. The operator works only via run_after_optimize, not run_on_loss. Applying the regularization after the optimizer step was easier to implement, particularly for the L1 norm, which is preferable in some cases but is non-differentiable at zero and so does not fit cleanly into loss-based gradient computation.

Test Plan: Wrote and ran unit tests in operator_test:sparse_lp_regularizer_test.

Differential Revision: D21003029

fbshipit-source-id: 81070a621752560ce03e320d065ce27807a5d278
7 lines
226 B
Text
#include "caffe2/core/context_gpu.h"
|
|
#include "caffe2/operators/operator_fallback_gpu.h"
|
|
#include "caffe2/operators/sparse_lp_regularizer_op.h"
|
|
|
|
namespace caffe2 {
|
|
REGISTER_CUDA_OPERATOR(SparseLpRegularizer, GPUFallbackOp);
|
|
}
|
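The update described in the commit summary, shrinking only the embedding rows touched by the current minibatch after the optimizer step, can be sketched in Python. This is a minimal sketch of the general technique; the function name, signature, and exact update rules are assumptions for illustration, not the Caffe2 operator's actual API.

```python
import numpy as np

def sparse_lp_regularize(param, indices, reg_lambda, p):
    """Hypothetical sketch: apply Lp regularization after the optimizer
    step (run_after_optimize), touching only the given sparse rows."""
    for idx in set(int(i) for i in indices):
        row = param[idx]
        if p == 2:
            # L2: multiplicative shrinkage of the row toward zero
            param[idx] = row * (1.0 - reg_lambda)
        elif p == 1:
            # L1: soft-thresholding; the L1 penalty is non-differentiable
            # at zero, which is why a run_on_loss (gradient-based)
            # formulation is awkward
            param[idx] = np.sign(row) * np.maximum(np.abs(row) - reg_lambda, 0.0)
        else:
            raise ValueError("only p=1 and p=2 are supported")
    return param
```

Rows not referenced by `indices` are left untouched, which is what makes the operator sparse: regularization cost scales with the minibatch, not with the full embedding table.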