fluid.regularizer¶
L1Decay¶

paddle.fluid.regularizer.L1Decay

alias of paddle.fluid.regularizer.L1DecayRegularizer
L1DecayRegularizer¶

class paddle.fluid.regularizer.L1DecayRegularizer(regularization_coeff=0.0)[source]

Implements L1 Weight Decay Regularization. L1 regularization encourages sparsity.

\[L1WeightDecay = reg\_coeff * sign(parameter)\]

Parameters:
regularization_coeff (float) – the regularization coefficient
Examples
import paddle.fluid as fluid

main_prog = fluid.Program()
startup_prog = fluid.Program()
with fluid.program_guard(main_prog, startup_prog):
    data = fluid.layers.data(name='image', shape=[3, 28, 28], dtype='float32')
    label = fluid.layers.data(name='label', shape=[1], dtype='int64')
    hidden = fluid.layers.fc(input=data, size=128, act='relu')
    prediction = fluid.layers.fc(input=hidden, size=10, act='softmax')
    loss = fluid.layers.cross_entropy(input=prediction, label=label)
    avg_loss = fluid.layers.mean(loss)

    optimizer = fluid.optimizer.Adagrad(
        learning_rate=1e-4,
        regularization=fluid.regularizer.L1DecayRegularizer(
            regularization_coeff=0.1))
    optimizer.minimize(avg_loss)
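The decay term in the formula above can be illustrated with a minimal standalone sketch in plain Python (this is not Paddle's implementation, and the helper name `l1_decay_grad` is hypothetical):

```python
import math

def l1_decay_grad(parameter, reg_coeff):
    # Gradient contribution of the L1 penalty from the formula above:
    # reg_coeff * sign(parameter). A zero weight receives no pull.
    if parameter == 0:
        return 0.0
    return reg_coeff * math.copysign(1.0, parameter)

# The pull has constant magnitude regardless of the weight's size, which
# is why L1 decay tends to drive small weights exactly to zero (sparsity).
print([l1_decay_grad(p, reg_coeff=0.1) for p in [-0.5, 0.0, 2.0]])
# [-0.1, 0.0, 0.1]
```

Because the penalty's gradient does not shrink as a weight approaches zero, small weights are pushed all the way to zero rather than merely reduced.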
L2Decay¶

paddle.fluid.regularizer.L2Decay

alias of paddle.fluid.regularizer.L2DecayRegularizer
L2DecayRegularizer¶

class paddle.fluid.regularizer.L2DecayRegularizer(regularization_coeff=0.0)[source]

Implements L2 Weight Decay Regularization. A small L2 regularization coefficient can help prevent overfitting the training data.

\[L2WeightDecay = reg\_coeff * parameter\]

Parameters:
regularization_coeff (float) – the regularization coefficient
Examples
import paddle.fluid as fluid

main_prog = fluid.Program()
startup_prog = fluid.Program()
with fluid.program_guard(main_prog, startup_prog):
    data = fluid.layers.data(name='image', shape=[3, 28, 28], dtype='float32')
    label = fluid.layers.data(name='label', shape=[1], dtype='int64')
    hidden = fluid.layers.fc(input=data, size=128, act='relu')
    prediction = fluid.layers.fc(input=hidden, size=10, act='softmax')
    loss = fluid.layers.cross_entropy(input=prediction, label=label)
    avg_loss = fluid.layers.mean(loss)

    optimizer = fluid.optimizer.Adagrad(
        learning_rate=1e-4,
        regularization=fluid.regularizer.L2DecayRegularizer(
            regularization_coeff=0.1))
    optimizer.minimize(avg_loss)
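The L2 decay term above can likewise be sketched in plain Python (a standalone illustration, not Paddle's implementation; the helper name `l2_decay_grad` is hypothetical):

```python
def l2_decay_grad(parameter, reg_coeff):
    # Gradient contribution of the L2 penalty from the formula above:
    # reg_coeff * parameter.
    return reg_coeff * parameter

# The pull scales with the weight itself: large weights are penalized
# harder, but weights shrink toward zero without being set exactly to it.
print([l2_decay_grad(p, reg_coeff=0.1) for p in [-0.5, 0.0, 2.0]])
# [-0.05, 0.0, 0.2]
```

Contrast this with L1 decay, whose pull has constant magnitude: L2 decay shrinks all weights proportionally, so it discourages large weights without producing sparsity.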