fluid.initializer¶
Bilinear¶
paddle.fluid.initializer.Bilinear
alias of paddle.fluid.initializer.BilinearInitializer
BilinearInitializer¶
class paddle.fluid.initializer.BilinearInitializer[source]
This initializer can be used in a transposed convolution operator to act as upsampling. Users can upsample a feature map with shape (B, C, H, W) by any integer factor. The usage is:
Examples
import math
import paddle.fluid as fluid

factor = 2
C = 2
w_attr = fluid.ParamAttr(
    learning_rate=0.,
    regularizer=fluid.regularizer.L2Decay(0.),
    initializer=fluid.initializer.Bilinear())
x = fluid.layers.data(name="data", shape=[3, 32, 32], dtype="float32")
conv_up = fluid.layers.conv2d_transpose(
    input=x,
    num_filters=C,
    output_size=None,
    filter_size=2 * factor - factor % 2,
    padding=int(math.ceil((factor - 1) / 2.)),
    stride=factor,
    groups=C,
    param_attr=w_attr,
    bias_attr=False)
Here, num_filters=C and groups=C mean that this is a channel-wise transposed convolution. The filter shape will be (C, 1, K, K), where K is filter_size. This initializer sets the same (K, K) interpolation kernel for every channel of the filter. The resulting shape of the output feature map will be (B, C, factor * H, factor * W). Note that the learning rate and the weight decay are set to 0 in order to keep the coefficient values of the bilinear interpolation unchanged during training.
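For intuition, the (K, K) interpolation kernel can be reproduced outside Paddle with the standard bilinear-upsampling construction used in FCN-style models. This NumPy sketch is illustrative only, not Paddle's internal implementation:

import numpy as np

def bilinear_kernel(filter_size, factor):
    # Standard bilinear upsampling kernel (illustrative sketch).
    # The kernel peaks at the center and falls off linearly.
    if filter_size % 2 == 1:
        center = factor - 1
    else:
        center = factor - 0.5
    og = np.ogrid[:filter_size, :filter_size]
    return ((1 - abs(og[0] - center) / factor) *
            (1 - abs(og[1] - center) / factor))

factor = 2
K = 2 * factor - factor % 2  # filter_size from the example above: 4
print(bilinear_kernel(K, factor))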
Constant¶
paddle.fluid.initializer.Constant
alias of paddle.fluid.initializer.ConstantInitializer
ConstantInitializer¶
-
class paddle.fluid.initializer.ConstantInitializer(value=0.0, force_cpu=False)[source]
Implements the constant initializer.
- Parameters
value (float) – constant value to initialize the variable
force_cpu (bool) – whether to force initialization of the variable on CPU
Examples
import paddle.fluid as fluid

x = fluid.layers.data(name="data", shape=[32, 32], dtype="float32")
fc = fluid.layers.fc(input=x, size=10,
    param_attr=fluid.initializer.Constant(value=2.0))
force_init_on_cpu¶
paddle.fluid.initializer.force_init_on_cpu()[source]
Returns whether variables are currently forced to be initialized on CPU.
- Returns
True if variable initialization is forced onto CPU, False otherwise.
- Return type
bool
Examples
import paddle.fluid as fluid

if fluid.initializer.force_init_on_cpu():
    step = fluid.layers.create_global_var(
        shape=[2, 3], value=1.0, dtype='float32')
init_on_cpu¶
paddle.fluid.initializer.init_on_cpu()[source]
A context manager that forces variables created within its scope to be initialized on CPU.
Examples
import paddle.fluid as fluid

with fluid.initializer.init_on_cpu():
    step = fluid.layers.create_global_var(
        shape=[2, 3], value=1.0, dtype='float32')
MSRA¶
paddle.fluid.initializer.MSRA
alias of paddle.fluid.initializer.MSRAInitializer
MSRAInitializer¶
class paddle.fluid.initializer.MSRAInitializer(uniform=True, fan_in=None, seed=0)[source]
Implements the MSRA initializer, a.k.a. the Kaiming initializer.
This class implements the weight initialization from the paper Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification by Kaiming He, Xiangyu Zhang, Shaoqing Ren and Jian Sun. It is a robust initialization method that particularly considers the rectifier nonlinearities. In the case of a uniform distribution, the range is [-x, x], where

\[x = \sqrt{\frac{6.0}{fan\_in}}\]

In the case of a normal distribution, the mean is 0 and the standard deviation is

\[\sqrt{\frac{2.0}{fan\_in}}\]

- Parameters
uniform (bool) – whether to use uniform or normal distribution
fan_in (float) – fan_in for MSRAInitializer. If None, it is inferred from the variable.
seed (int) – random seed
Note
It is recommended to set fan_in to None for most cases.
Examples
import paddle.fluid as fluid

x = fluid.layers.data(name="data", shape=[32, 32], dtype="float32")
fc = fluid.layers.fc(input=x, size=10,
    param_attr=fluid.initializer.MSRA(uniform=False))
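As a quick sanity check of the formulas above, the bounds can be computed directly; the fan_in value here is a hypothetical layer width:

import numpy as np

fan_in = 1024                # hypothetical fan-in of a fully-connected layer
x = np.sqrt(6.0 / fan_in)    # uniform case: values drawn from [-x, x]
std = np.sqrt(2.0 / fan_in)  # normal case: std of the zero-mean Gaussian
print(x, std)                # approx. 0.0765 and 0.0442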
Normal¶
paddle.fluid.initializer.Normal
alias of paddle.fluid.initializer.NormalInitializer
NormalInitializer¶
class paddle.fluid.initializer.NormalInitializer(loc=0.0, scale=1.0, seed=0)[source]
Implements the random normal (Gaussian) distribution initializer.
- Parameters
loc (float) – mean of the normal distribution
scale (float) – standard deviation of the normal distribution
seed (int) – random seed
Examples
import paddle.fluid as fluid

x = fluid.layers.data(name="data", shape=[32, 32], dtype="float32")
fc = fluid.layers.fc(input=x, size=10,
    param_attr=fluid.initializer.Normal(loc=0.0, scale=2.0))
NumpyArrayInitializer¶
class paddle.fluid.initializer.NumpyArrayInitializer(value)[source]
Initializes a parameter with a given numpy array.
- Parameters
value (numpy.ndarray) – numpy array used to initialize the variable; its shape must match the shape of the variable being initialized
Examples
import numpy
import paddle.fluid as fluid

x = fluid.layers.data(name="x", shape=[5], dtype='float32')
# The fc weight has shape (5, 10), so the initializing array must match it.
fc = fluid.layers.fc(input=x, size=10,
    param_attr=fluid.initializer.NumpyArrayInitializer(
        numpy.ones((5, 10), dtype="float32")))
TruncatedNormal¶
paddle.fluid.initializer.TruncatedNormal
alias of paddle.fluid.initializer.TruncatedNormalInitializer
TruncatedNormalInitializer¶
class paddle.fluid.initializer.TruncatedNormalInitializer(loc=0.0, scale=1.0, seed=0)[source]
Implements the random truncated normal (Gaussian) distribution initializer.
- Parameters
loc (float) – mean of the normal distribution
scale (float) – standard deviation of the normal distribution
seed (int) – random seed
Examples
import paddle.fluid as fluid

x = fluid.layers.data(name='x', shape=[1], dtype='float32')
fc = fluid.layers.fc(input=x, size=10,
    param_attr=fluid.initializer.TruncatedNormal(loc=0.0, scale=2.0))
Uniform¶
paddle.fluid.initializer.Uniform
alias of paddle.fluid.initializer.UniformInitializer
UniformInitializer¶
class paddle.fluid.initializer.UniformInitializer(low=-1.0, high=1.0, seed=0)[source]
Implements the random uniform distribution initializer.
- Parameters
low (float) – lower boundary of the uniform distribution
high (float) – upper boundary of the uniform distribution
seed (int) – random seed
Examples
import paddle.fluid as fluid

x = fluid.layers.data(name='x', shape=[1], dtype='float32')
fc = fluid.layers.fc(input=x, size=10,
    param_attr=fluid.initializer.Uniform(low=-0.5, high=0.5))
Xavier¶
paddle.fluid.initializer.Xavier
alias of paddle.fluid.initializer.XavierInitializer
XavierInitializer¶
class paddle.fluid.initializer.XavierInitializer(uniform=True, fan_in=None, fan_out=None, seed=0)[source]
This class implements the Xavier weight initializer from the paper Understanding the difficulty of training deep feedforward neural networks by Xavier Glorot and Yoshua Bengio.
This initializer is designed to keep the scale of the gradients approximately the same in all layers. In the case of a uniform distribution, the range is [-x, x], where

\[x = \sqrt{\frac{6.0}{fan\_in + fan\_out}}\]

In the case of a normal distribution, the mean is 0 and the standard deviation is

\[\sqrt{\frac{2.0}{fan\_in + fan\_out}}\]

- Parameters
uniform (bool) – whether to use uniform or normal distribution
fan_in (float) – fan_in for Xavier initialization. If None, it is inferred from the variable.
fan_out (float) – fan_out for Xavier initialization. If None, it is inferred from the variable.
seed (int) – random seed
Note
It is recommended to set fan_in and fan_out to None for most cases.
Examples
import paddle.fluid as fluid

queries = fluid.layers.data(name='x', shape=[1], dtype='float32')
fc = fluid.layers.fc(input=queries, size=10,
    param_attr=fluid.initializer.Xavier(uniform=False))
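As with MSRA above, the Xavier bounds can be checked numerically; the fan_in and fan_out values here are hypothetical layer dimensions:

import numpy as np

fan_in, fan_out = 1024, 512              # hypothetical layer dimensions
x = np.sqrt(6.0 / (fan_in + fan_out))    # uniform case: range [-x, x]
std = np.sqrt(2.0 / (fan_in + fan_out))  # normal case: standard deviation
print(x, std)                            # approx. 0.0625 and 0.0361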