nn¶
adaptive_pool2d¶
paddle.fluid.layers.adaptive_pool2d(input, pool_size, pool_type='max', require_index=False, name=None) [source]

Adaptive Pool2d Operator. The adaptive_pool2d operation calculates the output based on the input, pool_size and pool_type parameters. Input(X) and output(Out) are in NCHW format, where N is batch size, C is the number of channels, H is the height of the feature, and W is the width of the feature. Parameters(pool_size) should contain two elements which represent height and width, respectively. The H and W dimensions of output(Out) are the same as Parameter(pool_size).
For average adaptive pool2d:
\[ \begin{align}\begin{aligned}hstart &= floor(i * H_{in} / H_{out})\\hend &= ceil((i + 1) * H_{in} / H_{out})\\wstart &= floor(j * W_{in} / W_{out})\\wend &= ceil((j + 1) * W_{in} / W_{out})\\Output(i ,j) &= \frac{sum(Input[hstart:hend, wstart:wend])}{(hend - hstart) * (wend - wstart)}\end{aligned}\end{align} \]- Parameters
input (Variable) – The input tensor of pooling operator. The format of input tensor is NCHW, where N is batch size, C is the number of channels, H is the height of the feature, and W is the width of the feature.
pool_size (int|list|tuple) – The pool kernel size. If pool kernel size is a tuple or list, it must contain two integers, (pool_size_Height, pool_size_Width).
pool_type – (string), pooling type, can be “max” for max-pooling and “avg” for average-pooling
require_index (bool) – If true, the index of max pooling point will be returned along with outputs. It cannot be set in average pooling type.
name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
- Returns
The pooling result.
- Return type
Variable
- Raises
ValueError – ‘pool_type’ is not ‘max’ nor ‘avg’.
ValueError – invalid setting ‘require_index’ true when ‘pool_type’ is ‘avg’.
ValueError – ‘pool_size’ should be a list or tuple with length 2.
Examples
# Suppose input data is of shape [N, C, H, W] and `pool_size` is [m, n];
# the output shape is [N, C, m, n]. Adaptive pooling evenly divides the H and W
# dimensions of the input into m * n grids and performs pooling in each grid
# to get the output.
# Adaptive average pooling performs the following computation:
#
#     for i in range(m):
#         for j in range(n):
#             hstart = floor(i * H / m)
#             hend = ceil((i + 1) * H / m)
#             wstart = floor(j * W / n)
#             wend = ceil((j + 1) * W / n)
#             output[:, :, i, j] = avg(input[:, :, hstart:hend, wstart:wend])
#
import paddle.fluid as fluid

data = fluid.layers.data(
    name='data', shape=[3, 32, 32], dtype='float32')
pool_out = fluid.layers.adaptive_pool2d(
    input=data,
    pool_size=[3, 3],
    pool_type='avg')
adaptive_pool3d¶
paddle.fluid.layers.adaptive_pool3d(input, pool_size, pool_type='max', require_index=False, name=None) [source]

Adaptive Pool3d Operator. The adaptive_pool3d operation calculates the output based on the input, pool_size and pool_type parameters. Input(X) and output(Out) are in NCDHW format, where N is batch size, C is the number of channels, D is the depth of the feature, H is the height of the feature, and W is the width of the feature. Parameters(pool_size) should contain three elements which represent depth, height and width, respectively. The D, H and W dimensions of output(Out) are the same as Parameter(pool_size).
For average adaptive pool3d:
\[ \begin{align}\begin{aligned}dstart &= floor(i * D_{in} / D_{out})\\dend &= ceil((i + 1) * D_{in} / D_{out})\\hstart &= floor(j * H_{in} / H_{out})\\hend &= ceil((j + 1) * H_{in} / H_{out})\\wstart &= floor(k * W_{in} / W_{out})\\wend &= ceil((k + 1) * W_{in} / W_{out})\\Output(i ,j, k) &= \frac{sum(Input[dstart:dend, hstart:hend, wstart:wend])}{(dend - dstart) * (hend - hstart) * (wend - wstart)}\end{aligned}\end{align} \]- Parameters
input (Variable) – The input tensor of pooling operator. The format of input tensor is NCDHW, where N is batch size, C is the number of channels, D is the depth of the feature, H is the height of the feature, and W is the width of the feature.
pool_size (int|list|tuple) – The pool kernel size. If pool kernel size is a tuple or list, it must contain three integers, (Depth, Height, Width).
pool_type – (string) Pooling type, can be “max” for max-pooling and “avg” for average-pooling
require_index (bool) – If true, the index of max pooling point will be returned along with outputs. It cannot be set in average pooling type.
name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
- Returns
The pooling result.
- Return type
Variable
- Raises
ValueError – ‘pool_type’ is not ‘max’ nor ‘avg’.
ValueError – invalid setting ‘require_index’ true when ‘pool_type’ is ‘avg’.
ValueError – ‘pool_size’ should be a list or tuple with length 3.
Examples
# Suppose input data is of shape [N, C, D, H, W] and `pool_size` is [l, m, n];
# the output shape is [N, C, l, m, n]. Adaptive pooling evenly divides the D, H
# and W dimensions of the input into l * m * n grids and performs pooling in
# each grid to get the output.
# Adaptive average pooling performs the following computation:
#
#     for i in range(l):
#         for j in range(m):
#             for k in range(n):
#                 dstart = floor(i * D / l)
#                 dend = ceil((i + 1) * D / l)
#                 hstart = floor(j * H / m)
#                 hend = ceil((j + 1) * H / m)
#                 wstart = floor(k * W / n)
#                 wend = ceil((k + 1) * W / n)
#                 output[:, :, i, j, k] = \
#                     avg(input[:, :, dstart:dend, hstart:hend, wstart:wend])
#
import paddle.fluid as fluid

data = fluid.layers.data(
    name='data', shape=[3, 32, 32, 32], dtype='float32')
pool_out = fluid.layers.adaptive_pool3d(
    input=data,
    pool_size=[3, 3, 3],
    pool_type='avg')
add_position_encoding¶
paddle.fluid.layers.add_position_encoding(input, alpha, beta, name=None) [source]

Add Position Encoding Layer
This layer accepts an input 3D-Tensor of shape [N x M x P], and returns an output Tensor of shape [N x M x P] with positional encoding value.
Refer to Attention Is All You Need .
\[\begin{split}PE(pos, 2i) &= \sin{(pos / 10000^{2i / P})} \\ PE(pos, 2i + 1) &= \cos{(pos / 10000^{2i / P})} \\ Out(:, pos, i) &= \alpha * input(:, pos, i) + \beta * PE(pos, i)\end{split}\]- Where:
\(PE(pos, 2i)\) : the increment for the number at even position
\(PE(pos, 2i + 1)\) : the increment for the number at odd position
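Below is a minimal NumPy sketch of the two formulas above; the sizes N, M, P and the coefficients are illustrative, not part of the API.

import numpy as np

N, M, P = 2, 4, 6                               # batch size, sequence length, feature size
alpha, beta = 1.0, 1.0
x = np.random.rand(N, M, P).astype('float32')   # stands in for the 3-D input tensor

# PE(pos, 2i) = sin(pos / 10000^(2i/P)), PE(pos, 2i+1) = cos(pos / 10000^(2i/P))
pe = np.zeros((M, P), dtype='float32')
pos = np.arange(M)[:, None]
two_i = np.arange(0, P, 2)[None, :]
pe[:, 0::2] = np.sin(pos / np.power(10000.0, two_i / P))
pe[:, 1::2] = np.cos(pos / np.power(10000.0, two_i / P))

# Out(:, pos, i) = alpha * input(:, pos, i) + beta * PE(pos, i)
out = alpha * x + beta * pe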
- Parameters
input (Variable) – 3-D input tensor with shape [N x M x P]
alpha (float) – multiple of Input Tensor
beta (float) – multiple of Positional Encoding Tensor
name (string) – the name of position encoding layer
- Returns
A 3-D Tensor of shape [N x M x P] with positional encoding.
- Return type
Variable
Examples
import paddle.fluid as fluid

tensor = fluid.layers.data(
    name='tensor',
    shape=[32, 64, 512],
    dtype='float32',
    append_batch_size=False)
position_tensor = fluid.layers.add_position_encoding(
    input=tensor, alpha=1.0, beta=1.0)
affine_channel¶
paddle.fluid.layers.affine_channel(x, scale=None, bias=None, data_layout='NCHW', name=None, act=None) [source]

Applies a separate affine transformation to each channel of the input. Useful for replacing spatial batch norm with its equivalent fixed transformation. The input can also be a 2D tensor, in which case the affine transformation is applied to the second dimension.
- Parameters
x (Variable) – Feature map input can be a 4D tensor with order NCHW or NHWC. It also can be a 2D tensor and the affine transformation is applied in the second dimension.
scale (Variable) – 1D input of shape (C), the c-th element is the scale factor of the affine transformation for the c-th channel of the input.
bias (Variable) – 1D input of shape (C), the c-th element is the bias of the affine transformation for the c-th channel of the input.
data_layout (string, default NCHW) – NCHW or NHWC. If input is 2D tensor, you can ignore data_layout.
name (str, default None) – The name of this layer.
act (str, default None) – Activation to be applied to the output of this layer.
- Returns
A tensor of the same shape and data layout as x.
- Return type
out (Variable)
Examples
import paddle.fluid as fluid

data = fluid.layers.data(name='data', shape=[3, 32, 32], dtype='float32')
input_scale = fluid.layers.create_parameter(shape=[3], dtype="float32")
input_bias = fluid.layers.create_parameter(shape=[3], dtype="float32")
out = fluid.layers.affine_channel(data, scale=input_scale, bias=input_bias)
affine_grid¶
paddle.fluid.layers.affine_grid(theta, out_shape, name=None) [source]

It generates a grid of (x, y) coordinates using the parameters of the affine transformation that correspond to a set of points where the input feature map should be sampled to produce the transformed output feature map.
* Case 1:

  Given:

      theta = [[[x_11, x_12, x_13]
                [x_14, x_15, x_16]]
               [[x_21, x_22, x_23]
                [x_24, x_25, x_26]]]

      out_shape = [2, 3, 5, 5]

  Step 1:

      Generate normalized coordinates according to out_shape.
      The values of the normalized coordinates are in the interval between -1 and 1.
      The shape of the normalized coordinates is [2, H, W] as below:

      C = [[[-1.  -1.  -1.  -1.  -1. ]
            [-0.5 -0.5 -0.5 -0.5 -0.5]
            [ 0.   0.   0.   0.   0. ]
            [ 0.5  0.5  0.5  0.5  0.5]
            [ 1.   1.   1.   1.   1. ]]
           [[-1.  -0.5  0.   0.5  1. ]
            [-1.  -0.5  0.   0.5  1. ]
            [-1.  -0.5  0.   0.5  1. ]
            [-1.  -0.5  0.   0.5  1. ]
            [-1.  -0.5  0.   0.5  1. ]]]

      C[0] is the coordinates in height axis and C[1] is the coordinates in width axis.

  Step 2:

      Transpose and reshape C to shape [H * W, 2] and append ones to the last dimension.
      Then we get:

      C_ = [[-1.  -1.   1. ]
            [-0.5 -1.   1. ]
            [ 0.  -1.   1. ]
            [ 0.5 -1.   1. ]
            [ 1.  -1.   1. ]
            [-1.  -0.5  1. ]
            [-0.5 -0.5  1. ]
            [ 0.  -0.5  1. ]
            [ 0.5 -0.5  1. ]
            [ 1.  -0.5  1. ]
            [-1.   0.   1. ]
            [-0.5  0.   1. ]
            [ 0.   0.   1. ]
            [ 0.5  0.   1. ]
            [ 1.   0.   1. ]
            [-1.   0.5  1. ]
            [-0.5  0.5  1. ]
            [ 0.   0.5  1. ]
            [ 0.5  0.5  1. ]
            [ 1.   0.5  1. ]
            [-1.   1.   1. ]
            [-0.5  1.   1. ]
            [ 0.   1.   1. ]
            [ 0.5  1.   1. ]
            [ 1.   1.   1. ]]

  Step 3:

      Compute the output by the equation $$Output[i] = C_ * Theta[i]^T$$
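The three steps of Case 1 can be sketched in NumPy as follows; the grid size and theta values here are illustrative, not the operator's internals.

import numpy as np

N, H, W = 2, 5, 5
theta = np.random.rand(N, 2, 3).astype('float32')    # illustrative transform parameters

# Step 1: normalized coordinates in [-1, 1]; C[0] holds heights, C[1] holds widths.
h = np.linspace(-1, 1, H)
w = np.linspace(-1, 1, W)
C = np.stack(np.meshgrid(h, w, indexing='ij'))        # shape [2, H, W]

# Step 2: reshape to [H * W, 2] (width first, as in C_ above) and append ones.
C_ = np.concatenate(
    [C.reshape(2, -1).T[:, ::-1], np.ones((H * W, 1))], axis=1)   # [H*W, 3]

# Step 3: Output[i] = C_ * Theta[i]^T, one [H, W, 2] grid per sample.
grid = np.stack([C_ @ theta[i].T for i in range(N)]).reshape(N, H, W, 2)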
- Parameters
theta (Variable) – A batch of affine transform parameters with shape [N, 2, 3].
out_shape (Variable | list | tuple) – The shape of the target output with format [N, C, H, W]. out_shape can be a Variable or a list or tuple.
name (str|None) – A name for this layer (optional). If set None, the layer will be named automatically.
- Returns
The output with shape [N, H, W, 2].
- Return type
Variable
- Raises
ValueError
– If the type of arguments is not supported.
Examples
import paddle.fluid as fluid

theta = fluid.layers.data(name="x", shape=[2, 3], dtype="float32")
out_shape = fluid.layers.data(name="y", shape=[-1], dtype="float32")
data = fluid.layers.affine_grid(theta, out_shape)

# or
data = fluid.layers.affine_grid(theta, [5, 3, 28, 28])
autoincreased_step_counter¶
paddle.fluid.layers.autoincreased_step_counter(counter_name=None, begin=1, step=1) [source]

Create an auto-increasing variable which is automatically increased by 1 every mini-batch, and return the run counter of the main program. By default the counter starts from 1.
- Parameters
counter_name (str) – The counter name, default is ‘@STEP_COUNTER@’.
begin (int) – The first value of this counter.
step (int) – The increment step between each execution.
- Returns
The global run counter.
- Return type
Variable
Examples
import paddle.fluid as fluid

global_step = fluid.layers.autoincreased_step_counter(
    counter_name='@LR_DECAY_COUNTER@', begin=0, step=1)
batch_norm¶
paddle.fluid.layers.batch_norm(input, act=None, is_test=False, momentum=0.9, epsilon=1e-05, param_attr=None, bias_attr=None, data_layout='NCHW', in_place=False, name=None, moving_mean_name=None, moving_variance_name=None, do_model_average_for_mean_and_var=False, fuse_with_relu=False, use_global_stats=False) [source]

Batch Normalization Layer
Can be used as a normalizer function for conv2d and fully_connected operations. The required data format for this layer is one of the following:
NHWC [batch, in_height, in_width, in_channels]
NCHW [batch, in_channels, in_height, in_width]
Refer to Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift for more details.
\(input\) is the input features over a mini-batch.
\[\begin{split}\mu_{\beta} &\gets \frac{1}{m} \sum_{i=1}^{m} x_i \qquad &//\ \ mini-batch\ mean \\ \sigma_{\beta}^{2} &\gets \frac{1}{m} \sum_{i=1}^{m}(x_i - \ \mu_{\beta})^2 \qquad &//\ mini-batch\ variance \\ \hat{x_i} &\gets \frac{x_i - \mu_\beta} {\sqrt{\ \sigma_{\beta}^{2} + \epsilon}} \qquad &//\ normalize \\ y_i &\gets \gamma \hat{x_i} + \beta \qquad &//\ scale\ and\ shift\end{split}\]When use_global_stats = True, the \(\mu_{\beta}\) and \(\sigma_{\beta}^{2}\) are not the statistics of one mini-batch. They are global (or running) statistics. (It usually got from the pre-trained model.) The training and testing (or inference) have the same behavior:
\[\begin{split}\hat{x_i} &\gets \frac{x_i - \mu_\beta} {\sqrt{\ \sigma_{\beta}^{2} + \epsilon}} \\ y_i &\gets \gamma \hat{x_i} + \beta\end{split}\]- Parameters
input (variable) – The rank of input variable can be 2, 3, 4, 5.
act (string, Default None) – Activation type, linear|relu|prelu|…
is_test (bool, Default False) – A flag indicating whether it is in test phase or not.
momentum (float, Default 0.9) – The value used for the moving_mean and moving_var computation. The updated formula is: \(moving\_mean = moving\_mean * momentum + new\_mean * (1. - momentum)\) \(moving\_var = moving\_var * momentum + new\_var * (1. - momentum)\) Default is 0.9.
epsilon (float, Default 1e-05) – A value added to the denominator for numerical stability. Default is 1e-5.
param_attr (ParamAttr|None) – The parameter attribute for Parameter scale of batch_norm. If it is set to None or one attribute of ParamAttr, batch_norm will create ParamAttr as param_attr. If the Initializer of the param_attr is not set, the parameter is initialized with Xavier. Default: None.
bias_attr (ParamAttr|None) – The parameter attribute for the bias of batch_norm. If it is set to None or one attribute of ParamAttr, batch_norm will create ParamAttr as bias_attr. If the Initializer of the bias_attr is not set, the bias is initialized zero. Default: None.
data_layout (string, default NCHW) – NCHW|NHWC
in_place (bool, Default False) – Make the input and output of batch norm reuse memory.
name (string, Default None) – A name for this layer(optional). If set None, the layer will be named automatically.
moving_mean_name (string, Default None) – The name of moving_mean which store the global Mean. If it is set to None, batch_norm will save global mean with a random name, otherwise, batch_norm will save global mean with the string.
moving_variance_name (string, Default None) – The name of the moving_variance which store the global Variance. If it is set to None, batch_norm will save global variance with a random name, otherwise, batch_norm will save global variance with the string.
do_model_average_for_mean_and_var (bool, Default False) – Do model average for mean and variance or not.
fuse_with_relu (bool) – if True, this OP performs relu after batch norm.
use_global_stats (bool, Default False) – Whether to use global mean and variance. In inference or test mode, set use_global_stats to true or is_test to true, and the behavior is equivalent. In train mode, when setting use_global_stats True, the global mean and variance are also used during train period.
- Returns
A tensor variable which is the result after applying batch normalization on the input.
- Return type
Variable
Examples
import paddle.fluid as fluid

x = fluid.layers.data(name='x', shape=[3, 7, 3, 7], dtype='float32', append_batch_size=False)
hidden1 = fluid.layers.fc(input=x, size=200, param_attr='fc1.w')
hidden2 = fluid.layers.batch_norm(input=hidden1)
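As a rough illustration of the normalization and of the moving-average update described for the momentum parameter above, here is a hedged NumPy sketch; shapes and values are illustrative, not the layer's internals.

import numpy as np

momentum, epsilon = 0.9, 1e-5
x = np.random.randn(32, 4).astype('float32')         # a mini-batch with 4 features (illustrative)
gamma, beta = np.ones(4), np.zeros(4)                # scale and shift parameters
moving_mean, moving_var = np.zeros(4), np.ones(4)    # running (global) statistics

# Training behaviour: normalize with the mini-batch statistics, then scale and shift.
mu = x.mean(axis=0)
var = x.var(axis=0)
y = gamma * (x - mu) / np.sqrt(var + epsilon) + beta

# Moving-average update from the `momentum` parameter description above.
moving_mean = moving_mean * momentum + mu * (1.0 - momentum)
moving_var = moving_var * momentum + var * (1.0 - momentum)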
beam_search¶
paddle.fluid.layers.beam_search(pre_ids, pre_scores, ids, scores, beam_size, end_id, level=0, is_accumulated=True, name=None, return_parent_idx=False) [source]

Beam search is a classical algorithm for selecting candidate words in a machine translation task.
Refer to Beam search for more details.
This layer does the search in beams for one time step. Specifically, it selects the top-K candidate word ids of the current step from ids according to their scores for all source sentences, where K is beam_size and ids, scores are predicted results from the computation cell. If ids is not set, it will be calculated out according to scores. Additionally, pre_ids and pre_scores are the output of beam_search at the previous step; they are needed for special use to handle ended candidate translations.

Note that if is_accumulated is True, the scores passed in should be accumulated scores. Otherwise, the scores are considered as the straightforward scores and will be transformed to the log field and accumulated with pre_scores in this operator. Length penalty should be done with extra operators before calculating the accumulated scores if needed.

Please see the following demo for a fully beam search usage example:

    fluid/tests/book/test_machine_translation.py
- Parameters
pre_ids (Variable) – The LodTensor variable which is the output of beam_search at the previous step. It should be a LodTensor with shape \((batch_size, 1)\) and lod \([[0, 1, ... , batch_size], [0, 1, ..., batch_size]]\) at the first step.
pre_scores (Variable) – The LodTensor variable which is the output of beam_search at the previous step.
ids (Variable) – The LodTensor variable containing the candidate ids. Its shape should be \((batch_size \times beam_size, K)\), where \(K\) is supposed to be beam_size.
scores (Variable) – The LodTensor variable containing the accumulated scores corresponding to ids, and its shape is the same as the shape of ids.
beam_size (int) – The beam width used in beam search.
end_id (int) – The id of the end token.
level (int, default 0) – It can be ignored and mustn’t be changed currently. It means the source level of lod, which is explained as follows. The lod level of ids should be 2. The first level is the source level, which describes how many prefixes (branches) there are for each source sentence (beam), and the second level is the sentence level, which describes how these candidates belong to the prefix. The paths linking prefixes and selected candidates are organized and reserved in lod.
is_accumulated (bool, default True) – Whether the input scores are accumulated scores.
name (str|None) – A name for this layer (optional). If set None, the layer will be named automatically.
return_parent_idx (bool) – Whether to return an extra Tensor variable preserving the selected_ids’ parent indices in pre_ids in the output, which can be used to gather cell states at the next time step.
- Returns
The LodTensor tuple containing the selected ids and the corresponding scores. If return_parent_idx is True, an extra Tensor variable preserving the selected_ids’ parent indices is included.
- Return type
Variable
Examples
import paddle.fluid as fluid

# Suppose `probs` contains predicted results from the computation
# cell and `pre_ids` and `pre_scores` is the output of beam_search
# at previous step.
beam_size = 4
end_id = 1
pre_ids = fluid.layers.data(
    name='pre_id', shape=[1], lod_level=2, dtype='int64')
pre_scores = fluid.layers.data(
    name='pre_scores', shape=[1], lod_level=2, dtype='float32')
probs = fluid.layers.data(
    name='probs', shape=[10000], dtype='float32')
topk_scores, topk_indices = fluid.layers.topk(probs, k=beam_size)
accu_scores = fluid.layers.elementwise_add(
    x=fluid.layers.log(x=topk_scores),
    y=fluid.layers.reshape(pre_scores, shape=[-1]),
    axis=0)
selected_ids, selected_scores = fluid.layers.beam_search(
    pre_ids=pre_ids,
    pre_scores=pre_scores,
    ids=topk_indices,
    scores=accu_scores,
    beam_size=beam_size,
    end_id=end_id)
beam_search_decode¶
paddle.fluid.layers.beam_search_decode(ids, scores, beam_size, end_id, name=None) [source]

Beam Search Decode Layer. This layer constructs the full hypotheses for each source sentence by walking back along the LoDTensorArray ids, whose lods can be used to restore the path in the beam search tree. Please see the following demo for a fully beam search usage example:

    fluid/tests/book/test_machine_translation.py
- Parameters
ids (Variable) – The LodTensorArray variable containing the selected ids of all steps.
scores (Variable) – The LodTensorArray variable containing the selected scores of all steps.
beam_size (int) – The beam width used in beam search.
end_id (int) – The id of end token.
name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
- Returns
The LodTensor pair containing the generated id sequences and the corresponding scores. The shapes and lods of the two LodTensor are same. The lod level is 2 and the two levels separately indicate how many hypotheses each source sentence has and how many ids each hypothesis has.
- Return type
Variable
Examples
import paddle.fluid as fluid

# Suppose `ids` and `scores` are LodTensorArray variables reserving
# the selected ids and scores of all steps
ids = fluid.layers.create_array(dtype='int64')
scores = fluid.layers.create_array(dtype='float32')
finished_ids, finished_scores = fluid.layers.beam_search_decode(
    ids, scores, beam_size=5, end_id=0)
bilinear_tensor_product¶
paddle.fluid.layers.bilinear_tensor_product(x, y, size, act=None, name=None, param_attr=None, bias_attr=None) [source]

Add Bilinear Tensor Product Layer
This layer performs bilinear tensor product on two inputs. For example:
\[out_{i} = x * W_{i} * {y^\mathrm{T}}, i=0,1,...,size-1\]- In this formula:
\(x\): the first input contains M elements, shape is [batch_size, M].
\(y\): the second input contains N elements, shape is [batch_size, N].
\(W_{i}\): the i-th learned weight, shape is [M, N]
\(out_{i}\): the i-th element of out, shape is [batch_size, size].
\(y^\mathrm{T}\): the transpose of \(y\).
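A minimal NumPy sketch of the formula above, with illustrative values for batch_size, M, N and size:

import numpy as np

batch_size, M, N, size = 8, 5, 4, 3           # illustrative dimensions
x = np.random.rand(batch_size, M)
y = np.random.rand(batch_size, N)
W = np.random.rand(size, M, N)                # one [M, N] weight matrix per output element

# out[:, i] = x * W_i * y^T, computed per sample in the batch
out = np.stack([np.sum((x @ W[i]) * y, axis=1) for i in range(size)], axis=1)
# out has shape [batch_size, size]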
- Parameters
x (Variable) – 2-D input tensor with shape [batch_size, M]
y (Variable) – 2-D input tensor with shape [batch_size, N]
size (int) – The dimension of this layer.
act (str, default None) – Activation to be applied to the output of this layer.
name (str, default None) – The name of this layer.
param_attr (ParamAttr, default None) – The parameter attribute for the learnable w. parameters/weights of this layer.
bias_attr (ParamAttr, default None) – The parameter attribute for the bias of this layer. If it is set to False, no bias will be added to the output units. If it is set to None, the bias is initialized zero. Default: None.
- Returns
A 2-D Tensor of shape [batch_size, size].
- Return type
Variable
Examples
import paddle.fluid as fluid

layer1 = fluid.layers.data("t1", shape=[-1, 5], dtype="float32")
layer2 = fluid.layers.data("t2", shape=[-1, 4], dtype="float32")
tensor = fluid.layers.bilinear_tensor_product(x=layer1, y=layer2, size=1000)
bpr_loss¶
paddle.fluid.layers.bpr_loss(input, label, name=None) [source]

Bayesian Personalized Ranking Loss Operator
This operator belongs to pairwise ranking loss. Label is the desired item. The loss at a given point in one session is defined as:
\[Y[i] = 1/(N[i] - 1) * \sum_j{\log(\sigma(X[i, Label[i]]-X[i, j]))}\]For more details, please refer to the paper Session-based Recommendations with Recurrent Neural Networks.
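A minimal NumPy sketch of the formula above; the logits X and labels are illustrative, and sigma is the sigmoid function.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.random.rand(4, 6)                 # logits for a batch of 4 samples over 6 items
label = np.array([1, 0, 3, 5])           # the desired item for each sample

num_items = X.shape[1]
Y = np.empty(len(label))
for i in range(len(label)):
    pos = X[i, label[i]]
    neg = np.delete(X[i], label[i])      # every item except the labelled one
    Y[i] = np.log(sigmoid(pos - neg)).sum() / (num_items - 1)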
- Parameters
input (Variable|list) – a 2-D tensor with shape [N x D], where N is the batch size and D is the number of classes. This input is not probability but logits.
label (Variable|list) – the ground truth which is a 2-D tensor. label is a tensor<int64> with shape [N x 1].
name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically. Default: None.
- Returns
A 2-D tensor with shape [N x 1], the bpr loss.
Examples
import paddle.fluid as fluid

neg_size = 10
label = fluid.layers.data(
    name="label", shape=[1], dtype="int64")
predict = fluid.layers.data(
    name="predict", shape=[neg_size + 1], dtype="float32")
cost = fluid.layers.bpr_loss(input=predict, label=label)
brelu¶
paddle.fluid.layers.brelu(x, t_min=0.0, t_max=24.0, name=None) [source]

BRelu Activation Operator.
\(out = \min(\max(x, t_{min}), t_{max})\)
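For illustration, the clamp can be sketched in NumPy as follows (values are illustrative):

import numpy as np

t_min, t_max = 1.0, 20.0
x = np.array([-3.0, 0.5, 5.0, 42.0])
out = np.minimum(np.maximum(x, t_min), t_max)   # -> [1.0, 1.0, 5.0, 20.0]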
- Parameters
x (Variable) – Input of BRelu operator
t_min (FLOAT|0.0) – The min marginal value of BRelu
t_max (FLOAT|24.0) – The max marginal value of BRelu
name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
- Returns
Output of BRelu operator
- Return type
output(Variable)
Examples:
import paddle.fluid as fluid x = fluid.layers.data(name="x", shape=[2,3,16,16], dtype="float32") y = fluid.layers.brelu(x, t_min=1.0, t_max=20.0)
chunk_eval¶
paddle.fluid.layers.chunk_eval(input, label, chunk_scheme, num_chunk_types, excluded_chunk_types=None) [source]

Chunk Evaluator
This function computes and outputs the precision, recall and F1-score of chunk detection.
For some basics of chunking, please refer to Chunking with Support Vector Machines .
ChunkEvalOp computes the precision, recall, and F1-score of chunk detection, and supports IOB, IOE, IOBES and IO (also known as plain) tagging schemes. Here is a NER example of labeling for these tagging schemes:
====== ====== ====== ===== == ============ ===== ===== ===== == =========
       Li     Ming   works at Agricultural Bank  of    China in Beijing.
====== ====== ====== ===== == ============ ===== ===== ===== == =========
IO     I-PER  I-PER  O     O  I-ORG        I-ORG I-ORG I-ORG O  I-LOC
IOB    B-PER  I-PER  O     O  B-ORG        I-ORG I-ORG I-ORG O  B-LOC
IOE    I-PER  E-PER  O     O  I-ORG        I-ORG I-ORG E-ORG O  E-LOC
IOBES  B-PER  E-PER  O     O  I-ORG        I-ORG I-ORG E-ORG O  S-LOC
====== ====== ====== ===== == ============ ===== ===== ===== == =========
There are three chunk types(named entity types) including PER(person), ORG(organization) and LOC(LOCATION), and we can see that the labels have the form <tag type>-<chunk type>.
Since the calculations actually use label ids rather than labels, extra attention should be paid when mapping labels to ids to make ChunkEvalOp work. The key point is that the listed equations are satisfied by ids.
tag_type = label % num_tag_type
chunk_type = label / num_tag_type
where num_tag_type is the num of tag types in the tagging scheme, num_chunk_type is the num of chunk types, and tag_type gets its value from the following table.
Scheme  Begin  Inside  End  Single
plain   0      -       -    -
IOB     0      1       -    -
IOE     -      0       1    -
IOBES   0      1       2    3
Still use NER as example, assuming the tagging scheme is IOB while chunk types are ORG, PER and LOC. To satisfy the above equations, the label map can be like this:
B-ORG  0
I-ORG  1
B-PER  2
I-PER  3
B-LOC  4
I-LOC  5
O      6
It’s not hard to verify the equations, noting that the num of chunk types is 3 and the num of tag types in the IOB scheme is 2. For example, the label id of I-LOC is 5, the tag type id of I-LOC is 1, and the chunk type id of I-LOC is 2, which is consistent with the results from the equations.
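A short Python check of the two equations for the IOB label map above (using integer division for the chunk type):

num_tag_types = 2                          # IOB scheme: Begin and Inside tags
label_map = {'B-ORG': 0, 'I-ORG': 1, 'B-PER': 2, 'I-PER': 3,
             'B-LOC': 4, 'I-LOC': 5, 'O': 6}

label = label_map['I-LOC']                 # 5
tag_type = label % num_tag_types           # 1 -> Inside
chunk_type = label // num_tag_types        # 2 -> LOC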
- Parameters
input (Variable) – prediction output of the network.
label (Variable) – label of the test data set.
chunk_scheme (str) – The labeling scheme indicating how to encode the chunks. Must be IOB, IOE, IOBES or plain. See the description for details.
num_chunk_types (int) – The number of chunk type. See the description for details
excluded_chunk_types (list) – A list including chunk type ids indicating chunk types that are not counted. See the description for details
- Returns
tuple containing: precision, recall, f1_score, num_infer_chunks, num_label_chunks, num_correct_chunks
- Return type
tuple
Examples
import paddle.fluid as fluid

dict_size = 10000
label_dict_len = 7
sequence = fluid.layers.data(
    name='id', shape=[1], lod_level=1, dtype='int64')
embedding = fluid.layers.embedding(
    input=sequence, size=[dict_size, 512])
hidden = fluid.layers.fc(input=embedding, size=512)
label = fluid.layers.data(
    name='label', shape=[1], lod_level=1, dtype='int32')
crf = fluid.layers.linear_chain_crf(
    input=hidden, label=label, param_attr=fluid.ParamAttr(name="crfw"))
crf_decode = fluid.layers.crf_decoding(
    input=hidden, param_attr=fluid.ParamAttr(name="crfw"))
fluid.layers.chunk_eval(
    input=crf_decode,
    label=label,
    chunk_scheme="IOB",
    num_chunk_types=(label_dict_len - 1) // 2)  # integer division keeps the count an int
clip¶
paddle.fluid.layers.clip(x, min, max, name=None) [source]

Clip Operator.
The clip operator limits the value of given input within an interval. The interval is specified with arguments ‘min’ and ‘max’:
$$ Out = min(max(X, min), max) $$
- Parameters
x (Variable) – (Tensor) The input of clip op. The number of dimensions must be between [1, 9].
min (FLOAT) – (float) Minimum value, under which an element is replaced by min.
max (FLOAT) – (float) Maximum value, above which an element is replaced by max.
name (basestring|None) – Name of the output.
- Returns
(Tensor)The output of clip op with shape as input(X)
- Return type
out(Variable)
Examples
import paddle.fluid as fluid

input = fluid.layers.data(
    name='data', shape=[1], dtype='float32')
reward = fluid.layers.clip(x=input, min=-1.0, max=1.0)
clip_by_norm¶
paddle.fluid.layers.clip_by_norm(x, max_norm, name=None) [source]

ClipByNorm Operator.
This operator limits the L2 norm of the input \(X\) within \(max\_norm\). If the L2 norm of \(X\) is less than or equal to \(max\_norm\), \(Out\) will be the same as \(X\). If the L2 norm of \(X\) is greater than \(max\_norm\), \(X\) will be linearly scaled to make the L2 norm of \(Out\) equal to \(max\_norm\), as shown in the following formula:
$$ Out = \frac{max\_norm * X}{norm(X)}, $$
where \(norm(X)\) represents the L2 norm of \(X\).
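A minimal NumPy sketch of this scaling rule (the input values are illustrative):

import numpy as np

max_norm = 1.0
x = np.array([3.0, 4.0])                               # L2 norm is 5.0
norm = np.linalg.norm(x)
out = x if norm <= max_norm else max_norm * x / norm   # -> [0.6, 0.8]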
Examples:

    import paddle.fluid as fluid

    data = fluid.layers.data(
        name='data', shape=[2, 4, 6], dtype='float32')
    reshaped = fluid.layers.clip_by_norm(
        x=data, max_norm=0.5)
- Parameters
x (Variable) – (Tensor) The input of clip_by_norm op. The number of dimensions must be between [1, 9].
max_norm (FLOAT) – (float) The maximum norm value
name (basestring|None) – Name of the output.
- Returns
(Tensor) The output of clip_by_norm op with shape as input(X)
- Return type
out(Variable)
Examples
import paddle.fluid as fluid

input = fluid.layers.data(
    name='data', shape=[1], dtype='float32')
reward = fluid.layers.clip_by_norm(x=input, max_norm=1.0)
continuous_value_model¶
paddle.fluid.layers.continuous_value_model(input, cvm, use_cvm=True) [source]

continuous_value_model layer

Continuous value model (cvm). Currently, it only considers show and click values in CTR projects. We assume that the input is an embedding vector with a cvm feature, whose shape is [N x D] (D is 2 + the embedding dim). If use_cvm is True, it computes log(cvm_feature) and the output shape is [N x D]. If use_cvm is False, it removes the cvm feature from the input and the output shape is [N x (D - 2)].

This layer accepts a tensor named input, which is the embedded ID (lod level is 1); cvm is the show/click info.
- Parameters
input (Variable) – a 2-D LodTensor with shape [N x D], where N is the batch size, D is 2 + the embedding dim. lod level = 1.
cvm (Variable) – a 2-D Tensor with shape [N x 2], where N is the batch size, 2 is show and click.
use_cvm (bool) – whether to use cvm. If use_cvm is True, the output dim is the same as the input dim; if not, the output dim is the input dim - 2 (show and click are removed). (The cvm op is a customized op whose input is a sequence that has embed_with_cvm by default, so we need an op named cvm to decide whether to use it or not.)
- Returns
A 2-D LodTensor with shape [N x D]. If use_cvm is True, D is equal to the input dim; if not, D is equal to the input dim - 2.
- Return type
Variable
Examples
import paddle.fluid as fluid input = fluid.layers.data(name="input", shape=[-1, 1], lod_level=1, append_batch_size=False, dtype="int64")#, stop_gradient=False) label = fluid.layers.data(name="label", shape=[-1, 1], append_batch_size=False, dtype="int64") embed = fluid.layers.embedding( input=input, size=[100, 11], dtype='float32') ones = fluid.layers.fill_constant_batch_size_like(input=label, shape=[-1, 1], dtype="int64", value=1) show_clk = fluid.layers.cast(fluid.layers.concat([ones, label], axis=1), dtype='float32') show_clk.stop_gradient = True input_with_cvm = fluid.layers.continuous_value_model(embed, show_clk, True)
conv2d¶
paddle.fluid.layers.conv2d(input, num_filters, filter_size, stride=1, padding=0, dilation=1, groups=None, param_attr=None, bias_attr=None, use_cudnn=True, act=None, name=None) [source]

The convolution2D layer calculates the output based on the input, filter and strides, paddings, dilations, groups parameters. Input and Output are in NCHW format, where N is batch size, C is the number of channels, H is the height of the feature, and W is the width of the feature. Filter is in MCHW format, where M is the number of output image channels, C is the number of input image channels, H is the height of the filter, and W is the width of the filter. If the groups is greater than 1, C will equal the number of input image channels divided by the groups. Please refer to UFLDL's convolution for more details. If bias attribution and activation type are provided, bias is added to the output of the convolution, and the corresponding activation function is applied to the final result.
For each input \(X\), the equation is:
\[Out = \sigma (W \ast X + b)\]Where:
\(X\): Input value, a tensor with NCHW format.
\(W\): Filter value, a tensor with MCHW format.
\(\ast\): Convolution operation.
\(b\): Bias value, a 2-D tensor with shape [M, 1].
\(\sigma\): Activation function.
\(Out\): Output value, the shape of \(Out\) and \(X\) may be different.
Example
Input:
Input shape: \((N, C_{in}, H_{in}, W_{in})\)
Filter shape: \((C_{out}, C_{in}, H_f, W_f)\)
Output:
Output shape: \((N, C_{out}, H_{out}, W_{out})\)
Where
\[\begin{split}H_{out}&= \frac{(H_{in} + 2 * paddings[0] - (dilations[0] * (H_f - 1) + 1))}{strides[0]} + 1 \\ W_{out}&= \frac{(W_{in} + 2 * paddings[1] - (dilations[1] * (W_f - 1) + 1))}{strides[1]} + 1\end{split}\]- Parameters
input (Variable) – The input image with [N, C, H, W] format.
num_filters (int) – The number of filter. It is as same as the output image channel.
filter_size (int|tuple|None) – The filter size. If filter_size is a tuple, it must contain two integers, (filter_size_H, filter_size_W). Otherwise, the filter will be a square.
stride (int|tuple) – The stride size. If stride is a tuple, it must contain two integers, (stride_H, stride_W). Otherwise, the stride_H = stride_W = stride. Default: stride = 1.
padding (int|tuple) – The padding size. If padding is a tuple, it must contain two integers, (padding_H, padding_W). Otherwise, the padding_H = padding_W = padding. Default: padding = 0.
dilation (int|tuple) – The dilation size. If dilation is a tuple, it must contain two integers, (dilation_H, dilation_W). Otherwise, the dilation_H = dilation_W = dilation. Default: dilation = 1.
groups (int) – The groups number of the Conv2d Layer. According to grouped convolution in Alex Krizhevsky’s Deep CNN paper: when group=2, the first half of the filters is only connected to the first half of the input channels, while the second half of the filters is only connected to the second half of the input channels. Default: groups=1.
param_attr (ParamAttr|None) – The parameter attribute for learnable parameters/weights of conv2d. If it is set to None or one attribute of ParamAttr, conv2d will create ParamAttr as param_attr. If the Initializer of the param_attr is not set, the parameter is initialized with \(Normal(0.0, std)\), and the \(std\) is \((\frac{2.0 }{filter\_elem\_num})^{0.5}\). Default: None.
bias_attr (ParamAttr|bool|None) – The parameter attribute for the bias of conv2d. If it is set to False, no bias will be added to the output units. If it is set to None or one attribute of ParamAttr, conv2d will create ParamAttr as bias_attr. If the Initializer of the bias_attr is not set, the bias is initialized zero. Default: None.
use_cudnn (bool) – Use cudnn kernel or not, it is valid only when the cudnn library is installed. Default: True
act (str) – Activation type, if it is set to None, activation is not appended. Default: None
name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically. Default: None
- Returns
The tensor variable storing the convolution and non-linearity activation result.
- Return type
Variable
- Raises
ValueError
– If the shapes of input, filter_size, stride, padding and groups mismatch.
Examples
import paddle.fluid as fluid

data = fluid.layers.data(name='data', shape=[3, 32, 32], dtype='float32')
conv2d = fluid.layers.conv2d(input=data, num_filters=2, filter_size=3, act="relu")
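The output-shape formula above can be evaluated with a small helper; conv2d_out_dim is a hypothetical name used only for this sketch, and the defaults mirror the layer's defaults (padding 0, stride 1, dilation 1).

def conv2d_out_dim(in_dim, filter_dim, padding=0, stride=1, dilation=1):
    # H_out = (H_in + 2*padding - (dilation*(H_f - 1) + 1)) / stride + 1
    return (in_dim + 2 * padding - (dilation * (filter_dim - 1) + 1)) // stride + 1

# For the 32 x 32 input and 3 x 3 filter used in the example above:
print(conv2d_out_dim(32, 3), conv2d_out_dim(32, 3))   # 30 30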
conv2d_transpose¶
paddle.fluid.layers.conv2d_transpose(input, num_filters, output_size=None, filter_size=None, padding=0, stride=1, dilation=1, groups=None, param_attr=None, bias_attr=None, use_cudnn=True, act=None, name=None) [source]

Convolution2D transpose layer

The convolution2D transpose layer calculates the output based on the input, filter, and dilations, strides, paddings. Input(Input) and output(Output) are in NCHW format, where N is batch size, C is the number of channels, H is the height of the feature, and W is the width of the feature. Parameters (dilations, strides, paddings) are two-element lists; the two elements represent height and width, respectively. For the details of the convolution transpose layer, please refer to the following explanation and references therein. If bias attribution and activation type are provided, bias is added to the output of the convolution, and the corresponding activation function is applied to the final result.
For each input \(X\), the equation is:
\[Out = \sigma (W \ast X + b)\]Where:
\(X\): Input value, a tensor with NCHW format.
\(W\): Filter value, a tensor with MCHW format.
\(\ast\): Convolution operation.
\(b\): Bias value, a 2-D tensor with shape [M, 1].
\(\sigma\): Activation function.
\(Out\): Output value, the shape of \(Out\) and \(X\) may be different.
Example
Input:
Input shape: \((N, C_{in}, H_{in}, W_{in})\)
Filter shape: \((C_{in}, C_{out}, H_f, W_f)\)
Output:
Output shape: \((N, C_{out}, H_{out}, W_{out})\)
Where
\[\begin{split}H^\prime_{out} &= (H_{in} - 1) * strides[0] - 2 * paddings[0] + dilations[0] * (H_f - 1) + 1 \\ W^\prime_{out} &= (W_{in} - 1) * strides[1] - 2 * paddings[1] + dilations[1] * (W_f - 1) + 1 \\ H_{out} &\in [ H^\prime_{out}, H^\prime_{out} + strides[0] ) \\ W_{out} &\in [ W^\prime_{out}, W^\prime_{out} + strides[1] )\end{split}\]- Parameters
input (Variable) – The input image with [N, C, H, W] format.
num_filters (int) – The number of the filter. It is as same as the output image channel.
output_size (int|tuple|None) – The output image size. If output size is a tuple, it must contain two integers, (image_H, image_W). If it is None, filter_size, padding, and stride are used to calculate output_size. If output_size and filter_size are specified at the same time, they should satisfy the formula above.
filter_size (int|tuple|None) – The filter size. If filter_size is a tuple, it must contain two integers, (filter_size_H, filter_size_W). Otherwise, the filter will be a square. If it is None, the output size is used to calculate filter_size.
padding (int|tuple) – The padding size. If padding is a tuple, it must contain two integers, (padding_H, padding_W). Otherwise, the padding_H = padding_W = padding. Default: padding = 0.
stride (int|tuple) – The stride size. If stride is a tuple, it must contain two integers, (stride_H, stride_W). Otherwise, the stride_H = stride_W = stride. Default: stride = 1.
dilation (int|tuple) – The dilation size. If dilation is a tuple, it must contain two integers, (dilation_H, dilation_W). Otherwise, the dilation_H = dilation_W = dilation. Default: dilation = 1.
groups (int) – The groups number of the Conv2d transpose layer. Inspired by grouped convolution in Alex Krizhevsky’s Deep CNN paper, in which when group=2, the first half of the filters is only connected to the first half of the input channels, while the second half of the filters is only connected to the second half of the input channels. Default: groups = 1.
param_attr (ParamAttr|None) – The parameter attribute for learnable parameters/weights of conv2d_transpose. If it is set to None or one attribute of ParamAttr, conv2d_transpose will create ParamAttr as param_attr. If the Initializer of the param_attr is not set, the parameter is initialized with Xavier. Default: None.
bias_attr (ParamAttr|bool|None) – The parameter attribute for the bias of conv2d_transpose. If it is set to False, no bias will be added to the output units. If it is set to None or one attribute of ParamAttr, conv2d_transpose will create ParamAttr as bias_attr. If the Initializer of the bias_attr is not set, the bias is initialized zero. Default: None.
use_cudnn (bool) – Use cudnn kernel or not, it is valid only when the cudnn library is installed. Default: True.
act (str) – Activation type, if it is set to None, activation is not appended. Default: None.
name (str|None) – A name for this layer (optional). If set None, the layer will be named automatically. Default: None.
- Returns
The tensor variable storing the convolution transpose result.
- Return type
Variable
- Raises
ValueError
– If the shapes of input, filter_size, stride, padding and groups mismatch.
Examples
import paddle.fluid as fluid

data = fluid.layers.data(name='data', shape=[3, 32, 32], dtype='float32')
conv2d_transpose = fluid.layers.conv2d_transpose(input=data, num_filters=2, filter_size=3)
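Likewise, the \(H^\prime_{out}\) / \(W^\prime_{out}\) formula above can be evaluated with a small helper; conv2d_transpose_out_dim is a hypothetical name used only for this sketch.

def conv2d_transpose_out_dim(in_dim, filter_dim, padding=0, stride=1, dilation=1):
    # H'_out = (H_in - 1) * stride - 2*padding + dilation*(H_f - 1) + 1
    return (in_dim - 1) * stride - 2 * padding + dilation * (filter_dim - 1) + 1

# For the 32 x 32 input and 3 x 3 filter used in the example above:
print(conv2d_transpose_out_dim(32, 3), conv2d_transpose_out_dim(32, 3))   # 34 34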
conv3d¶
paddle.fluid.layers.conv3d(input, num_filters, filter_size, stride=1, padding=0, dilation=1, groups=None, param_attr=None, bias_attr=None, use_cudnn=True, act=None, name=None) [source]

Convolution3D Layer

The convolution3D layer calculates the output based on the input, filter and strides, paddings, dilations, groups parameters. Input(Input) and Output(Output) are in NCDHW format, where N is batch size, C is the number of channels, D is the depth of the feature, H is the height of the feature, and W is the width of the feature. Convolution3D is similar to Convolution2D but adds one dimension (depth). If bias attribution and activation type are provided, bias is added to the output of the convolution, and the corresponding activation function is applied to the final result.
For each input \(X\), the equation is:
\[Out = \sigma (W \ast X + b)\]In the above equation:
\(X\): Input value, a tensor with NCDHW format.
\(W\): Filter value, a tensor with MCDHW format.
\(\ast\): Convolution operation.
\(b\): Bias value, a 2-D tensor with shape [M, 1].
\(\sigma\): Activation function.
\(Out\): Output value, the shape of \(Out\) and \(X\) may be different.
Example
Input:
Input shape: \((N, C_{in}, D_{in}, H_{in}, W_{in})\)
Filter shape: \((C_{out}, C_{in}, D_f, H_f, W_f)\)
Output: Output shape: \((N, C_{out}, D_{out}, H_{out}, W_{out})\)
Where
\[\begin{split}D_{out}&= \frac{(D_{in} + 2 * paddings[0] - (dilations[0] * (D_f - 1) + 1))}{strides[0]} + 1 \\ H_{out}&= \frac{(H_{in} + 2 * paddings[1] - (dilations[1] * (H_f - 1) + 1))}{strides[1]} + 1 \\ W_{out}&= \frac{(W_{in} + 2 * paddings[2] - (dilations[2] * (W_f - 1) + 1))}{strides[2]} + 1\end{split}\]- Parameters
input (Variable) – The input image with [N, C, D, H, W] format.
num_filters (int) – The number of filter. It is as same as the output image channel.
filter_size (int|tuple|None) – The filter size. If filter_size is a tuple, it must contain three integers, (filter_size_D, filter_size_H, filter_size_W). Otherwise, the filter will be a square.
stride (int|tuple) – The stride size. If stride is a tuple, it must contain three integers, (stride_D, stride_H, stride_W). Otherwise, the stride_D = stride_H = stride_W = stride. Default: stride = 1.
padding (int|tuple) – The padding size. If padding is a tuple, it must contain three integers, (padding_D, padding_H, padding_W). Otherwise, the padding_D = padding_H = padding_W = padding. Default: padding = 0.
dilation (int|tuple) – The dilation size. If dilation is a tuple, it must contain three integers, (dilation_D, dilation_H, dilation_W). Otherwise, the dilation_D = dilation_H = dilation_W = dilation. Default: dilation = 1.
groups (int) – The groups number of the Conv3d Layer. According to grouped convolution in Alex Krizhevsky’s Deep CNN paper: when group=2, the first half of the filters is only connected to the first half of the input channels, while the second half of the filters is only connected to the second half of the input channels. Default: groups=1
param_attr (ParamAttr|None) – The parameter attribute for learnable parameters/weights of conv3d. If it is set to None or one attribute of ParamAttr, conv3d will create ParamAttr as param_attr. If it is set to None, the parameter is initialized with \(Normal(0.0, std)\), and the \(std\) is \((\frac{2.0 }{filter\_elem\_num})^{0.5}\). Default: None.
bias_attr (ParamAttr|bool|None) – The parameter attribute for the bias of conv3d. If it is set to False, no bias will be added to the output units. If it is set to None or one attribute of ParamAttr, conv3d will create ParamAttr as bias_attr. If the Initializer of the bias_attr is not set, the bias is initialized zero. Default: None.
use_cudnn (bool) – Use cudnn kernel or not, it is valid only when the cudnn library is installed. Default: True
act (str) – Activation type, if it is set to None, activation is not appended. Default: None.
name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically. Default: None.
- Returns
The tensor variable storing the convolution and non-linearity activation result.
- Return type
Variable
- Raises
ValueError
– If the shapes of input, filter_size, stride, padding and groups mismatch.
Examples
import paddle.fluid as fluid

data = fluid.layers.data(name='data', shape=[3, 12, 32, 32], dtype='float32')
conv3d = fluid.layers.conv3d(input=data, num_filters=2, filter_size=3, act="relu")
conv3d_transpose¶
paddle.fluid.layers.conv3d_transpose(input, num_filters, output_size=None, filter_size=None, padding=0, stride=1, dilation=1, groups=None, param_attr=None, bias_attr=None, use_cudnn=True, act=None, name=None) [source]

Convolution3D transpose layer

The convolution3D transpose layer calculates the output based on the input, filter, and dilations, strides, paddings. Input(Input) and output(Output) are in NCDHW format, where N is batch size, C is the number of channels, D is the depth of the feature, H is the height of the feature, and W is the width of the feature. Parameters (dilations, strides, paddings) are three-element lists; the three elements represent depth, height and width, respectively. For the details of the convolution transpose layer, please refer to the following explanation and references therein. If bias attribution and activation type are provided, bias is added to the output of the convolution, and the corresponding activation function is applied to the final result.
For each input \(X\), the equation is:
\[Out = \sigma (W \ast X + b)\]In the above equation:
\(X\): Input value, a tensor with NCDHW format.
\(W\): Filter value, a tensor with MCDHW format.
\(\ast\): Convolution operation.
\(b\): Bias value, a 2-D tensor with shape [M, 1].
\(\sigma\): Activation function.
\(Out\): Output value, the shape of \(Out\) and \(X\) may be different.
Example
Input:
Input shape: \((N, C_{in}, D_{in}, H_{in}, W_{in})\)
Filter shape: \((C_{in}, C_{out}, D_f, H_f, W_f)\)
Output:
Output shape: \((N, C_{out}, D_{out}, H_{out}, W_{out})\)
Where
\[\begin{split}D_{out} &= (D_{in} - 1) * strides[0] - 2 * paddings[0] + dilations[0] * (D_f - 1) + 1 \\ H_{out} &= (H_{in} - 1) * strides[1] - 2 * paddings[1] + dilations[1] * (H_f - 1) + 1 \\ W_{out} &= (W_{in} - 1) * strides[2] - 2 * paddings[2] + dilations[2] * (W_f - 1) + 1\end{split}\]- Parameters
input (Variable) – The input image with [N, C, D, H, W] format.
num_filters (int) – The number of the filter. It is as same as the output image channel.
output_size (int|tuple|None) – The output image size. If output size is a tuple, it must contain three integers, (image_D, image_H, image_W). This parameter only works when filter_size is None.
filter_size (int|tuple|None) – The filter size. If filter_size is a tuple, it must contain three integers, (filter_size_D, filter_size_H, filter_size_W). Otherwise, the filter will be a square. None if use output size to calculate filter_size.
padding (int|tuple) – The padding size. If padding is a tuple, it must contain three integers, (padding_D, padding_H, padding_W). Otherwise, the padding_D = padding_H = padding_W = padding. Default: padding = 0.
stride (int|tuple) – The stride size. If stride is a tuple, it must contain three integers, (stride_D, stride_H, stride_W). Otherwise, the stride_D = stride_H = stride_W = stride. Default: stride = 1.
dilation (int|tuple) – The dilation size. If dilation is a tuple, it must contain three integers, (dilation_D, dilation_H, dilation_W). Otherwise, the dilation_D = dilation_H = dilation_W = dilation. Default: dilation = 1.
groups (int) – The groups number of the Conv3d transpose layer. Inspired by grouped convolution in Alex Krizhevsky’s Deep CNN paper, in which when group=2, the first half of the filters is only connected to the first half of the input channels, while the second half of the filters is only connected to the second half of the input channels. Default: groups=1
param_attr (ParamAttr|None) – The parameter attribute for learnable parameters/weights of conv3d_transpose. If it is set to None or one attribute of ParamAttr, conv3d_transpose will create ParamAttr as param_attr. If the Initializer of the param_attr is not set, the parameter is initialized with Xavier. Default: None.
bias_attr (ParamAttr|bool|None) – The parameter attribute for the bias of conv3d_transpose. If it is set to False, no bias will be added to the output units. If it is set to None or one attribute of ParamAttr, conv3d_transpose will create ParamAttr as bias_attr. If the Initializer of the bias_attr is not set, the bias is initialized zero. Default: None.
use_cudnn (bool) – Use cudnn kernel or not, it is valid only when the cudnn library is installed. Default: True
act (str) – Activation type, if it is set to None, activation is not appended. Default: None.
name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
- Returns
The tensor variable storing the convolution transpose result.
- Return type
Variable
- Raises
ValueError
– If the shapes of input, filter_size, stride, padding and groups mismatch.
Examples
import paddle.fluid as fluid

data = fluid.layers.data(name='data', shape=[3, 12, 32, 32], dtype='float32')
conv3d_transpose = fluid.layers.conv3d_transpose(input=data, num_filters=2, filter_size=3)
cos_sim¶
paddle.fluid.layers.cos_sim(X, Y) [source]

Cosine Similarity Operator
\(Out = \frac{X^T * Y}{(\sqrt{X^T * X} * \sqrt{Y^T * Y})}\)
The input X and Y must have the same shape, except that the 1st dimension of input Y could be just 1 (different from input X), which will be broadcasted to match the shape of input X before computing their cosine similarity.
Both the input X and Y can carry the LoD (Level of Details) information, or not. But the output only shares the LoD information with input X.
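A minimal NumPy sketch of the cosine-similarity formula with the broadcast described above; the shapes are illustrative.

import numpy as np

X = np.random.rand(3, 7).astype('float32')
Y = np.random.rand(1, 7).astype('float32')      # first dimension 1 is broadcast to match X

Yb = np.broadcast_to(Y, X.shape)
out = np.sum(X * Yb, axis=1) / (
    np.sqrt(np.sum(X * X, axis=1)) * np.sqrt(np.sum(Yb * Yb, axis=1)))
# one similarity value per row of X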
- Parameters
X (Variable) – The 1st input of cos_sim op.
Y (Variable) – The 2nd input of cos_sim op.
- Returns
the output of cosine(X, Y).
- Return type
Variable
Examples
import paddle.fluid as fluid

x = fluid.layers.data(name='x', shape=[3, 7], dtype='float32', append_batch_size=False)
y = fluid.layers.data(name='y', shape=[1, 7], dtype='float32', append_batch_size=False)
out = fluid.layers.cos_sim(x, y)
crf_decoding¶
paddle.fluid.layers.crf_decoding(input, param_attr, label=None) [source]

The crf_decoding operator reads the emission feature weights and the transition feature weights learned by the linear_chain_crf operator. It implements the Viterbi algorithm, which is a dynamic programming algorithm for finding the most likely sequence of hidden states, called the Viterbi path, that results in a sequence of observed tags.
The output of this operator changes according to whether Input(Label) is given:
Input(Label) is given: This happens in training. This operator is used to co-work with the chunk_eval operator. When Input(Label) is given, the crf_decoding operator returns a row vector with shape [N x 1] whose values are fixed to be 0, indicating an incorrect prediction, or 1 indicating a tag is correctly predicted. Such an output is the input to chunk_eval operator.
Input(Label) is not given: This is the standard decoding process.
The crf_decoding operator returns a row vector with shape [N x 1] whose values range from 0 to maximum tag number - 1. Each element indicates the index of a predicted tag.
- Parameters
input (Variable) – (LoDTensor, default: LoDTensor<float>). A LoDTensor with shape [N x D] where N is the size of the mini-batch and D is the total tag number. This input is the unscaled emission weight matrix of the linear_chain_crf operator
param_attr (ParamAttr) – The parameter attribute for training.
label (Variable) – (LoDTensor, LoDTensor<int64_t>). The ground truth with shape [N x 1]. This input is optional. See more details in the operator’s comments
- Returns
(LoDTensor, LoDTensor<int64_t>). The decoding results. What to return changes depending on whether the Input(Label) (the ground truth) is given. See more details in the operator’s comment
- Return type
Variable
Examples
import paddle.fluid as fluid

images = fluid.layers.data(name='pixel', shape=[784], dtype='float32')
label = fluid.layers.data(name='label', shape=[1], dtype='int32')
hidden = fluid.layers.fc(input=images, size=2)
crf = fluid.layers.linear_chain_crf(
    input=hidden, label=label, param_attr=fluid.ParamAttr(name="crfw"))
crf_decode = fluid.layers.crf_decoding(
    input=hidden, param_attr=fluid.ParamAttr(name="crfw"))
crop¶
paddle.fluid.layers.crop(x, shape=None, offsets=None, name=None) [source]

Crop input into output, as specified by offsets and shape.
* Case 1:

    Given

        X = [[0, 1, 2, 0, 0]
             [0, 3, 4, 0, 0]
             [0, 0, 0, 0, 0]],

    and

        shape = [2, 2],
        offsets = [0, 1],

    output is:

        Out = [[1, 2],
               [3, 4]].

* Case 2:

    Given

        X = [[0, 1, 2, 5, 0]
             [0, 3, 4, 6, 0]
             [0, 0, 0, 0, 0]],

    and shape is a tensor

        shape = [[0, 0, 0]
                 [0, 0, 0]]

    and

        offsets = [0, 1],

    output is:

        Out = [[1, 2, 5],
               [3, 4, 6]].
- Parameters
x (Variable) – The input tensor variable.
shape (Variable|list/tuple of integers) – The output shape is specified by shape, which can be a Variable or a list/tuple of integers. If it is a tensor Variable, its rank must be the same as x. This way is suitable for the case that the output shape may be changed each iteration. If it is a list/tuple of integers, its length must be the same as the rank of x.
offsets (Variable|list/tuple of integers|None) – Specifies the cropping offsets at each dimension. It can be a Variable or a list/tuple of integers. If it is a tensor Variable, its rank must be the same as x. This way is suitable for the case that the offsets may be changed each iteration. If it is a list/tuple of integers, its length must be the same as the rank of x. If None, the offsets are 0 at each dimension.
name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
- Returns
The cropped tensor variable.
- Return type
Variable
- Raises
ValueError
– If shape is not a list, tuple or Variable.
Examples
import paddle.fluid as fluid x = fluid.layers.data(name="x", shape=[3, 5], dtype="float32") y = fluid.layers.data(name="y", shape=[2, 3], dtype="float32") crop = fluid.layers.crop(x, shape=y) # or z = fluid.layers.data(name="z", shape=[3, 5], dtype="float32") crop = fluid.layers.crop(z, shape=[-1, 2, 3])
cross_entropy¶
paddle.fluid.layers.cross_entropy(input, label, soft_label=False, ignore_index=-100) [source]

Cross Entropy Layer
This layer computes the cross entropy between input and label. It supports both standard cross-entropy and soft-label cross-entropy loss computation.
- One-hot cross-entropy:
soft_label = False, Label[i, 0] indicates the class index for sample i:
\[Y[i] = -\log(X[i, Label[i]])\]
- Soft-label cross-entropy:
soft_label = True, Label[i, j] indicates the soft label of class j for sample i:
\[Y[i] = \sum_j{-Label[i, j] * log(X[i, j])}\]
Please make sure that in this case the summation of each row of label equals one.
- One-hot cross-entropy with vectorized label:
As a special case of 2), when each row of ‘label’ has only one non-zero element which is equal to 1, soft-label cross-entropy degenerates to a one-hot cross-entropy with one-hot label representation.
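A minimal NumPy sketch of the one-hot and soft-label formulas above; the probabilities and labels are illustrative.

import numpy as np

X = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1]])                  # probabilities, e.g. a softmax output

# One-hot cross-entropy: Label[i, 0] is the class index of sample i.
hard_label = np.array([0, 1])
hard_loss = -np.log(X[np.arange(len(hard_label)), hard_label])

# Soft-label cross-entropy: each row of the label sums to one.
soft_label = np.array([[0.9, 0.05, 0.05],
                       [0.0, 1.0, 0.0]])
soft_loss = -np.sum(soft_label * np.log(X), axis=1)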
- Parameters
input (Variable|list) – a 2-D tensor with shape [N x D], where N is the batch size and D is the number of classes. This input is a probability computed by the previous operator, which is almost always the result of a softmax operator.
label (Variable|list) – the ground truth which is a 2-D tensor. When soft_label is set to False, label is a tensor<int64> with shape [N x 1]. When soft_label is set to True, label is a tensor<float/double> with shape [N x D].
soft_label (bool) – a flag indicating whether to interpret the given labels as soft labels. Default: False.
ignore_index (int) – Specifies a target value that is ignored and does not contribute to the input gradient. Only valid if soft_label is set to False. Default: kIgnoreIndex
- Returns
A 2-D tensor with shape [N x 1], the cross entropy loss.
- Raises
ValueError –
1. the 1st dimension of input and label are not equal.
2. when soft_label == True, the 2nd dimension of input and label are not equal.
3. when soft_label == False, the 2nd dimension of label is not 1.
Examples
import paddle.fluid as fluid
classdim = 7
x = fluid.layers.data(name='x', shape=[3, 7], dtype='float32', append_batch_size=False)
# with the default soft_label=False, label holds class indices and must be int64
label = fluid.layers.data(name='label', shape=[3, 1], dtype='int64', append_batch_size=False)
predict = fluid.layers.fc(input=x, size=classdim, act='softmax')
cost = fluid.layers.cross_entropy(input=predict, label=label)
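The two formulas above can be checked with a small numpy sketch; the probabilities and labels below are illustrative values, and this only reproduces the math, not the fluid op.

import numpy as np

X = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.6, 0.3]])          # softmax outputs, shape [N, D]

# one-hot cross-entropy (soft_label=False): Y[i] = -log(X[i, Label[i]])
hard_label = np.array([0, 2])            # class index per sample
hard_loss = -np.log(X[np.arange(len(hard_label)), hard_label])

# soft-label cross-entropy (soft_label=True): Y[i] = -sum_j Label[i, j] * log(X[i, j])
soft_label = np.array([[0.8, 0.1, 0.1],
                       [0.0, 0.5, 0.5]]) # each row sums to one
soft_loss = -(soft_label * np.log(X)).sum(axis=1)

print(hard_loss)                         # per-sample losses, shape [N]
print(soft_loss)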
ctc_greedy_decoder¶
-
paddle.fluid.layers.
ctc_greedy_decoder
(input, blank, name=None)[source] This op decodes sequences with a greedy policy in the following steps:
Get the index of the max value for each row of input, i.e. numpy.argmax(input, axis=1).
For each sequence in the result of step 1, merge repeated tokens between two blanks and delete all blanks.
A simple example as below:
Given:
    input.data = [[0.6, 0.1, 0.3, 0.1],
                  [0.3, 0.2, 0.4, 0.1],
                  [0.1, 0.5, 0.1, 0.3],
                  [0.5, 0.1, 0.3, 0.1],
                  [0.5, 0.1, 0.3, 0.1],
                  [0.2, 0.2, 0.2, 0.4],
                  [0.2, 0.2, 0.1, 0.5],
                  [0.5, 0.1, 0.3, 0.1]]
    input.lod = [[4, 4]]
Computation:
    step 1: Apply argmax to the first input sequence, input.data[0:4]. We get: [[0], [2], [1], [0]]
    step 2: Merge repeated tokens and remove the blank, which is 0. We get the first output sequence: [[2], [1]]
Finally:
    output.data = [[2], [1], [3]]
    output.lod = [[2, 1]]
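The two decoding steps above can be sketched in plain Python/numpy; the array below is the first sequence from the worked example, and this is an illustration only, not the fluid op.

import numpy as np

probs = np.array([[0.6, 0.1, 0.3, 0.1],
                  [0.3, 0.2, 0.4, 0.1],
                  [0.1, 0.5, 0.1, 0.3],
                  [0.5, 0.1, 0.3, 0.1]])   # first sequence, input.data[0:4]
blank = 0

tokens = probs.argmax(axis=1)               # step 1 -> [0, 2, 1, 0]
decoded = []
prev = None
for t in tokens:                            # step 2: merge repeats, drop blanks
    if t != blank and t != prev:
        decoded.append(int(t))
    prev = t
print(decoded)                              # [2, 1]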
- Parameters
input (Variable) – (LoDTensor<float>), the probabilities of variable-length sequences, which is a 2-D Tensor with LoD information. Its shape is [Lp, num_classes + 1], where Lp is the sum of all input sequences’ lengths and num_classes is the true number of classes (not including the blank label).
blank (int) – the blank label index of Connectionist Temporal Classification (CTC) loss, which is in the half-open interval [0, num_classes + 1).
name (str) – The name of this layer. It is optional.
- Returns
CTC greedy decode result, which is a 2-D tensor with shape [Lp, 1]. ‘Lp’ is the sum of all output sequences’ lengths. If all the sequences in the result are empty, the result LoDTensor will be [-1] with LoD [[]] and dims [1, 1].
- Return type
Variable
Examples
import paddle.fluid as fluid
x = fluid.layers.data(name='x', shape=[8], dtype='float32')
cost = fluid.layers.ctc_greedy_decoder(input=x, blank=0)
data_norm¶
-
paddle.fluid.layers.
data_norm
(input, act=None, epsilon=1e-05, param_attr=None, data_layout='NCHW', in_place=False, name=None, moving_mean_name=None, moving_variance_name=None, do_model_average_for_mean_and_var=False)[source] Data Normalization Layer
Can be used as a normalizer function for conv2d and fully_connected operations. The required data format for this layer is one of the following:
NHWC [batch, in_height, in_width, in_channels]
NCHW [batch, in_channels, in_height, in_width]
\(input\) is the input features over a mini-batch.
\[\begin{split}\mu_{\beta} &\gets \frac{1}{m} \sum_{i=1}^{m} x_i \qquad &//\ \ mini-batch\ mean \\ \sigma_{\beta}^{2} &\gets \frac{1}{m} \sum_{i=1}^{m}(x_i - \ \mu_{\beta})^2 \qquad &//\ mini-batch\ variance \\ \hat{x_i} &\gets \frac{x_i - \mu_\beta} {\sqrt{\ \sigma_{\beta}^{2} + \epsilon}} \qquad &//\ normalize \\ y_i &\gets \gamma \hat{x_i} + \beta \qquad &//\ scale\ and\ shift\end{split}\]- Parameters
input (variable) – The input variable which is a LoDTensor.
act (string, Default None) – Activation type, linear|relu|prelu|…
epsilon (float, Default 1e-05) –
param_attr (ParamAttr) – The parameter attribute for Parameter scale.
data_layout (string, default NCHW) – NCHW|NHWC
in_place (bool, Default False) – Make the input and output of batch norm reuse memory.
name (string, Default None) – A name for this layer(optional). If set None, the layer will be named automatically.
moving_mean_name (string, Default None) – The name of moving_mean, which stores the global mean.
moving_variance_name (string, Default None) – The name of moving_variance, which stores the global variance.
do_model_average_for_mean_and_var (bool, Default False) – Do model average for mean and variance or not.
- Returns
A tensor variable which is the result after applying data normalization on the input.
- Return type
Variable
Examples
import paddle.fluid as fluid
hidden1 = fluid.layers.data(name="hidden1", shape=[200])
hidden2 = fluid.layers.data_norm(name="hidden2", input=hidden1)
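The normalization equations above can be sketched in numpy (per-feature mean and variance, normalize, then scale and shift); gamma and beta stand in for the learnable scale and shift parameters, all values are illustrative, and this is not the fluid op itself.

import numpy as np

x = np.random.randn(8, 200).astype('float32')   # a mini-batch, shape [m, D]
gamma = np.ones(200, dtype='float32')            # scale
beta = np.zeros(200, dtype='float32')            # shift
epsilon = 1e-5

mu = x.mean(axis=0)                               # mini-batch mean
var = ((x - mu) ** 2).mean(axis=0)                # mini-batch variance
x_hat = (x - mu) / np.sqrt(var + epsilon)         # normalize
y = gamma * x_hat + beta                          # scale and shift
print(y.shape)                                    # (8, 200)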
deformable_conv¶
-
paddle.fluid.layers.
deformable_conv
(input, offset, mask, num_filters, filter_size, stride=1, padding=0, dilation=1, groups=None, deformable_groups=None, im2col_step=None, param_attr=None, bias_attr=None, name=None)[source] Deformable Convolution Layer
Compute 2-D deformable convolution on 4-D input. Given input image x and output feature map y, the deformable convolution operation can be expressed as follows:
\[y(p) = \sum_{k=1}^{K}{w_k * x(p + p_k + \Delta p_k) * \Delta m_k}\]Where \(\Delta p_k\) and \(\Delta m_k\) are the learnable offset and modulation scalar for the k-th location, respectively. Refer to Deformable ConvNets v2: More Deformable, Better Results .
Example
Input:
Input shape: \((N, C_{in}, H_{in}, W_{in})\)
Filter shape: \((C_{out}, C_{in}, H_f, W_f)\)
Offset shape: \((N, 2 * deformable\_groups * H_f * W_f, H_{in}, W_{in})\)
Mask shape: \((N, deformable\_groups * H_f * W_f, H_{in}, W_{in})\)
Output:
Output shape: \((N, C_{out}, H_{out}, W_{out})\)
Where
\[\begin{split}H_{out}&= \frac{(H_{in} + 2 * paddings[0] - (dilations[0] * (H_f - 1) + 1))}{strides[0]} + 1 \\ W_{out}&= \frac{(W_{in} + 2 * paddings[1] - (dilations[1] * (W_f - 1) + 1))}{strides[1]} + 1\end{split}\]- Parameters
input (Variable) – The input image with [N, C, H, W] format.
offset (Variable) – The input coord offset of deformable convolution layer.
mask (Variable) – The input mask of the deformable convolution layer.
num_filters (int) – The number of filters. It is the same as the number of channels of the output image.
filter_size (int|tuple|None) – The filter size. If filter_size is a tuple, it must contain two integers, (filter_size_H, filter_size_W). Otherwise, the filter will be a square.
stride (int|tuple) – The stride size. If stride is a tuple, it must contain two integers, (stride_H, stride_W). Otherwise, the stride_H = stride_W = stride. Default: stride = 1.
padding (int|tuple) – The padding size. If padding is a tuple, it must contain two integers, (padding_H, padding_W). Otherwise, the padding_H = padding_W = padding. Default: padding = 0.
dilation (int|tuple) – The dilation size. If dilation is a tuple, it must contain two integers, (dilation_H, dilation_W). Otherwise, the dilation_H = dilation_W = dilation. Default: dilation = 1.
groups (int) – The groups number of the deformable conv layer. According to grouped convolution in Alex Krizhevsky’s Deep CNN paper: when group=2, the first half of the filters is only connected to the first half of the input channels, while the second half of the filters is only connected to the second half of the input channels. Default: groups=1.
deformable_groups (int) – The number of deformable group partitions. Default: deformable_groups = 1.
im2col_step (int) – Maximum number of images per im2col computation; the total batch size should be divisible by this value or smaller than this value. If you face an out-of-memory problem, you can try to use a smaller value here. Default: im2col_step = 64.
param_attr (ParamAttr|None) – The parameter attribute for learnable parameters/weights of deformable conv. If it is set to None or one attribute of ParamAttr, deformable conv will create ParamAttr as param_attr. If the Initializer of the param_attr is not set, the parameter is initialized with \(Normal(0.0, std)\), and the \(std\) is \((\frac{2.0 }{filter\_elem\_num})^{0.5}\). Default: None.
bias_attr (ParamAttr|bool|None) – The parameter attribute for the bias of deformable conv layer. If it is set to False, no bias will be added to the output units. If it is set to None or one attribute of ParamAttr, conv2d will create ParamAttr as bias_attr. If the Initializer of the bias_attr is not set, the bias is initialized zero. Default: None.
name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically. Default: None
- Returns
The tensor variable storing the deformable convolution result.
- Return type
Variable
- Raises
ValueError
– If the shapes of input, filter_size, stride, padding and groups mismatch.
Examples
import paddle.fluid as fluid
data = fluid.layers.data(name='data', shape=[3, 32, 32], dtype='float32')
offset = fluid.layers.data(name='offset', shape=[18, 32, 32], dtype='float32')
mask = fluid.layers.data(name='mask', shape=[9, 32, 32], dtype='float32')
out = fluid.layers.deformable_conv(input=data, offset=offset, mask=mask,
                                   num_filters=2, filter_size=3, padding=1)
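The offset and mask channel counts used in the example above follow directly from the shape rules given earlier (2 * deformable_groups * H_f * W_f and deformable_groups * H_f * W_f channels); the small sketch below just recomputes them under the example's assumed filter_size=3 and deformable_groups=1.

filter_size = 3            # H_f = W_f = 3
deformable_groups = 1

offset_channels = 2 * deformable_groups * filter_size * filter_size
mask_channels = deformable_groups * filter_size * filter_size
print(offset_channels, mask_channels)   # 18 9 -> the channel sizes fed to the layer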
deformable_roi_pooling¶
-
paddle.fluid.layers.
deformable_roi_pooling
(input, rois, trans, no_trans=False, spatial_scale=1.0, group_size=[1, 1], pooled_height=1, pooled_width=1, part_size=None, sample_per_part=1, trans_std=0.1, position_sensitive=False, name=None)[source] Deformable PSROI Pooling Layer
- Parameters
input (Variable) – The input of Deformable PSROIPooling. The shape of the input tensor is [N, C, H, W], where N is the batch size, C is the number of input channels, H is the height of the feature, and W is the width of the feature.
rois (Variable) – ROIs (Regions of Interest) to pool over. It should be a 2-D LoDTensor of shape (num_rois, 4) with lod level 1, given as [[x1, y1, x2, y2], …], where (x1, y1) is the top left coordinate and (x2, y2) is the bottom right coordinate.
trans (Variable) – Offset of features on ROIs while pooling. The format is NCHW, where N is the number of ROIs, C is the number of channels (which indicate the offset distances in the x and y directions), H is the pooled height, and W is the pooled width.
no_trans (bool) – Whether to add the offset to get new values while ROI pooling; the value is True or False. Default: False.
spatial_scale (float) – Ratio of input feature map height (or width) to raw image height (or width). Equals the reciprocal of total stride in convolutional layers, Default: 1.0.
group_size (list|tuple) – The number of groups into which the input channels are divided (e.g. the number of input channels is k1 * k2 * (C + 1), where k1 and k2 are the group width and height and C + 1 is the number of output channels; e.g. (4, 6) means the group height is 4 and the group width is 6). Default: [1, 1].
pooled_height (integer) – The pooled output height. Default: 1.
pooled_width (integer) – The pooled output width. Default: 1.
part_size (list|tuple) – The height and width of the offset, e.g. (4, 6) means the height is 4 and the width is 6. Default: if None, the default value is [pooled_height, pooled_width].
sample_per_part (integer) – The number of samples in each bin. Default: 1.
trans_std (float) – Coefficient of offset. Default: 0.1.
position_sensitive (bool) – Whether to choose deformable psroi pooling mode or not. Default: False.
name (str) – Name of layer. Default: None.
- Returns
The tensor variable storing the deformable psroi pooling result.
- Return type
Variable
Examples
import paddle.fluid as fluid
input = fluid.layers.data(name="input", shape=[2, 192, 64, 64],
                          dtype='float32', append_batch_size=False)
rois = fluid.layers.data(name="rois", shape=[4], dtype='float32', lod_level=1)
trans = fluid.layers.data(name="trans", shape=[2, 384, 64, 64],
                          dtype='float32', append_batch_size=False)
x = fluid.layers.nn.deformable_roi_pooling(input=input, rois=rois, trans=trans,
                                           no_trans=False, spatial_scale=1.0,
                                           group_size=(1, 1), pooled_height=8,
                                           pooled_width=8, part_size=(8, 8),
                                           sample_per_part=4, trans_std=0.1,
                                           position_sensitive=False)
dice_loss¶
-
paddle.fluid.layers.
dice_loss
(input, label, epsilon=1e-05)[source] Dice loss for comparing the similarity of two batches of data; it is usually used for binary image segmentation, i.e. the labels are binary. The dice loss can be defined as the equation below:
\[\begin{split}dice\_loss &= 1 - \frac{2 * intersection\_area}{total\_area} \\ &= \frac{(total\_area - intersection\_area) - intersection\_area}{total\_area} \\ &= \frac{(union\_area - intersection\_area)}{total\_area}\end{split}\]- Parameters
input (Variable) – The predictions with rank>=2. The first dimension is batch size, and the last dimension is class number.
label (Variable) – The ground truth with the same rank as input. The first dimension is batch size, and the last dimension is 1.
epsilon (float) – The epsilon will be added to the numerator and denominator. If both input and label are empty, it makes sure dice is 1. Default: 0.00001
- Returns
The dice loss with shape [1].
- Return type
dice_loss (Variable)
Examples
import paddle.fluid as fluid
x = fluid.layers.data(name='data', shape=[3, 224, 224, 2], dtype='float32')
label = fluid.layers.data(name='label', shape=[3, 224, 224, 1], dtype='float32')
predictions = fluid.layers.softmax(x)
loss = fluid.layers.dice_loss(input=predictions, label=label)
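The dice loss equation above can be sketched in numpy for a single binary mask; the predictions and labels are illustrative values, epsilon is added to the numerator and denominator as the parameter description states, and this only illustrates the math, not the fluid op.

import numpy as np

pred = np.array([0.9, 0.8, 0.1, 0.2])    # predicted foreground probabilities
label = np.array([1.0, 1.0, 0.0, 1.0])   # binary ground truth
epsilon = 1e-5

intersection = (pred * label).sum()
total = pred.sum() + label.sum()
dice_loss = 1.0 - (2.0 * intersection + epsilon) / (total + epsilon)
print(dice_loss)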
dropout¶
-
paddle.fluid.layers.
dropout
(x, dropout_prob, is_test=False, seed=None, name=None, dropout_implementation='downgrade_in_infer')[source] Computes dropout.
Drop or keep each element of x independently. Dropout is a regularization technique for reducing overfitting by preventing neuron co-adaptation during training. The dropout operator randomly sets the outputs of some units to zero (according to the given dropout probability), while the others remain unchanged.
dropout op can be removed from the program to make the program more efficient.
- Parameters
x (Variable) – The input tensor variable.
dropout_prob (float) – Probability of setting units to zero.
is_test (bool) – A flag indicating whether it is in the test phase or not.
seed (int) – A Python integer used to create random seeds. If this parameter is set to None, a random seed is used. NOTE: If an integer seed is given, the same output units will always be dropped. DO NOT use a fixed seed in training.
name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
dropout_implementation (string) –
[‘downgrade_in_infer’(default)|’upscale_in_train’]
downgrade_in_infer (default): downgrade the outcome at inference time
train: out = input * mask
inference: out = input * (1.0 - dropout_prob)
(mask is a tensor with the same shape as input; its values are 0 or 1, and the ratio of 0 is dropout_prob)
upscale_in_train: upscale the outcome at training time
train: out = input * mask / (1.0 - dropout_prob)
inference: out = input
(mask is a tensor with the same shape as input; its values are 0 or 1, and the ratio of 0 is dropout_prob)
A numpy sketch of both modes follows the example below.
- Returns
A tensor variable with the same shape as x.
- Return type
Variable
Examples
import paddle.fluid as fluid x = fluid.layers.data(name="data", shape=[32, 32], dtype="float32") droped = fluid.layers.dropout(x, dropout_prob=0.5)
dynamic_gru¶
-
paddle.fluid.layers.
dynamic_gru
(input, size, param_attr=None, bias_attr=None, is_reverse=False, gate_activation='sigmoid', candidate_activation='tanh', h_0=None, origin_mode=False)[source] Gated Recurrent Unit (GRU) Layer
If origin_mode is False, the equation of a GRU step is from the paper Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling.
The formula is as follows:
\[ \begin{align}\begin{aligned}u_t & = act_g(W_{ux}x_{t} + W_{uh}h_{t-1} + b_u)\\r_t & = act_g(W_{rx}x_{t} + W_{rh}h_{t-1} + b_r)\\\tilde{h_t} & = act_c(W_{cx}x_{t} + W_{ch}(r_t \odot h_{t-1}) + b_c)\\h_t & = (1-u_t) \odot h_{t-1} + u_t \odot \tilde{h_t}\end{aligned}\end{align} \]If origin_mode is True, the equation is from the paper Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation (https://arxiv.org/pdf/1406.1078.pdf).
\[ \begin{align}\begin{aligned}u_t & = act_g(W_{ux}x_{t} + W_{uh}h_{t-1} + b_u)\\r_t & = act_g(W_{rx}x_{t} + W_{rh}h_{t-1} + b_r)\\\tilde{h_t} & = act_c(W_{cx}x_{t} + W_{ch}(r_t \odot h_{t-1}) + b_c)\\h_t & = u_t \odot h_{t-1} + (1-u_t) \odot \tilde{h_t}\end{aligned}\end{align} \]The \(\odot\) is the element-wise product of the vectors. \(act_g\) is the update gate and reset gate activation function and \(sigmoid\) is usually used for it. \(act_c\) is the activation function for candidate hidden state and \(tanh\) is usually used for it.
Note that the \(W_{ux}x_{t}, W_{rx}x_{t}, W_{cx}x_{t}\) operations on the input \(x_{t}\) are NOT included in this operator. Users can choose to use a fully-connected layer before the GRU layer.
- Parameters
input (Variable) – The input of dynamic_gru layer, which supports variable-time length input sequence. The underlying tensor in this Variable is a matrix with shape \((T \times 3D)\), where \(T\) is the total time steps in this mini-batch, \(D\) is the hidden size.
size (int) – The dimension of the gru cell.
param_attr (ParamAttr|None) –
The parameter attribute for the learnable hidden-hidden weight matrix. Note:
The shape of the weight matrix is \((D \times 3D)\), where \(D\) is the hidden size.
All elements in the weight matrix can be divided into two parts. The first part are weights of the update gate and reset gate with shape \((D \times 2D)\), and the second part are weights for candidate hidden state with shape \((D \times D)\).
If it is set to None or one attribute of ParamAttr, dynamic_gru will create ParamAttr as param_attr. If the Initializer of the param_attr is not set, the parameter is initialized with Xavier. Default: None.
bias_attr (ParamAttr|bool|None) – The parameter attribute for the bias of GRU.Note that the bias with \((1 \times 3D)\) concatenates the bias in the update gate, reset gate and candidate calculations. If it is set to False, no bias will be applied to the update gate, reset gate and candidate calculations. If it is set to None or one attribute of ParamAttr, dynamic_gru will create ParamAttr as bias_attr. If the Initializer of the bias_attr is not set, the bias is initialized zero. Default: None.
is_reverse (bool) – Whether to compute reversed GRU. Default: False.
gate_activation (str) – The activation for update gate and reset gate. Choices = [“sigmoid”, “tanh”, “relu”, “identity”], default “sigmoid”.
candidate_activation (str) – The activation for candidate hidden state. Choices = [“sigmoid”, “tanh”, “relu”, “identity”], default “tanh”.
h_0 (Variable) – The initial hidden state. If not set, the default is zero. This is a tensor with shape (N x D), where N is the batch size and D is the hidden size.
- Returns
The hidden state of GRU. The shape is \((T \times D)\), and sequence length is the same with the input.
- Return type
Variable
Examples
import paddle.fluid as fluid
dict_dim, emb_dim = 128, 64
data = fluid.layers.data(name='sequence', shape=[1], dtype='int32', lod_level=1)
emb = fluid.layers.embedding(input=data, size=[dict_dim, emb_dim])
hidden_dim = 512
x = fluid.layers.fc(input=emb, size=hidden_dim * 3)
hidden = fluid.layers.dynamic_gru(input=x, size=hidden_dim)
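A single GRU step with origin_mode=False can be sketched in numpy following the first set of equations above. The input projections (W_ux x_t, W_rx x_t, W_cx x_t) are assumed to be precomputed by the preceding fc layer, as the note above says; all weights and sizes are illustrative, and this is not the fluid op.

import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

D = 4                                     # hidden size
x_proj = np.random.randn(3 * D)           # [W_ux x_t, W_rx x_t, W_cx x_t]
h_prev = np.zeros(D)
W_uh, W_rh, W_ch = (np.random.randn(D, D) for _ in range(3))
b_u, b_r, b_c = (np.zeros(D) for _ in range(3))

u = sigmoid(x_proj[0:D] + h_prev @ W_uh + b_u)             # update gate
r = sigmoid(x_proj[D:2 * D] + h_prev @ W_rh + b_r)         # reset gate
h_tilde = np.tanh(x_proj[2 * D:] + (r * h_prev) @ W_ch + b_c)
h = (1.0 - u) * h_prev + u * h_tilde                       # origin_mode=False
print(h.shape)                                             # (4,)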
dynamic_lstm¶
-
paddle.fluid.layers.
dynamic_lstm
(input, size, h_0=None, c_0=None, param_attr=None, bias_attr=None, use_peepholes=True, is_reverse=False, gate_activation='sigmoid', cell_activation='tanh', candidate_activation='tanh', dtype='float32', name=None)[source] Long-Short Term Memory (LSTM) Operator.
The default implementation is diagonal/peephole connection (https://arxiv.org/pdf/1402.1128.pdf), the formula is as follows:
$$ i_t = \sigma(W_{ix}x_{t} + W_{ih}h_{t-1} + W_{ic}c_{t-1} + b_i) $$
$$ f_t = \sigma(W_{fx}x_{t} + W_{fh}h_{t-1} + W_{fc}c_{t-1} + b_f) $$
$$ \tilde{c_t} = act_g(W_{cx}x_t + W_{ch}h_{t-1} + b_c) $$
$$ o_t = \sigma(W_{ox}x_{t} + W_{oh}h_{t-1} + W_{oc}c_t + b_o) $$
$$ c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c_t} $$
$$ h_t = o_t \odot act_h(c_t) $$
The W terms denote weight matrices (e.g. \(W_{xi}\) is the matrix of weights from the input to the input gate), and \(W_{ic}, W_{fc}, W_{oc}\) are diagonal weight matrices for peephole connections. In our implementation, we use vectors to represent these diagonal weight matrices.
The b terms denote bias vectors (\(b_i\) is the input gate bias vector).
\(\sigma\) is the non-linear activation, such as the logistic sigmoid function.
\(i, f, o\) and \(c\) are the input gate, forget gate, output gate, and cell activation vectors, respectively, all of which have the same size as the cell output activation vector \(h\).
\(\odot\) is the element-wise product of the vectors.
\(act_g\) and \(act_h\) are the cell input and cell output activation functions; tanh is usually used for them.
\(\tilde{c_t}\) is also called the candidate hidden state, which is computed based on the current input and the previous hidden state.
Set use_peepholes to False to disable peephole connections. The formula is omitted here; please refer to the paper http://www.bioinf.jku.at/publications/older/2604.pdf for details.
Note that the \(W_{xi}x_{t}, W_{xf}x_{t}, W_{xc}x_{t}, W_{xo}x_{t}\) operations on the input \(x_{t}\) are NOT included in this operator. Users can choose to use a fully-connected operator before the LSTM operator.
- Parameters
input (Variable) – (LoDTensor) the first input is a LodTensor, which support variable-time length input sequence. The underlying tensor in this LoDTensor is a matrix with shape (T X 4D), where T is the total time steps in this mini-batch, D is the hidden size
size (int) – 4 * hidden size.
h_0 (Variable) – The initial hidden state is an optional input, default is zero. This is a tensor with shape (N x D), where N is the batch size and D is the hidden size.
c_0 (Variable) – The initial cell state is an optional input, default is zero. This is a tensor with shape (N x D), where N is the batch size. h_0 and c_0 can be NULL but only at the same time.
param_attr (ParamAttr|None) –
The parameter attribute for the learnable hidden-hidden weights.
Weights = {\(W_{ch}, W_{ih}, W_{fh}, W_{oh}\)}
The shape is (D x 4D), where D is the hidden size.
If it is set to None or one attribute of ParamAttr, dynamic_lstm will create ParamAttr as param_attr. If the Initializer of the param_attr is not set, the parameter is initialized with Xavier. Default: None.
bias_attr (ParamAttr|None) –
The bias attribute for the learnable bias weights, which contains two parts, input-hidden bias weights and peephole connections weights if setting use_peepholes to True.
use_peepholes = False - Biases = {\(b_c, b_i, b_f, b_o\)}. - The shape is (1 x 4D).
use_peepholes = True - Biases = { \(b_c, b_i, b_f, b_o, W_{ic}, W_{fc}, W_{oc}\)}. - The shape is (1 x 7D).
If it is set to None or one attribute of ParamAttr, dynamic_lstm will create ParamAttr as bias_attr. If the Initializer of the bias_attr is not set, the bias is initialized zero. Default: None.
use_peepholes (bool) – (bool, default: True) whether to enable diagonal/peephole connections
is_reverse (bool) – (bool, default: False) whether to compute reversed LSTM
gate_activation (str) – (string, default: sigmoid)The activation for input gate, forget gate and output gate, sigmoid by default
cell_activation (str) – (string, default: tanh)The activation for cell output, tanh by default
candidate_activation (str) – (string, default: tanh)The activation for candidate hidden state, tanh by default
dtype (str) – Data type. Choices = [“float32”, “float64”], default “float32”.
name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
- Returns
The hidden state, and cell state of LSTM. The shape of both is (T x D), and lod is the same with the input.
- Return type
tuple
Examples
import paddle.fluid as fluid
emb_dim = 256
vocab_size = 10000
hidden_dim = 512
data = fluid.layers.data(name='x', shape=[1], dtype='int32', lod_level=1)
emb = fluid.layers.embedding(input=data, size=[vocab_size, emb_dim], is_sparse=True)
forward_proj = fluid.layers.fc(input=emb, size=hidden_dim * 4, bias_attr=False)
forward, _ = fluid.layers.dynamic_lstm(input=forward_proj, size=hidden_dim * 4,
                                       use_peepholes=False)
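A single peephole-LSTM step can be sketched in numpy following the equations above. The input projections (W_ix x_t and so on) are assumed to come from the preceding fc layer, and the peephole weights W_ic, W_fc, W_oc are vectors as noted above; all values are illustrative, and this is not the fluid op.

import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

D = 4
x_proj = np.random.randn(4 * D)                  # [i, f, c, o] input projections
h_prev, c_prev = np.zeros(D), np.zeros(D)
W_ih, W_fh, W_ch, W_oh = (np.random.randn(D, D) for _ in range(4))
W_ic, W_fc, W_oc = (np.random.randn(D) for _ in range(3))   # peephole vectors
b_i, b_f, b_c, b_o = (np.zeros(D) for _ in range(4))

i = sigmoid(x_proj[0:D] + h_prev @ W_ih + W_ic * c_prev + b_i)
f = sigmoid(x_proj[D:2 * D] + h_prev @ W_fh + W_fc * c_prev + b_f)
c_tilde = np.tanh(x_proj[2 * D:3 * D] + h_prev @ W_ch + b_c)
c = f * c_prev + i * c_tilde
o = sigmoid(x_proj[3 * D:] + h_prev @ W_oh + W_oc * c + b_o)
h = o * np.tanh(c)
print(h.shape)                                   # (4,)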
dynamic_lstmp¶
-
paddle.fluid.layers.
dynamic_lstmp
(input, size, proj_size, param_attr=None, bias_attr=None, use_peepholes=True, is_reverse=False, gate_activation='sigmoid', cell_activation='tanh', candidate_activation='tanh', proj_activation='tanh', dtype='float32', name=None, h_0=None, c_0=None, cell_clip=None, proj_clip=None)[source] Dynamic LSTMP Layer
The LSTMP (LSTM with recurrent projection) layer has a separate projection layer after the LSTM layer, projecting the original hidden state to a lower-dimensional one. This was proposed to reduce the total number of parameters and, in turn, the computational complexity of the LSTM, especially for the case where the number of output units is relatively large (https://research.google.com/pubs/archive/43905.pdf).
The formula is as follows:
\[ \begin{align}\begin{aligned}i_t & = \sigma(W_{ix}x_{t} + W_{ir}r_{t-1} + W_{ic}c_{t-1} + b_i)\\f_t & = \sigma(W_{fx}x_{t} + W_{fr}r_{t-1} + W_{fc}c_{t-1} + b_f)\\\tilde{c_t} & = act_g(W_{cx}x_t + W_{cr}r_{t-1} + b_c)\\o_t & = \sigma(W_{ox}x_{t} + W_{or}r_{t-1} + W_{oc}c_t + b_o)\\c_t & = f_t \odot c_{t-1} + i_t \odot \tilde{c_t}\\h_t & = o_t \odot act_h(c_t)\\r_t & = \overline{act_h}(W_{rh}h_t)\end{aligned}\end{align} \]In the above formula:
\(W\): Denotes weight matrices (e.g. \(W_{xi}\) is the matrix of weights from the input to the input gate).
\(W_{ic}\), \(W_{fc}\), \(W_{oc}\): Diagonal weight matrices for peephole connections. In our implementation, we use vectors to represent these diagonal weight matrices.
\(b\): Denotes bias vectors (e.g. \(b_i\) is the input gate bias vector).
\(\sigma\): The activation, such as logistic sigmoid function.
\(i, f, o\) and \(c\): The input gate, forget gate, output gate, and cell activation vectors, respectively, all of which have the same size as the cell output activation vector \(h\).
\(h\): The hidden state.
\(r\): The recurrent projection of the hidden state.
\(\tilde{c_t}\): The candidate hidden state, whose computation is based on the current input and previous hidden state.
\(\odot\): The element-wise product of the vectors.
\(act_g\) and \(act_h\): The cell input and cell output activation functions and tanh is usually used for them.
\(\overline{act_h}\): The activation function for the projection output, usually using identity or same as \(act_h\).
Set use_peepholes to False to disable peephole connection. The formula is omitted here, please refer to the paper http://www.bioinf.jku.at/publications/older/2604.pdf for details.
Note that these \(W_{xi}x_{t}, W_{xf}x_{t}, W_{xc}x_{t}, W_{xo}x_{t}\) operations on the input \(x_{t}\) are NOT included in this operator. Users can choose to use fully-connected layer before LSTMP layer.
- Parameters
input (Variable) – The input of dynamic_lstmp layer, which supports variable-time length input sequence. The underlying tensor in this Variable is a matrix with shape (T X 4D), where T is the total time steps in this mini-batch, D is the hidden size.
size (int) – 4 * hidden size.
proj_size (int) – The size of projection output.
param_attr (ParamAttr|None) –
The parameter attribute for the learnable hidden-hidden weight and projection weight.
Hidden-hidden weight = {\(W_{ch}, W_{ih}, W_{fh}, W_{oh}\)}.
The shape of hidden-hidden weight is (P x 4D), where P is the projection size and D the hidden size.
Projection weight = {\(W_{rh}\)}.
The shape of projection weight is (D x P).
If it is set to None or one attribute of ParamAttr, dynamic_lstm will create ParamAttr as param_attr. If the Initializer of the param_attr is not set, the parameter is initialized with Xavier. Default: None.
bias_attr (ParamAttr|None) –
The bias attribute for the learnable bias weights, which contains two parts, input-hidden bias weights and peephole connections weights if setting use_peepholes to True.
use_peepholes = False
Biases = {\(b_c, b_i, b_f, b_o\)}.
The shape is (1 x 4D).
use_peepholes = True
Biases = { \(b_c, b_i, b_f, b_o, W_{ic}, W_{fc}, W_{oc}\)}.
The shape is (1 x 7D).
If it is set to None or one attribute of ParamAttr, dynamic_lstm will create ParamAttr as bias_attr. If the Initializer of the bias_attr is not set, the bias is initialized zero. Default: None.
use_peepholes (bool) – Whether to enable diagonal/peephole connections, default True.
is_reverse (bool) – Whether to compute reversed LSTM, default False.
gate_activation (str) – The activation for input gate, forget gate and output gate. Choices = [“sigmoid”, “tanh”, “relu”, “identity”], default “sigmoid”.
cell_activation (str) – The activation for cell output. Choices = [“sigmoid”, “tanh”, “relu”, “identity”], default “tanh”.
candidate_activation (str) – The activation for candidate hidden state. Choices = [“sigmoid”, “tanh”, “relu”, “identity”], default “tanh”.
proj_activation (str) – The activation for projection output. Choices = [“sigmoid”, “tanh”, “relu”, “identity”], default “tanh”.
dtype (str) – Data type. Choices = [“float32”, “float64”], default “float32”.
name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
h_0 (Variable) – The initial hidden state is an optional input, default is zero. This is a tensor with shape (N x D), where N is the batch size and D is the projection size.
c_0 (Variable) – The initial cell state is an optional input, default is zero. This is a tensor with shape (N x D), where N is the batch size. h_0 and c_0 can be NULL but only at the same time.
cell_clip (float) – If provided, the cell state is clipped by this value prior to the cell output activation.
proj_clip (float) – If num_proj > 0 and proj_clip is provided, then the projected values are clipped elementwise to within [-proj_clip, proj_clip].
- Returns
A tuple of two output variables: the projection of the hidden state and the cell state of the LSTMP. The shape of the projection is (T x P), the shape of the cell state is (T x D), and the LoD of both is the same as the input.
- Return type
tuple
Examples
import paddle.fluid as fluid
dict_dim, emb_dim = 128, 64
data = fluid.layers.data(name='sequence', shape=[1], dtype='int32', lod_level=1)
emb = fluid.layers.embedding(input=data, size=[dict_dim, emb_dim])
hidden_dim, proj_dim = 512, 256
fc_out = fluid.layers.fc(input=emb, size=hidden_dim * 4, act=None, bias_attr=None)
proj_out, _ = fluid.layers.dynamic_lstmp(input=fc_out, size=hidden_dim * 4,
                                         proj_size=proj_dim, use_peepholes=False,
                                         is_reverse=True, cell_activation="tanh",
                                         proj_activation="tanh")
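The extra step that distinguishes LSTMP from LSTM is the recurrent projection r_t = act(W_rh h_t), optionally followed by proj_clip; the numpy sketch below shows just that step with illustrative sizes (h_t would come from an LSTM step), and it is not the fluid op.

import numpy as np

D, P = 4, 2                       # hidden size and projection size
h_t = np.random.randn(D)          # LSTM cell output at one time step
W_rh = np.random.randn(D, P)      # projection weight, shape (D x P)
proj_clip = 5.0

r_t = np.tanh(h_t @ W_rh)                         # projection, shape (P,)
r_t = np.clip(r_t, -proj_clip, proj_clip)         # elementwise clipping
print(r_t.shape)                                  # (2,)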
edit_distance¶
-
paddle.fluid.layers.
edit_distance
(input, label, normalized=True, ignored_tokens=None)[source] Edit distance operator computes the edit distances between a batch of hypothesis strings and their references. Edit distance, also called Levenshtein distance, measures how dissimilar two strings are by counting the minimum number of operations to transform one string into another. Here the operations include insertion, deletion, and substitution.
For example, given hypothesis string A = “kitten” and reference B = “sitting”, the edit distance is 3, since A can be transformed into B with at least two substitutions and one insertion:
“kitten” -> “sitten” -> “sittin” -> “sitting”
The input is a LoDTensor consisting of all the hypothesis strings with the total number denoted by batch_size, and the separation is specified by the LoD information. And the batch_size reference strings are arranged in order in the same way in the input LoDTensor.
The output contains the batch_size results and each stands for the edit distance for a pair of strings respectively. If Attr(normalized) is true, the edit distance will be divided by the length of reference string.
- Parameters
input (Variable) – The indices for hypothesis strings.
label (Variable) – The indices for reference strings.
normalized (bool, default True) – Indicates whether to normalize the edit distance by the length of the reference string.
ignored_tokens (list<int>, default None) – Tokens that should be removed before calculating edit distance.
name (str) – The name of this layer. It is optional.
- Returns
sequence-to-sequence edit distance in shape [batch_size, 1].
- Return type
Variable
Examples
import paddle.fluid as fluid
x = fluid.layers.data(name='x', shape=[1], dtype='int64')
y = fluid.layers.data(name='y', shape=[1], dtype='int64')
cost, _ = fluid.layers.edit_distance(input=x, label=y)

cpu = fluid.core.CPUPlace()
exe = fluid.Executor(cpu)
exe.run(fluid.default_startup_program())

import numpy
x_ = numpy.random.randint(5, size=(2, 1)).astype('int64')
y_ = numpy.random.randint(5, size=(2, 1)).astype('int64')
print(x_)
print(y_)

x = fluid.create_lod_tensor(x_, [[2]], cpu)
y = fluid.create_lod_tensor(y_, [[2]], cpu)
outs = exe.run(feed={'x': x, 'y': y}, fetch_list=[cost.name])
print(outs)
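The Levenshtein distance described above can be sketched as a plain-Python dynamic program, checking the “kitten” -> “sitting” example; this is an illustration only, whereas the fluid op works on LoDTensors of token indices.

def levenshtein(a, b):
    # dp[i][j] = edit distance between a[:i] and b[:j]
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        dp[i][0] = i
    for j in range(len(b) + 1):
        dp[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[len(a)][len(b)]

print(levenshtein("kitten", "sitting"))  # 3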
elementwise_add¶
-
paddle.fluid.layers.
elementwise_add
(x, y, axis=-1, act=None, name=None)[source] Elementwise Add Operator
The equation is:
$$Out = X + Y$$
\(X\): a tensor of any dimension.
\(Y\): a tensor whose dimensions must be less than or equal to the dimensions of \(X\).
There are two cases for this operator:
The shape of \(Y\) is the same with \(X\).
The shape of \(Y\) is a continuous subsequence of \(X\).
For case 2:
Broadcast \(Y\) to match the shape of \(X\), where \(axis\) is the start dimension index for broadcasting \(Y\) onto \(X\).
If \(axis\) is -1 (default), \(axis = rank(X) - rank(Y)\).
The trailing dimensions of size 1 for \(Y\) will be ignored for the consideration of subsequence, such as shape(Y) = (2, 1) => (2).
For example:
shape(X) = (2, 3, 4, 5), shape(Y) = (,)
shape(X) = (2, 3, 4, 5), shape(Y) = (5,)
shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5), with axis=-1(default) or axis=2
shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1
shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0
shape(X) = (2, 3, 4, 5), shape(Y) = (2, 1), with axis=0
The inputs \(X\) and \(Y\) can carry the different LoD information. But the output only shares the LoD information with the input \(X\).
- Parameters
x – (Tensor), The first input tensor of elementwise op.
y – (Tensor), The second input tensor of elementwise op.
axis (INT) – (int, default -1). The start dimension index for broadcasting Y onto X.
x_data_format (STRING) – (string, default NCHW) Only used in MKLDNN. An optional string from: “NHWC”, “NCHW”, “NCHW16C”, “NCHW8C”. Defaults to “”. Specifies the data format of the output data; the input will be transformed automatically.
y_data_format (STRING) – (string, default “”) Only used in MKLDNN. An optional string from: “NHWC”, “NCHW”, “NCHW16C”, “NCHW8C”. Defaults to “”. Specifies the data format of the output data; the input will be transformed automatically.
act (basestring|None) – Activation applied to the output.
name (basestring|None) – Name of the output.
- Returns
The output of elementwise op.
Examples
import paddle.fluid as fluid
# example 1: shape(x) = (2, 3, 4, 5), shape(y) = (2, 3, 4, 5)
x0 = fluid.layers.data(name="x0", shape=[2, 3, 4, 5], dtype='float32')
y0 = fluid.layers.data(name="y0", shape=[2, 3, 4, 5], dtype='float32')
z0 = fluid.layers.elementwise_add(x0, y0)

# example 2: shape(X) = (2, 3, 4, 5), shape(Y) = (5)
x1 = fluid.layers.data(name="x1", shape=[2, 3, 4, 5], dtype='float32')
y1 = fluid.layers.data(name="y1", shape=[5], dtype='float32')
z1 = fluid.layers.elementwise_add(x1, y1)

# example 3: shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5), with axis=-1(default) or axis=2
x2 = fluid.layers.data(name="x2", shape=[2, 3, 4, 5], dtype='float32')
y2 = fluid.layers.data(name="y2", shape=[4, 5], dtype='float32')
z2 = fluid.layers.elementwise_add(x2, y2, axis=2)

# example 4: shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1
x3 = fluid.layers.data(name="x3", shape=[2, 3, 4, 5], dtype='float32')
y3 = fluid.layers.data(name="y3", shape=[3, 4], dtype='float32')
z3 = fluid.layers.elementwise_add(x3, y3, axis=1)

# example 5: shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0
x4 = fluid.layers.data(name="x4", shape=[2, 3, 4, 5], dtype='float32')
y4 = fluid.layers.data(name="y4", shape=[2], dtype='float32')
z4 = fluid.layers.elementwise_add(x4, y4, axis=0)

# example 6: shape(X) = (2, 3, 4, 5), shape(Y) = (2, 1), with axis=0
x5 = fluid.layers.data(name="x5", shape=[2, 3, 4, 5], dtype='float32')
y5 = fluid.layers.data(name="y5", shape=[2, 1], dtype='float32')
z5 = fluid.layers.elementwise_add(x5, y5, axis=0)
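The broadcasting rule above (align Y with X starting at axis, where axis = rank(X) - rank(Y) when axis is -1) can be sketched with ordinary numpy broadcasting; the shapes below are illustrative, and this only mirrors the semantics rather than calling the fluid op.

import numpy as np

x = np.random.randn(2, 3, 4, 5)
y = np.random.randn(3, 4)          # a continuous subsequence of x's shape
axis = 1                           # y is aligned with dims 1 and 2 of x

# reshape y to (1, 3, 4, 1) so that numpy broadcasting matches what
# elementwise_add(x, y, axis=1) computes
pad_left = axis
pad_right = x.ndim - axis - y.ndim
y_broadcast = y.reshape((1,) * pad_left + y.shape + (1,) * pad_right)
out = x + y_broadcast
print(out.shape)                   # (2, 3, 4, 5)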
elementwise_div¶
-
paddle.fluid.layers.
elementwise_div
(x, y, axis=-1, act=None, name=None)[source] Elementwise Div Operator
The equation is:
$$Out = X / Y$$
\(X\): a tensor of any dimension.
\(Y\): a tensor whose dimensions must be less than or equal to the dimensions of \(X\).
There are two cases for this operator:
The shape of \(Y\) is the same with \(X\).
The shape of \(Y\) is a continuous subsequence of \(X\).
For case 2:
Broadcast \(Y\) to match the shape of \(X\), where \(axis\) is the start dimension index for broadcasting \(Y\) onto \(X\).
If \(axis\) is -1 (default), \(axis = rank(X) - rank(Y)\).
The trailing dimensions of size 1 for \(Y\) will be ignored for the consideration of subsequence, such as shape(Y) = (2, 1) => (2).
For example:
shape(X) = (2, 3, 4, 5), shape(Y) = (,)
shape(X) = (2, 3, 4, 5), shape(Y) = (5,)
shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5), with axis=-1(default) or axis=2
shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1
shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0
shape(X) = (2, 3, 4, 5), shape(Y) = (2, 1), with axis=0
The inputs \(X\) and \(Y\) can carry the different LoD information. But the output only shares the LoD information with the input \(X\).
- Parameters
x – (Tensor), The first input tensor of elementwise op.
y – (Tensor), The second input tensor of elementwise op.
axis (INT) – (int, default -1). The start dimension index for broadcasting Y onto X.
x_data_format (STRING) – (string, default NCHW) Only used in MKLDNN. An optional string from: “NHWC”, “NCHW”, “NCHW16C”, “NCHW8C”. Defaults to “”. Specifies the data format of the output data; the input will be transformed automatically.
y_data_format (STRING) – (string, default “”) Only used in MKLDNN. An optional string from: “NHWC”, “NCHW”, “NCHW16C”, “NCHW8C”. Defaults to “”. Specifies the data format of the output data; the input will be transformed automatically.
act (basestring|None) – Activation applied to the output.
name (basestring|None) – Name of the output.
- Returns
The output of elementwise op.
Examples
import paddle.fluid as fluid # example 1: shape(x) = (2, 3, 4, 5), shape(y) = (2, 3, 4, 5) x0 = fluid.layers.data(name="x0", shape=[2, 3, 4, 5], dtype='float32') y0 = fluid.layers.data(name="y0", shape=[2, 3, 4, 5], dtype='float32') z0 = fluid.layers.elementwise_div(x0, y0) # example 2: shape(X) = (2, 3, 4, 5), shape(Y) = (5) x1 = fluid.layers.data(name="x1", shape=[2, 3, 4, 5], dtype='float32') y1 = fluid.layers.data(name="y1", shape=[5], dtype='float32') z1 = fluid.layers.elementwise_div(x1, y1) # example 3: shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5), with axis=-1(default) or axis=2 x2 = fluid.layers.data(name="x2", shape=[2, 3, 4, 5], dtype='float32') y2 = fluid.layers.data(name="y2", shape=[4, 5], dtype='float32') z2 = fluid.layers.elementwise_div(x2, y2, axis=2) # example 4: shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1 x3 = fluid.layers.data(name="x3", shape=[2, 3, 4, 5], dtype='float32') y3 = fluid.layers.data(name="y3", shape=[3, 4], dtype='float32') z3 = fluid.layers.elementwise_div(x3, y3, axis=1) # example 5: shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0 x4 = fluid.layers.data(name="x4", shape=[2, 3, 4, 5], dtype='float32') y4 = fluid.layers.data(name="y4", shape=[2], dtype='float32') z4 = fluid.layers.elementwise_div(x4, y4, axis=0) # example 6: shape(X) = (2, 3, 4, 5), shape(Y) = (2, 1), with axis=0 x5 = fluid.layers.data(name="x5", shape=[2, 3, 4, 5], dtype='float32') y5 = fluid.layers.data(name="y5", shape=[2], dtype='float32') z5 = fluid.layers.elementwise_div(x5, y5, axis=0)
elementwise_floordiv¶
-
paddle.fluid.layers.
elementwise_floordiv
(x, y, axis=-1, act=None, name=None)[source] Elementwise FloorDiv Operator
The equation is:
$$Out = X // Y$$
\(X\): a tensor of any dimension.
\(Y\): a tensor whose dimensions must be less than or equal to the dimensions of \(X\).
There are two cases for this operator:
The shape of \(Y\) is the same with \(X\).
The shape of \(Y\) is a continuous subsequence of \(X\).
For case 2:
Broadcast \(Y\) to match the shape of \(X\), where \(axis\) is the start dimension index for broadcasting \(Y\) onto \(X\).
If \(axis\) is -1 (default), \(axis = rank(X) - rank(Y)\).
The trailing dimensions of size 1 for \(Y\) will be ignored for the consideration of subsequence, such as shape(Y) = (2, 1) => (2).
For example:
shape(X) = (2, 3, 4, 5), shape(Y) = (,)
shape(X) = (2, 3, 4, 5), shape(Y) = (5,)
shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5), with axis=-1(default) or axis=2
shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1
shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0
shape(X) = (2, 3, 4, 5), shape(Y) = (2, 1), with axis=0
The inputs \(X\) and \(Y\) can carry the different LoD information. But the output only shares the LoD information with the input \(X\).
- Parameters
x – (Tensor), The first input tensor of elementwise op.
y – (Tensor), The second input tensor of elementwise op.
axis (INT) – (int, default -1). The start dimension index for broadcasting Y onto X.
x_data_format (STRING) – (string, default NCHW) Only used in MKLDNN. An optional string from: “NHWC”, “NCHW”, “NCHW16C”, “NCHW8C”. Defaults to “”. Specifies the data format of the output data; the input will be transformed automatically.
y_data_format (STRING) – (string, default “”) Only used in MKLDNN. An optional string from: “NHWC”, “NCHW”, “NCHW16C”, “NCHW8C”. Defaults to “”. Specifies the data format of the output data; the input will be transformed automatically.
act (basestring|None) – Activation applied to the output.
name (basestring|None) – Name of the output.
- Returns
The output of elementwise op.
Examples
import paddle.fluid as fluid # example 1: shape(x) = (2, 3, 4, 5), shape(y) = (2, 3, 4, 5) x0 = fluid.layers.data(name="x0", shape=[2, 3, 4, 5], dtype='float32') y0 = fluid.layers.data(name="y0", shape=[2, 3, 4, 5], dtype='float32') z0 = fluid.layers.elementwise_floordiv(x0, y0) # example 2: shape(X) = (2, 3, 4, 5), shape(Y) = (5) x1 = fluid.layers.data(name="x1", shape=[2, 3, 4, 5], dtype='float32') y1 = fluid.layers.data(name="y1", shape=[5], dtype='float32') z1 = fluid.layers.elementwise_floordiv(x1, y1) # example 3: shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5), with axis=-1(default) or axis=2 x2 = fluid.layers.data(name="x2", shape=[2, 3, 4, 5], dtype='float32') y2 = fluid.layers.data(name="y2", shape=[4, 5], dtype='float32') z2 = fluid.layers.elementwise_floordiv(x2, y2, axis=2) # example 4: shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1 x3 = fluid.layers.data(name="x3", shape=[2, 3, 4, 5], dtype='float32') y3 = fluid.layers.data(name="y3", shape=[3, 4], dtype='float32') z3 = fluid.layers.elementwise_floordiv(x3, y3, axis=1) # example 5: shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0 x4 = fluid.layers.data(name="x4", shape=[2, 3, 4, 5], dtype='float32') y4 = fluid.layers.data(name="y4", shape=[2], dtype='float32') z4 = fluid.layers.elementwise_floordiv(x4, y4, axis=0) # example 6: shape(X) = (2, 3, 4, 5), shape(Y) = (2, 1), with axis=0 x5 = fluid.layers.data(name="x5", shape=[2, 3, 4, 5], dtype='float32') y5 = fluid.layers.data(name="y5", shape=[2], dtype='float32') z5 = fluid.layers.elementwise_floordiv(x5, y5, axis=0)
elementwise_max¶
-
paddle.fluid.layers.
elementwise_max
(x, y, axis=-1, act=None, name=None)[source] Elementwise Max Operator
The equation is:
$$Out = max(X, Y)$$
\(X\): a tensor of any dimension.
\(Y\): a tensor whose dimensions must be less than or equal to the dimensions of \(X\).
There are two cases for this operator:
The shape of \(Y\) is the same with \(X\).
The shape of \(Y\) is a continuous subsequence of \(X\).
For case 2:
Broadcast \(Y\) to match the shape of \(X\), where \(axis\) is the start dimension index for broadcasting \(Y\) onto \(X\).
If \(axis\) is -1 (default), \(axis = rank(X) - rank(Y)\).
The trailing dimensions of size 1 for \(Y\) will be ignored for the consideration of subsequence, such as shape(Y) = (2, 1) => (2).
For example:
shape(X) = (2, 3, 4, 5), shape(Y) = (,)
shape(X) = (2, 3, 4, 5), shape(Y) = (5,)
shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5), with axis=-1(default) or axis=2
shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1
shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0
shape(X) = (2, 3, 4, 5), shape(Y) = (2, 1), with axis=0
The inputs \(X\) and \(Y\) can carry the different LoD information. But the output only shares the LoD information with the input \(X\).
- Parameters
x – (Tensor), The first input tensor of elementwise op.
y – (Tensor), The second input tensor of elementwise op.
axis (INT) – (int, default -1). The start dimension index for broadcasting Y onto X.
x_data_format (STRING) – (string, default NCHW) Only used in MKLDNN. An optional string from: “NHWC”, “NCHW”, “NCHW16C”, “NCHW8C”. Defaults to “”. Specifies the data format of the output data; the input will be transformed automatically.
y_data_format (STRING) – (string, default “”) Only used in MKLDNN. An optional string from: “NHWC”, “NCHW”, “NCHW16C”, “NCHW8C”. Defaults to “”. Specifies the data format of the output data; the input will be transformed automatically.
act (basestring|None) – Activation applied to the output.
name (basestring|None) – Name of the output.
- Returns
The output of elementwise op.
Examples
import paddle.fluid as fluid # example 1: shape(x) = (2, 3, 4, 5), shape(y) = (2, 3, 4, 5) x0 = fluid.layers.data(name="x0", shape=[2, 3, 4, 5], dtype='float32') y0 = fluid.layers.data(name="y0", shape=[2, 3, 4, 5], dtype='float32') z0 = fluid.layers.elementwise_max(x0, y0) # example 2: shape(X) = (2, 3, 4, 5), shape(Y) = (5) x1 = fluid.layers.data(name="x1", shape=[2, 3, 4, 5], dtype='float32') y1 = fluid.layers.data(name="y1", shape=[5], dtype='float32') z1 = fluid.layers.elementwise_max(x1, y1) # example 3: shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5), with axis=-1(default) or axis=2 x2 = fluid.layers.data(name="x2", shape=[2, 3, 4, 5], dtype='float32') y2 = fluid.layers.data(name="y2", shape=[4, 5], dtype='float32') z2 = fluid.layers.elementwise_max(x2, y2, axis=2) # example 4: shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1 x3 = fluid.layers.data(name="x3", shape=[2, 3, 4, 5], dtype='float32') y3 = fluid.layers.data(name="y3", shape=[3, 4], dtype='float32') z3 = fluid.layers.elementwise_max(x3, y3, axis=1) # example 5: shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0 x4 = fluid.layers.data(name="x4", shape=[2, 3, 4, 5], dtype='float32') y4 = fluid.layers.data(name="y4", shape=[2], dtype='float32') z4 = fluid.layers.elementwise_max(x4, y4, axis=0) # example 6: shape(X) = (2, 3, 4, 5), shape(Y) = (2, 1), with axis=0 x5 = fluid.layers.data(name="x5", shape=[2, 3, 4, 5], dtype='float32') y5 = fluid.layers.data(name="y5", shape=[2], dtype='float32') z5 = fluid.layers.elementwise_max(x5, y5, axis=0)
elementwise_min¶
-
paddle.fluid.layers.
elementwise_min
(x, y, axis=-1, act=None, name=None)[source] Elementwise Min Operator
The equation is:
$$Out = min(X, Y)$$
\(X\): a tensor of any dimension.
\(Y\): a tensor whose dimensions must be less than or equal to the dimensions of \(X\).
There are two cases for this operator:
The shape of \(Y\) is the same with \(X\).
The shape of \(Y\) is a continuous subsequence of \(X\).
For case 2:
Broadcast \(Y\) to match the shape of \(X\), where \(axis\) is the start dimension index for broadcasting \(Y\) onto \(X\).
If \(axis\) is -1 (default), \(axis = rank(X) - rank(Y)\).
The trailing dimensions of size 1 for \(Y\) will be ignored for the consideration of subsequence, such as shape(Y) = (2, 1) => (2).
For example:
shape(X) = (2, 3, 4, 5), shape(Y) = (,)
shape(X) = (2, 3, 4, 5), shape(Y) = (5,)
shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5), with axis=-1(default) or axis=2
shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1
shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0
shape(X) = (2, 3, 4, 5), shape(Y) = (2, 1), with axis=0
The inputs \(X\) and \(Y\) can carry the different LoD information. But the output only shares the LoD information with the input \(X\).
- Parameters
x – (Tensor), The first input tensor of elementwise op.
y – (Tensor), The second input tensor of elementwise op.
axis (INT) – (int, default -1). The start dimension index for broadcasting Y onto X.
x_data_format (STRING) – (string, default NCHW) Only used in MKLDNN. An optional string from: “NHWC”, “NCHW”, “NCHW16C”, “NCHW8C”. Defaults to “”. Specifies the data format of the output data; the input will be transformed automatically.
y_data_format (STRING) – (string, default “”) Only used in MKLDNN. An optional string from: “NHWC”, “NCHW”, “NCHW16C”, “NCHW8C”. Defaults to “”. Specifies the data format of the output data; the input will be transformed automatically.
act (basestring|None) – Activation applied to the output.
name (basestring|None) – Name of the output.
- Returns
The output of elementwise op.
Examples
import paddle.fluid as fluid # example 1: shape(x) = (2, 3, 4, 5), shape(y) = (2, 3, 4, 5) x0 = fluid.layers.data(name="x0", shape=[2, 3, 4, 5], dtype='float32') y0 = fluid.layers.data(name="y0", shape=[2, 3, 4, 5], dtype='float32') z0 = fluid.layers.elementwise_min(x0, y0) # example 2: shape(X) = (2, 3, 4, 5), shape(Y) = (5) x1 = fluid.layers.data(name="x1", shape=[2, 3, 4, 5], dtype='float32') y1 = fluid.layers.data(name="y1", shape=[5], dtype='float32') z1 = fluid.layers.elementwise_min(x1, y1) # example 3: shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5), with axis=-1(default) or axis=2 x2 = fluid.layers.data(name="x2", shape=[2, 3, 4, 5], dtype='float32') y2 = fluid.layers.data(name="y2", shape=[4, 5], dtype='float32') z2 = fluid.layers.elementwise_min(x2, y2, axis=2) # example 4: shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1 x3 = fluid.layers.data(name="x3", shape=[2, 3, 4, 5], dtype='float32') y3 = fluid.layers.data(name="y3", shape=[3, 4], dtype='float32') z3 = fluid.layers.elementwise_min(x3, y3, axis=1) # example 5: shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0 x4 = fluid.layers.data(name="x4", shape=[2, 3, 4, 5], dtype='float32') y4 = fluid.layers.data(name="y4", shape=[2], dtype='float32') z4 = fluid.layers.elementwise_min(x4, y4, axis=0) # example 6: shape(X) = (2, 3, 4, 5), shape(Y) = (2, 1), with axis=0 x5 = fluid.layers.data(name="x5", shape=[2, 3, 4, 5], dtype='float32') y5 = fluid.layers.data(name="y5", shape=[2], dtype='float32') z5 = fluid.layers.elementwise_min(x5, y5, axis=0)
elementwise_mod¶
-
paddle.fluid.layers.
elementwise_mod
(x, y, axis=-1, act=None, name=None)[source] Elementwise Mod Operator
The equation is:
$$Out = X \% Y$$
\(X\): a tensor of any dimension.
\(Y\): a tensor whose dimensions must be less than or equal to the dimensions of \(X\).
There are two cases for this operator:
The shape of \(Y\) is the same with \(X\).
The shape of \(Y\) is a continuous subsequence of \(X\).
For case 2:
Broadcast \(Y\) to match the shape of \(X\), where \(axis\) is the start dimension index for broadcasting \(Y\) onto \(X\).
If \(axis\) is -1 (default), \(axis = rank(X) - rank(Y)\).
The trailing dimensions of size 1 for \(Y\) will be ignored for the consideration of subsequence, such as shape(Y) = (2, 1) => (2).
For example:
shape(X) = (2, 3, 4, 5), shape(Y) = (,)
shape(X) = (2, 3, 4, 5), shape(Y) = (5,)
shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5), with axis=-1(default) or axis=2
shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1
shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0
shape(X) = (2, 3, 4, 5), shape(Y) = (2, 1), with axis=0
The inputs \(X\) and \(Y\) can carry the different LoD information. But the output only shares the LoD information with the input \(X\).
- Parameters
x – (Tensor), The first input tensor of elementwise op.
y – (Tensor), The second input tensor of elementwise op.
axis (INT) – (int, default -1). The start dimension index for broadcasting Y onto X.
x_data_format (STRING) – (string, default NCHW) Only used in MKLDNN. An optional string from: “NHWC”, “NCHW”, “NCHW16C”, “NCHW8C”. Defaults to “”. Specifies the data format of the output data; the input will be transformed automatically.
y_data_format (STRING) – (string, default “”) Only used in MKLDNN. An optional string from: “NHWC”, “NCHW”, “NCHW16C”, “NCHW8C”. Defaults to “”. Specifies the data format of the output data; the input will be transformed automatically.
act (basestring|None) – Activation applied to the output.
name (basestring|None) – Name of the output.
- Returns
The output of elementwise op.
Examples
import paddle.fluid as fluid # example 1: shape(x) = (2, 3, 4, 5), shape(y) = (2, 3, 4, 5) x0 = fluid.layers.data(name="x0", shape=[2, 3, 4, 5], dtype='float32') y0 = fluid.layers.data(name="y0", shape=[2, 3, 4, 5], dtype='float32') z0 = fluid.layers.elementwise_mod(x0, y0) # example 2: shape(X) = (2, 3, 4, 5), shape(Y) = (5) x1 = fluid.layers.data(name="x1", shape=[2, 3, 4, 5], dtype='float32') y1 = fluid.layers.data(name="y1", shape=[5], dtype='float32') z1 = fluid.layers.elementwise_mod(x1, y1) # example 3: shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5), with axis=-1(default) or axis=2 x2 = fluid.layers.data(name="x2", shape=[2, 3, 4, 5], dtype='float32') y2 = fluid.layers.data(name="y2", shape=[4, 5], dtype='float32') z2 = fluid.layers.elementwise_mod(x2, y2, axis=2) # example 4: shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1 x3 = fluid.layers.data(name="x3", shape=[2, 3, 4, 5], dtype='float32') y3 = fluid.layers.data(name="y3", shape=[3, 4], dtype='float32') z3 = fluid.layers.elementwise_mod(x3, y3, axis=1) # example 5: shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0 x4 = fluid.layers.data(name="x4", shape=[2, 3, 4, 5], dtype='float32') y4 = fluid.layers.data(name="y4", shape=[2], dtype='float32') z4 = fluid.layers.elementwise_mod(x4, y4, axis=0) # example 6: shape(X) = (2, 3, 4, 5), shape(Y) = (2, 1), with axis=0 x5 = fluid.layers.data(name="x5", shape=[2, 3, 4, 5], dtype='float32') y5 = fluid.layers.data(name="y5", shape=[2], dtype='float32') z5 = fluid.layers.elementwise_mod(x5, y5, axis=0)
elementwise_mul¶
-
paddle.fluid.layers.
elementwise_mul
(x, y, axis=-1, act=None, name=None)[source] Elementwise Mul Operator
The equation is:
$$Out = X \odot Y$$
\(X\): a tensor of any dimension.
\(Y\): a tensor whose dimensions must be less than or equal to the dimensions of \(X\).
There are two cases for this operator:
The shape of \(Y\) is the same with \(X\).
The shape of \(Y\) is a continuous subsequence of \(X\).
For case 2:
Broadcast \(Y\) to match the shape of \(X\), where \(axis\) is the start dimension index for broadcasting \(Y\) onto \(X\).
If \(axis\) is -1 (default), \(axis = rank(X) - rank(Y)\).
The trailing dimensions of size 1 for \(Y\) will be ignored for the consideration of subsequence, such as shape(Y) = (2, 1) => (2).
For example:
shape(X) = (2, 3, 4, 5), shape(Y) = (,)
shape(X) = (2, 3, 4, 5), shape(Y) = (5,)
shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5), with axis=-1(default) or axis=2
shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1
shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0
shape(X) = (2, 3, 4, 5), shape(Y) = (2, 1), with axis=0
The inputs \(X\) and \(Y\) can carry the different LoD information. But the output only shares the LoD information with the input \(X\).
- Parameters
x – (Tensor), The first input tensor of elementwise op.
y – (Tensor), The second input tensor of elementwise op.
axis (INT) – (int, default -1). The start dimension index for broadcasting Y onto X.
x_data_format (STRING) – (string, default NCHW) Only used in MKLDNN. An optional string from: “NHWC”, “NCHW”, “NCHW16C”, “NCHW8C”. Defaults to “”. Specifies the data format of the output data; the input will be transformed automatically.
y_data_format (STRING) – (string, default “”) Only used in MKLDNN. An optional string from: “NHWC”, “NCHW”, “NCHW16C”, “NCHW8C”. Defaults to “”. Specifies the data format of the output data; the input will be transformed automatically.
act (basestring|None) – Activation applied to the output.
name (basestring|None) – Name of the output.
- Returns
The output of elementwise op.
Examples
import paddle.fluid as fluid # example 1: shape(x) = (2, 3, 4, 5), shape(y) = (2, 3, 4, 5) x0 = fluid.layers.data(name="x0", shape=[2, 3, 4, 5], dtype='float32') y0 = fluid.layers.data(name="y0", shape=[2, 3, 4, 5], dtype='float32') z0 = fluid.layers.elementwise_mul(x0, y0) # example 2: shape(X) = (2, 3, 4, 5), shape(Y) = (5) x1 = fluid.layers.data(name="x1", shape=[2, 3, 4, 5], dtype='float32') y1 = fluid.layers.data(name="y1", shape=[5], dtype='float32') z1 = fluid.layers.elementwise_mul(x1, y1) # example 3: shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5), with axis=-1(default) or axis=2 x2 = fluid.layers.data(name="x2", shape=[2, 3, 4, 5], dtype='float32') y2 = fluid.layers.data(name="y2", shape=[4, 5], dtype='float32') z2 = fluid.layers.elementwise_mul(x2, y2, axis=2) # example 4: shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1 x3 = fluid.layers.data(name="x3", shape=[2, 3, 4, 5], dtype='float32') y3 = fluid.layers.data(name="y3", shape=[3, 4], dtype='float32') z3 = fluid.layers.elementwise_mul(x3, y3, axis=1) # example 5: shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0 x4 = fluid.layers.data(name="x4", shape=[2, 3, 4, 5], dtype='float32') y4 = fluid.layers.data(name="y4", shape=[2], dtype='float32') z4 = fluid.layers.elementwise_mul(x4, y4, axis=0) # example 6: shape(X) = (2, 3, 4, 5), shape(Y) = (2, 1), with axis=0 x5 = fluid.layers.data(name="x5", shape=[2, 3, 4, 5], dtype='float32') y5 = fluid.layers.data(name="y5", shape=[2], dtype='float32') z5 = fluid.layers.elementwise_mul(x5, y5, axis=0)
elementwise_pow¶
-
paddle.fluid.layers.
elementwise_pow
(x, y, axis=-1, act=None, name=None)[source] Elementwise Pow Operator
The equation is:
$$Out = X ^ Y$$
\(X\): a tensor of any dimension.
\(Y\): a tensor whose dimensions must be less than or equal to the dimensions of \(X\).
There are two cases for this operator:
The shape of \(Y\) is the same as that of \(X\).
The shape of \(Y\) is a continuous subsequence of \(X\).
For case 2:
Broadcast \(Y\) to match the shape of \(X\), where \(axis\) is the start dimension index for broadcasting \(Y\) onto \(X\).
If \(axis\) is -1 (default), \(axis = rank(X) - rank(Y)\).
Trailing dimensions of size 1 in \(Y\) are ignored when matching the subsequence; for example, shape(Y) = (2, 1) is treated as (2).
For example:
shape(X) = (2, 3, 4, 5), shape(Y) = (,)
shape(X) = (2, 3, 4, 5), shape(Y) = (5,)
shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5), with axis=-1(default) or axis=2
shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1
shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0
shape(X) = (2, 3, 4, 5), shape(Y) = (2, 1), with axis=0
The inputs \(X\) and \(Y\) can carry different LoD information, but the output only shares the LoD information with the input \(X\).
- Parameters
x – (Tensor), The first input tensor of elementwise op.
y – (Tensor), The second input tensor of elementwise op.
axis (INT) – The start dimension index for broadcasting Y onto X. Default -1.
x_data_format (STRING) – Only used with MKL-DNN. An optional string from: “NHWC”, “NCHW”, “NCHW16C”, “NCHW8C”. Defaults to “”. Specifies the data format of the output data; the input will be transformed automatically.
y_data_format (STRING) – Only used with MKL-DNN. An optional string from: “NHWC”, “NCHW”, “NCHW16C”, “NCHW8C”. Defaults to “”. Specifies the data format of the output data; the input will be transformed automatically.
act (basestring|None) – Activation applied to the output.
name (basestring|None) – Name of the output.
- Returns
The output of elementwise op.
Examples
import paddle.fluid as fluid

# example 1: shape(x) = (2, 3, 4, 5), shape(y) = (2, 3, 4, 5)
x0 = fluid.layers.data(name="x0", shape=[2, 3, 4, 5], dtype='float32')
y0 = fluid.layers.data(name="y0", shape=[2, 3, 4, 5], dtype='float32')
z0 = fluid.layers.elementwise_pow(x0, y0)

# example 2: shape(X) = (2, 3, 4, 5), shape(Y) = (5)
x1 = fluid.layers.data(name="x1", shape=[2, 3, 4, 5], dtype='float32')
y1 = fluid.layers.data(name="y1", shape=[5], dtype='float32')
z1 = fluid.layers.elementwise_pow(x1, y1)

# example 3: shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5), with axis=-1(default) or axis=2
x2 = fluid.layers.data(name="x2", shape=[2, 3, 4, 5], dtype='float32')
y2 = fluid.layers.data(name="y2", shape=[4, 5], dtype='float32')
z2 = fluid.layers.elementwise_pow(x2, y2, axis=2)

# example 4: shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1
x3 = fluid.layers.data(name="x3", shape=[2, 3, 4, 5], dtype='float32')
y3 = fluid.layers.data(name="y3", shape=[3, 4], dtype='float32')
z3 = fluid.layers.elementwise_pow(x3, y3, axis=1)

# example 5: shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0
x4 = fluid.layers.data(name="x4", shape=[2, 3, 4, 5], dtype='float32')
y4 = fluid.layers.data(name="y4", shape=[2], dtype='float32')
z4 = fluid.layers.elementwise_pow(x4, y4, axis=0)

# example 6: shape(X) = (2, 3, 4, 5), shape(Y) = (2, 1), with axis=0
x5 = fluid.layers.data(name="x5", shape=[2, 3, 4, 5], dtype='float32')
y5 = fluid.layers.data(name="y5", shape=[2, 1], dtype='float32')
z5 = fluid.layers.elementwise_pow(x5, y5, axis=0)
elementwise_sub¶
-
paddle.fluid.layers.
elementwise_sub
(x, y, axis=-1, act=None, name=None)[source] Elementwise Sub Operator
The equation is:
$$Out = X - Y$$
\(X\): a tensor of any dimension.
\(Y\): a tensor whose dimensions must be less than or equal to the dimensions of \(X\).
There are two cases for this operator:
The shape of \(Y\) is the same as that of \(X\).
The shape of \(Y\) is a continuous subsequence of \(X\).
For case 2:
Broadcast \(Y\) to match the shape of \(X\), where \(axis\) is the start dimension index for broadcasting \(Y\) onto \(X\).
If \(axis\) is -1 (default), \(axis = rank(X) - rank(Y)\).
Trailing dimensions of size 1 in \(Y\) are ignored when matching the subsequence; for example, shape(Y) = (2, 1) is treated as (2).
For example:
shape(X) = (2, 3, 4, 5), shape(Y) = (,)
shape(X) = (2, 3, 4, 5), shape(Y) = (5,)
shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5), with axis=-1(default) or axis=2
shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1
shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0
shape(X) = (2, 3, 4, 5), shape(Y) = (2, 1), with axis=0
The inputs \(X\) and \(Y\) can carry different LoD information, but the output only shares the LoD information with the input \(X\).
- Parameters
x – (Tensor), The first input tensor of elementwise op.
y – (Tensor), The second input tensor of elementwise op.
axis (INT) – The start dimension index for broadcasting Y onto X. Default -1.
x_data_format (STRING) – Only used with MKL-DNN. An optional string from: “NHWC”, “NCHW”, “NCHW16C”, “NCHW8C”. Defaults to “”. Specifies the data format of the output data; the input will be transformed automatically.
y_data_format (STRING) – Only used with MKL-DNN. An optional string from: “NHWC”, “NCHW”, “NCHW16C”, “NCHW8C”. Defaults to “”. Specifies the data format of the output data; the input will be transformed automatically.
act (basestring|None) – Activation applied to the output.
name (basestring|None) – Name of the output.
- Returns
The output of elementwise op.
Examples
import paddle.fluid as fluid

# example 1: shape(x) = (2, 3, 4, 5), shape(y) = (2, 3, 4, 5)
x0 = fluid.layers.data(name="x0", shape=[2, 3, 4, 5], dtype='float32')
y0 = fluid.layers.data(name="y0", shape=[2, 3, 4, 5], dtype='float32')
z0 = fluid.layers.elementwise_sub(x0, y0)

# example 2: shape(X) = (2, 3, 4, 5), shape(Y) = (5)
x1 = fluid.layers.data(name="x1", shape=[2, 3, 4, 5], dtype='float32')
y1 = fluid.layers.data(name="y1", shape=[5], dtype='float32')
z1 = fluid.layers.elementwise_sub(x1, y1)

# example 3: shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5), with axis=-1(default) or axis=2
x2 = fluid.layers.data(name="x2", shape=[2, 3, 4, 5], dtype='float32')
y2 = fluid.layers.data(name="y2", shape=[4, 5], dtype='float32')
z2 = fluid.layers.elementwise_sub(x2, y2, axis=2)

# example 4: shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1
x3 = fluid.layers.data(name="x3", shape=[2, 3, 4, 5], dtype='float32')
y3 = fluid.layers.data(name="y3", shape=[3, 4], dtype='float32')
z3 = fluid.layers.elementwise_sub(x3, y3, axis=1)

# example 5: shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0
x4 = fluid.layers.data(name="x4", shape=[2, 3, 4, 5], dtype='float32')
y4 = fluid.layers.data(name="y4", shape=[2], dtype='float32')
z4 = fluid.layers.elementwise_sub(x4, y4, axis=0)

# example 6: shape(X) = (2, 3, 4, 5), shape(Y) = (2, 1), with axis=0
x5 = fluid.layers.data(name="x5", shape=[2, 3, 4, 5], dtype='float32')
y5 = fluid.layers.data(name="y5", shape=[2, 1], dtype='float32')
z5 = fluid.layers.elementwise_sub(x5, y5, axis=0)
elu¶
-
paddle.fluid.layers.
elu
(x, alpha=1.0, name=None)[source] ELU Activation Operator.
Applies the following element-wise computation on the input according to https://arxiv.org/abs/1511.07289.
\(out = \max(0, x) + \min(0, \alpha * (e^x - 1))\)
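As a quick illustration of the formula above, a numpy sketch (not the operator itself; elu_ref is a hypothetical helper):

import numpy as np

def elu_ref(x, alpha=1.0):
    # out = max(0, x) + min(0, alpha * (exp(x) - 1)), as in the formula above
    return np.maximum(0, x) + np.minimum(0, alpha * (np.exp(x) - 1))

x = np.array([-2.0, -0.5, 0.0, 1.5])
print(elu_ref(x, alpha=0.2))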
- Parameters
x (Variable) – Input of ELU operator
alpha (FLOAT|1.0) – The alpha value of ELU
name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
- Returns
Output of ELU operator
- Return type
output(Variable)
Examples
import paddle.fluid as fluid x = fluid.layers.data(name="x", shape=[3,10,32,32], dtype="float32") y = fluid.layers.elu(x, alpha=0.2)
embedding¶
-
paddle.fluid.layers.
embedding
(input, size, is_sparse=False, is_distributed=False, padding_idx=None, param_attr=None, dtype='float32')[source] Embedding Layer
This layer is used to look up embeddings of IDs, provided by input, in a lookup table. The result of this lookup is the embedding of each ID in the input. All the input variables are passed in as local variables to the LayerHelper constructor.
- Parameters
input (Variable) – The tensor variable containing the IDs.
size (tuple|list) – The shape of the look up table parameter. It should have two elements which indicate the size of the dictionary of embeddings and the size of each embedding vector respectively.
is_sparse (bool) – The flag indicating whether to use sparse update.
is_distributed (bool) – Whether to run lookup table from remote parameter server.
padding_idx (int|long|None) – If None, it has no effect on the lookup. Otherwise, the given padding_idx indicates padding the output with zeros whenever lookup encounters it in input. If \(padding_idx < 0\), the padding_idx to use in lookup is \(size[0] + dim\).
param_attr (ParamAttr) – Parameters for this layer.
dtype (np.dtype|core.VarDesc.VarType|str) – The type of data: float32, float16, int, etc.
- Returns
The tensor variable storing the embeddings of the supplied inputs.
- Return type
Variable
Examples
import paddle.fluid as fluid data = fluid.layers.data(name='sequence', shape=[1], dtype='int64', lod_level=1) emb = fluid.layers.embedding(input=data, size=[128, 64])
expand¶
-
paddle.fluid.layers.
expand
(x, expand_times, name=None)[source] The expand operator tiles the input by the given number of times. You should set the number of times for each dimension by providing the attribute ‘expand_times’. The rank of X should be in [1, 6]. Please note that the size of ‘expand_times’ must be the same as X’s rank. Following is a usage example:
Input(X) is a 3-D tensor with shape [2, 3, 1]: [ [[1], [2], [3]], [[4], [5], [6]] ] Attr(expand_times): [1, 2, 2] Output(Out) is a 3-D tensor with shape [2, 6, 2]: [ [[1, 1], [2, 2], [3, 3], [1, 1], [2, 2], [3, 3]], [[4, 4], [5, 5], [6, 6], [4, 4], [5, 5], [6, 6]] ]
- Parameters
x (Variable) – A tensor with rank in [1, 6].
expand_times (list|tuple) – Expand times number for each dimension.
- Returns
The expanded variable which is a LoDTensor. After expanding, the size of each dimension of Output(Out) is equal to the size of the corresponding dimension of Input(X) multiplied by the corresponding value given by expand_times.
- Return type
Variable
Examples
import paddle.fluid as fluid
x = fluid.layers.data(name='x', shape=[10], dtype='float32')
out = fluid.layers.expand(x=x, expand_times=[1, 2, 2])
fc¶
-
paddle.fluid.layers.
fc
(input, size, num_flatten_dims=1, param_attr=None, bias_attr=None, act=None, is_test=False, name=None)[source] Fully Connected Layer
This function creates a fully connected layer in the network. It can take one or multiple tensors as its inputs(input can be a list of Variable, see Args in detail). It creates a variable called weights for each input tensor, which represents a fully connected weight matrix from each input unit to each output unit. The fully connected layer multiplies each input tensor with its corresponding weight to produce an output Tensor with shape [M, size], where M is batch size. If multiple input tensors are given, the results of multiple output tensors with shape [M, size] will be summed up. If bias_attr is not None, a bias variable will be created and added to the output. Finally, if activation is not None, it will be applied to the output as well.
When the input is single tensor:
\[Out = Act({XW + b})\]
When the inputs are multiple tensors:
\[Out = Act({\sum_{i=0}^{N-1}X_iW_i + b})\]
In the above equation:
\(N\): Number of the inputs. N equals len(input) if input is a list of Variables.
\(X_i\): The i-th input tensor.
\(W_i\): The i-th weight matrix corresponding to the i-th input tensor.
\(b\): The bias parameter created by this layer (if needed).
\(Act\): The activation function.
\(Out\): The output tensor.
See below for an example.
Given: data_1.data = [[[0.1, 0.2], [0.3, 0.4]]] data_1.shape = (1, 2, 2) # 1 is batch_size data_2 = [[[0.1, 0.2, 0.3]]] data_2.shape = (1, 1, 3) out = fluid.layers.fc(input=[data_1, data_2], size=2) Then: out.data = [[0.18669507, 0.1893476]] out.shape = (1, 2)
- Parameters
input (Variable|list of Variable) – The input tensor(s) of this layer, and the dimension of the input tensor(s) is at least 2.
size (int) – The number of output units in this layer.
num_flatten_dims (int, default 1) – The fc layer can accept an input tensor with more than two dimensions. If this happens, the multidimensional tensor will first be flattened into a 2-dimensional matrix. The parameter num_flatten_dims determines how the input tensor is flattened: the first num_flatten_dims (inclusive, index starts from 1) dimensions will be flattened to form the first dimension of the final matrix (height of the matrix), and the rest rank(X) - num_flatten_dims dimensions are flattened to form the second dimension of the final matrix (width of the matrix). For example, suppose X is a 5-dimensional tensor with a shape [2, 3, 4, 5, 6], and num_flatten_dims = 3. Then, the flattened matrix will have a shape [2 x 3 x 4, 5 x 6] = [24, 30]. See the short sketch after this parameter list.
param_attr (ParamAttr|list of ParamAttr, default None) – The parameter attribute for learnable parameters/weights of this layer.
bias_attr (ParamAttr|list of ParamAttr, default None) – The parameter attribute for the bias of this layer. If it is set to False, no bias will be added to the output units. If it is set to None, the bias is initialized zero. Default: None.
act (str, default None) – Activation to be applied to the output of this layer.
is_test (bool) – A flag indicating whether execution is in test phase.
name (str, default None) – The name of this layer.
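The flattening rule for num_flatten_dims can be checked with a small numpy sketch (an illustration only; the shapes are the ones from the example above):

import numpy as np

x_shape = (2, 3, 4, 5, 6)
num_flatten_dims = 3
# height = product of the first num_flatten_dims dims, width = product of the rest
height = int(np.prod(x_shape[:num_flatten_dims]))
width = int(np.prod(x_shape[num_flatten_dims:]))
print(height, width)  # 24 30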
- Returns
The transformation result.
- Return type
Variable
- Raises
ValueError
– If rank of the input tensor is less than 2.
Examples
import paddle.fluid as fluid # when input is single tensor data = fluid.layers.data(name="data", shape=[32, 32], dtype="float32") fc = fluid.layers.fc(input=data, size=1000, act="tanh") # when input are multiple tensors data_1 = fluid.layers.data(name="data_1", shape=[32, 32], dtype="float32") data_2 = fluid.layers.data(name="data_2", shape=[24, 36], dtype="float32") fc = fluid.layers.fc(input=[data_1, data_2], size=1000, act="tanh")
flatten¶
-
paddle.fluid.layers.
flatten
(x, axis=1, name=None)[source] Flatten layer Flattens the input tensor into a 2D matrix.
For Example:
Case 1: Given X.shape = (3, 100, 100, 4) and axis = 2 We get: Out.shape = (3 * 100, 4 * 100) Case 2: Given X.shape = (3, 100, 100, 4) and axis = 0 We get: Out.shape = (1, 3 * 100 * 100 * 4)
- Parameters
x (Variable) – A tensor of rank >= axis.
axis (int) – Indicate up to which input dimensions (exclusive) should be flattened to the outer dimension of the output. The value for axis must be in the range [0, R], where R is the rank of the input tensor. When axis = 0, the shape of the output tensor is (1, d_0 * d_1 * ... * d_n), where the shape of the input tensor is (d_0, d_1, ..., d_n).
name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
- Returns
A 2D tensor with the contents of the input tensor, with input dimensions up to axis flattened to the outer dimension of the output and remaining input dimensions flattened into the inner dimension of the output.
- Return type
Variable
- Raises
ValueError
– If x is not a variable.ValueError
– If axis is not in range [0, rank(x)].
Examples
import paddle.fluid as fluid x = fluid.layers.data(name="x", shape=[4, 4, 3], dtype="float32") out = fluid.layers.flatten(x=x, axis=2)
fsp_matrix¶
-
paddle.fluid.layers.
fsp_matrix
(x, y)[source] FSP matrix op
This op is used to calculate the flow of solution procedure (FSP) matrix of two feature maps. Given feature map x with shape [x_channel, h, w] and feature map y with shape [y_channel, h, w], we can get the fsp matrix of x and y in two steps:
reshape x into matrix with shape [x_channel, h * w] and reshape and transpose y into matrix with shape [h * w, y_channel].
multiply x and y to get fsp matrix with shape [x_channel, y_channel].
The output is a batch of fsp matrices.
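A per-sample numpy sketch of the two steps above (an illustration of the described computation only; fsp_single_sample is a hypothetical helper, and any extra scaling the real op may apply is not covered here):

import numpy as np

def fsp_single_sample(x, y):
    # x: [x_channel, h, w], y: [y_channel, h, w]
    x_channel, h, w = x.shape
    y_channel = y.shape[0]
    x_mat = x.reshape(x_channel, h * w)     # step 1: reshape x
    y_mat = y.reshape(y_channel, h * w).T   # step 1: reshape and transpose y
    return x_mat.dot(y_mat)                 # step 2: [x_channel, y_channel]

x = np.random.rand(2, 4, 4).astype('float32')
y = np.random.rand(3, 4, 4).astype('float32')
print(fsp_single_sample(x, y).shape)  # (2, 3)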
- Parameters
x (Variable) – A feature map with shape [batch_size, x_channel, height, width].
y (Variable) – A feature map with shape [batch_size, y_channel, height, width]. The y_channel can be different from the x_channel of Input(X) while the other dimensions must be the same as Input(X)’s.
- Returns
The output of FSP op with shape [batch_size, x_channel, y_channel]. The x_channel is the channel of x and the y_channel is the channel of y.
- Return type
fsp matrix (Variable)
Examples
import paddle.fluid as fluid data = fluid.layers.data(name='data', shape=[3, 32, 32]) feature_map_0 = fluid.layers.conv2d(data, num_filters=2, filter_size=3) feature_map_1 = fluid.layers.conv2d(feature_map_0, num_filters=2, filter_size=1) loss = fluid.layers.fsp_matrix(feature_map_0, feature_map_1)
gather¶
-
paddle.fluid.layers.
gather
(input, index, overwrite=True)[source] Gather Layer
The output is obtained by gathering entries of the outer-most dimension of X indexed by index and concatenating them together.
\[Out = X[Index]\]Given: X = [[1, 2], [3, 4], [5, 6]] Index = [1, 2] Then: Out = [[3, 4], [5, 6]]
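The same Given/Then example expressed as a numpy sketch (an illustration only, not the operator itself):

import numpy as np

x = np.array([[1, 2], [3, 4], [5, 6]])
index = np.array([1, 2])
out = x[index]          # gather entries of the outer-most dimension of x
print(out)              # [[3 4] [5 6]]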
- Parameters
input (Variable) – The source input with rank>=1.
index (Variable) – The index input with rank=1.
overwrite (bool) – The mode for updating the gradient when the same index appears multiple times. If True, use the overwrite mode to update the gradient of the same index; if False, use the accumulate mode. Default value is True.
- Returns
The output is a tensor with the same rank as input.
- Return type
output (Variable)
Examples
import paddle.fluid as fluid x = fluid.layers.data(name='x', shape=[-1, 5], dtype='float32') index = fluid.layers.data(name='index', shape=[-1, 1], dtype='int32') output = fluid.layers.gather(x, index)
gaussian_random¶
-
paddle.fluid.layers.
gaussian_random
(shape, mean=0.0, std=1.0, seed=0, dtype='float32')[source] GaussianRandom Operator.
Used to initialize tensors with gaussian random generator.
- Parameters
shape (tuple|list) – (vector<int64_t>) The dimension of random tensor
mean (Float) – (float, default 0.0) mean of random tensor
std (Float) – (float, default 1.0) std of random tensor
seed (Int) – (int, default 0) Random seed of the generator. 0 means use a system-wide seed. Note that if seed is not 0, this operator will always generate the same random numbers every time.
dtype (np.dtype|core.VarDesc.VarType|str) – Output data type.
- Returns
Output matrix of gaussian random op
- Return type
out (Variable)
Examples
import paddle.fluid as fluid import paddle.fluid.layers as layers out = layers.gaussian_random(shape=[20, 30])
gaussian_random_batch_size_like¶
-
paddle.fluid.layers.
gaussian_random_batch_size_like
(input, shape, input_dim_idx=0, output_dim_idx=0, mean=0.0, std=1.0, seed=0, dtype='float32')[source] Used to initialize tensors with a Gaussian random generator. The default mean of the distribution is 0.0 and the default standard deviation (std) is 1.0. Users can set mean and std via the input arguments.
- Parameters
input (Variable) – Tensor whose input_dim_idx’th dimension specifies the batch_size
shape (tuple|list) – The shape of the output
input_dim_idx (Int) – default 0. The index of input’s batch size dimension
output_dim_idx (Int) – default 0. The index of output’s batch size dimension
mean (Float) – (float, default 0.0) The mean (or center) of the gaussian distribution
std (Float) – (float, default 1.0) The standard deviation (std, or spread) of the gaussian distribution
seed (Int) – (int, default 0) Random seed of the generator. 0 means use a system-wide seed. Note that if seed is not 0, this operator will always generate the same random numbers every time.
dtype (np.dtype|core.VarDesc.VarType|str) – The type of output data : float32, float_16, int etc
- Returns
A tensor of the specified shape filled with random values drawn from the Gaussian distribution.
- Return type
out (Variable)
Examples
import paddle.fluid as fluid input = fluid.layers.data(name="input", shape=[13, 11], dtype='float32') out = fluid.layers.gaussian_random_batch_size_like( input, shape=[-1, 11], mean=1.0, std=2.0)
get_tensor_from_selected_rows¶
-
paddle.fluid.layers.
get_tensor_from_selected_rows
(x, name=None)[source] GetTensorFromSelectedRows Operator
GetTensorFromSelectedRows is used to get the tensor from SelectedRows.
- Parameters
x (Variable) – The input type is SelectedRows
name (basestring|None) – Name of the output.
- Returns
The output type is LoDTensor
- Return type
out(Variable)
Examples
import paddle.fluid as fluid b = fluid.default_main_program().global_block() input = b.create_var(name="X", dtype="float32", persistable=True, type=fluid.core.VarDesc.VarType.SELECTED_ROWS) out = fluid.layers.get_tensor_from_selected_rows(input)
grid_sampler¶
-
paddle.fluid.layers.
grid_sampler
(x, grid, name=None)[source] This operation samples input X by using bilinear interpolation based on a flow field grid, which is usually generated by affine_grid . The grid of shape [N, H, W, 2] is the concatenation of (grid_x, grid_y) coordinates with shape [N, H, W] each, where grid_x indexes the 4th dimension (the width dimension) of input data x and grid_y indexes the 3rd dimension (the height dimension); the final result is the bilinear interpolation value of the 4 nearest corner points.

Step 1:
Get (x, y) grid coordinates and scale to [0, H-1/W-1].

    grid_x = 0.5 * (grid[:, :, :, 0] + 1) * (W - 1)
    grid_y = 0.5 * (grid[:, :, :, 1] + 1) * (H - 1)

Step 2:
Index input data X with grid (x, y) in each [H, W] area, and bilinearly interpolate the point value using the 4 nearest points.

    wn ------- y_n ------- en
    |           |           |
    |          d_n          |
    |           |           |
    x_w --d_w-- grid--d_e-- x_e
    |           |           |
    |          d_s          |
    |           |           |
    ws ------- y_s ------- es

    x_w = floor(x)           // west side x coord
    x_e = x_w + 1            // east side x coord
    y_n = floor(y)           // north side y coord
    y_s = y_n + 1            // south side y coord
    d_w = grid_x - x_w       // distance to west side
    d_e = x_e - grid_x       // distance to east side
    d_n = grid_y - y_n       // distance to north side
    d_s = y_s - grid_y       // distance to south side
    wn = X[:, :, y_n, x_w]   // north-west point value
    en = X[:, :, y_n, x_e]   // north-east point value
    ws = X[:, :, y_s, x_w]   // south-west point value
    es = X[:, :, y_s, x_e]   // south-east point value

    output = wn * d_e * d_s + en * d_w * d_s + ws * d_e * d_n + es * d_w * d_n
- Parameters
x (Variable) – Input data of shape [N, C, H, W].
grid (Variable) – Input grid tensor of shape [N, H, W, 2].
name (str, default None) – The name of this layer.
- Returns
Output of shape [N, C, H, W], the data sampled from input X using bilinear interpolation based on the input grid.
- Return type
Variable
Examples
import paddle.fluid as fluid x = fluid.layers.data(name='x', shape=[10, 32, 32], dtype='float32') theta = fluid.layers.data(name='theta', shape=[2, 3], dtype='float32') grid = fluid.layers.affine_grid(theta=theta, out_shape=[3, 10, 32, 32]) out = fluid.layers.grid_sampler(x=x, grid=grid)
group_norm¶
-
paddle.fluid.layers.
group_norm
(input, groups, epsilon=1e-05, param_attr=None, bias_attr=None, act=None, data_layout='NCHW', name=None)[source] Group Normalization Layer
Refer to Group Normalization .
- Parameters
input (Variable) – The input tensor variable.
groups (int) – The number of groups into which the channels are divided.
epsilon (float) – The small value added to the variance to prevent division by zero.
param_attr (ParamAttr|None) – The parameter attribute for the learnable scale \(g\). If it is set to False, no scale will be added to the output units. If it is set to None, the scale is initialized to one. Default: None.
bias_attr (ParamAttr|None) – The parameter attribute for the learnable bias \(b\). If it is set to False, no bias will be added to the output units. If it is set to None, the bias is initialized to zero. Default: None.
act (str) – Activation to be applied to the output of group normalization.
data_layout (string|NCHW) – Only NCHW is supported.
name (str) – The name of this layer. It is optional.
- Returns
A tensor variable which is the result after applying group normalization on the input.
- Return type
Variable
Examples
import paddle.fluid as fluid
data = fluid.layers.data(name='data', shape=[8, 32, 32], dtype='float32')
x = fluid.layers.group_norm(input=data, groups=4)
gru_unit¶
-
paddle.fluid.layers.
gru_unit
(input, hidden, size, param_attr=None, bias_attr=None, activation='tanh', gate_activation='sigmoid', origin_mode=False)[source] GRU unit layer
if origin_mode is True, then the equation of a gru step is from paper Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation
\[ \begin{align}\begin{aligned}u_t & = actGate(xu_{t} + W_u h_{t-1} + b_u)\\r_t & = actGate(xr_{t} + W_r h_{t-1} + b_r)\\m_t & = actNode(xm_t + W_c dot(r_t, h_{t-1}) + b_m)\\h_t & = dot(u_t, h_{t-1}) + dot((1-u_t), m_t)\end{aligned}\end{align} \]if origin_mode is False, then the equation of a gru step is from paper Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling
\[ \begin{align}\begin{aligned}u_t & = actGate(xu_{t} + W_u h_{t-1} + b_u)\\r_t & = actGate(xr_{t} + W_r h_{t-1} + b_r)\\m_t & = actNode(xm_t + W_c dot(r_t, h_{t-1}) + b_m)\\h_t & = dot((1-u_t), h_{t-1}) + dot(u_t, m_t)\end{aligned}\end{align} \]The inputs of gru unit includes \(z_t\), \(h_{t-1}\). In terms of the equation above, the \(z_t\) is split into 3 parts - \(xu_t\), \(xr_t\) and \(xm_t\). This means that in order to implement a full GRU unit operator for an input, a fully connected layer has to be applied, such that \(z_t = W_{fc}x_t\).
The terms \(u_t\) and \(r_t\) represent the update and reset gates of the GRU cell. Unlike LSTM, GRU has one fewer gate. However, there is an intermediate candidate hidden output, which is denoted by \(m_t\). This layer has three outputs: \(h_t\), \(dot(r_t, h_{t-1})\), and the concatenation of \(u_t\), \(r_t\) and \(m_t\).
- Parameters
input (Variable) – The fc transformed input value of current step.
hidden (Variable) – The hidden value of gru unit from previous step.
size (integer) – The input dimension value.
param_attr (ParamAttr|None) –
The parameter attribute for the learnable hidden-hidden weight matrix. Note:
The shape of the weight matrix is \((T \times 3D)\), where \(D\) is the hidden size.
All elements in the weight matrix can be divided into two parts. The first part are weights of the update gate and reset gate with shape \((D \times 2D)\), and the second part are weights for candidate hidden state with shape \((D \times D)\).
If it is set to None or one attribute of ParamAttr, gru_unit will create ParamAttr as param_attr. If the Initializer of the param_attr is not set, the parameter is initialized with Xavier. Default: None.
bias_attr (ParamAttr|bool|None) – The parameter attribute for the bias of GRU.Note that the bias with \((1 \times 3D)\) concatenates the bias in the update gate, reset gate and candidate calculations. If it is set to False, no bias will be applied to the update gate, reset gate and candidate calculations. If it is set to None or one attribute of ParamAttr, gru_unit will create ParamAttr as bias_attr. If the Initializer of the bias_attr is not set, the bias is initialized zero. Default: None.
activation (string) – The activation type for cell (actNode). Default: ‘tanh’
gate_activation (string) – The activation type for gates (actGate). Default: ‘sigmoid’
- Returns
The hidden value, reset-hidden value and gate values.
- Return type
tuple
Examples
import paddle.fluid as fluid dict_dim, emb_dim = 128, 64 data = fluid.layers.data(name='step_data', shape=[1], dtype='int32') emb = fluid.layers.embedding(input=data, size=[dict_dim, emb_dim]) hidden_dim = 512 x = fluid.layers.fc(input=emb, size=hidden_dim * 3) pre_hidden = fluid.layers.data( name='pre_hidden', shape=[hidden_dim], dtype='float32') hidden = fluid.layers.gru_unit( input=x, hidden=pre_hidden, size=hidden_dim * 3)
hard_sigmoid¶
-
paddle.fluid.layers.
hard_sigmoid
(x, slope=0.2, offset=0.5, name=None)[source] HardSigmoid Activation Operator.
Segment-wise linear approximation of sigmoid (https://arxiv.org/abs/1603.00391), which is much faster than sigmoid.
\(out = \max(0, \min(1, slope * x + offset))\)
The slope should be positive. The offset can be either positive or negative. The default slope and offset are set according to the above reference. It is recommended to use the defaults for this activation.
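A numpy sketch of the formula above (an illustration only; hard_sigmoid_ref is a hypothetical helper):

import numpy as np

def hard_sigmoid_ref(x, slope=0.2, offset=0.5):
    # out = max(0, min(1, slope * x + offset))
    return np.clip(slope * x + offset, 0.0, 1.0)

x = np.array([-5.0, -1.0, 0.0, 1.0, 5.0])
print(hard_sigmoid_ref(x))  # [0.  0.3 0.5 0.7 1. ]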
- Parameters
x (Variable) – Input of HardSigmoid operator
slope (FLOAT|0.2) – Slope for linear approximation of sigmoid
offset (FLOAT|0.5) – Offset for linear approximation of sigmoid
name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
- Returns
Output of HardSigmoid operator
- Return type
output(Variable)
Examples
import paddle.fluid as fluid x = fluid.layers.data(name="x", shape=[3,10,32,32], dtype="float32") y = fluid.layers.hard_sigmoid(x, slope=0.3, offset=0.8)
hash¶
-
paddle.fluid.layers.
hash
(input, hash_size, num_hash=1, name=None)[source] Hash the input to an integer whose value is less than the given hash size.
The hash algorithm we used was xxHash - Extremely fast hash algorithm (https://github.com/Cyan4973/xxHash/tree/v0.6.5)
A simple example as below:
Given: # shape [2, 2] input.data = [ [[1, 2], [3, 4]], ] input.lod = [[0, 2]] hash_size = 10000 num_hash = 4 Then: Hash op will take all number in input's 2nd dimension as hash algorithm's input for each time. Each input will be hashed for 4 times, and get an array whose length is 4. Each value in the array ranges from 0 to 9999. # shape [2, 4] output.data = [ [[9662, 9217, 1129, 8487], [8310, 1327, 1654, 4567]], ] output.lod = [[0, 2]]
- Parameters
input (Variable) – The input variable which is a one-hot word. The dimensions of the input variable must be 2.
hash_size (int) – The space size for the hash algorithm. The output value will be in the range \([0, hash\_size - 1]\).
num_hash (int) – The times of hash, default 1.
name (str, default None) – The name of this layer.
- Returns
The hash result variable which is a LoDTensor.
- Return type
Variable
Examples
import paddle.fluid as fluid import paddle.fluid.layers as layers import numpy as np titles = fluid.layers.data(name='titles', shape=[1], dtype='int32', lod_level=1) hash_r = fluid.layers.hash(name='hash_x', input=titles, num_hash=1, hash_size=1000) place = fluid.core.CPUPlace() exece = fluid.Executor(place) exece.run(fluid.default_startup_program()) # Init Tensor tensor = fluid.core.LoDTensor() tensor.set(np.random.randint(0, 10, (3, 1)).astype("int32"), place) # Set LoD tensor.set_recursive_sequence_lengths([[1, 1, 1]]) out = exece.run(feed={'titles': tensor}, fetch_list=[hash_r], return_numpy=False)
hsigmoid¶
-
paddle.fluid.layers.
hsigmoid
(input, label, num_classes, param_attr=None, bias_attr=None, name=None, path_table=None, path_code=None, is_custom=False, is_sparse=False)[source] The hierarchical sigmoid operator is used to accelerate the training process of language models. This operator organizes the classes into a complete binary tree, or you can use is_custom to pass your own tree to implement the hierarchy. Each leaf node represents a class (a word) and each internal node acts as a binary classifier. For each word there is a unique path from the root to its leaf node; hsigmoid calculates the cost for each internal node on the path and sums them to get the total cost. hsigmoid can achieve an acceleration from \(O(N)\) to \(O(logN)\), where \(N\) represents the size of the word dict.
Using the default tree, you can refer to Hierarchical Probabilistic Neural Network Language Model
If you want to use a custom tree by setting ‘is_custom’ to True, you may need to do the following things first:
use your word dict to build a binary tree; each leaf node should be a word in your word dict
build a dict to store word_id -> the word’s leaf-to-root path; we call it path_table.
build a dict to store word_id -> the code of the word’s leaf-to-root path; we call it path_code. Code means the label of each binary classification, using 1 to indicate true and 0 to indicate false.
now, each word should have its path and code along the path; you can pass a batch of paths and codes related to the same batch of inputs.
- Parameters
input (Variable) – The input tensor variable with shape \([N \times D]\), where \(N\) is the size of mini-batch, and \(D\) is the feature size.
label (Variable) – The tensor variable contains labels of training data. It’s a tensor with shape is \([N \times 1]\).
num_classes – (int) The number of classes; must not be less than 2. With the default tree this has to be set and should never be None under is_custom=False; when is_custom is True, it should be the number of non-leaf nodes, which indicates the number of classes used by the binary classifiers.
param_attr (ParamAttr|None) – The parameter attribute for learnable parameters/weights of hsigmoid. If it is set to None or one attribute of ParamAttr, hsigmoid will create ParamAttr as param_attr. If the Initializer of the param_attr is not set, the parameter is initialized with Xavier. Default: None.
bias_attr (ParamAttr|bool|None) – The parameter attribute for the bias of hsigmoid. If it is set to False, no bias will be added to the output units. If it is set to None or one attribute of ParamAttr, hsigmoid will create ParamAttr as bias_attr. If the Initializer of the bias_attr is not set, the bias is initialized zero. Default: None.
name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically. Default: None.
path_table – (Variable|None) This variable can store each batch of samples’ paths to the root; it should be in leaf -> root order. path_table should have the same shape as path_code, and for each sample i, path_table[i] indicates an np.array-like structure in which each element is an index into the parent nodes’ weight matrix.
path_code – (Variable|None) This variable can store each batch of samples’ codes; each code consists of the codes of its parent nodes. It should be in leaf -> root order.
is_custom – (bool|False) Use a user-defined binary tree instead of the default complete binary tree. If is_custom is set, you need to set path_table/path_code/num_classes; otherwise num_classes should be set.
is_sparse – (bool|False) Use sparse update instead of dense update. If set, the gradient of W and input will be sparse.
- Returns
(LodTensor) The cost of hierarchical sigmoid operator. the shape is [N, 1]
- Return type
Out
Examples
import paddle.fluid as fluid x = fluid.layers.data(name='x', shape=[2], dtype='float32') y = fluid.layers.data(name='y', shape=[1], dtype='int64') out = fluid.layers.hsigmoid(input=x, label=y, num_classes=6)
huber_loss¶
-
paddle.fluid.layers.
huber_loss
(input, label, delta)[source] Huber loss is a loss function used in robust regression. Huber loss can evaluate the fitness of input to label. Different from MSE loss, Huber loss is more robust to outliers.
When the difference between input and label is greater than delta:
\[huber\_loss = delta * (label - input) - 0.5 * delta * delta\]
When the difference between input and label is less than delta:
\[huber\_loss = 0.5 * (label - input) * (label - input)\]
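A numpy sketch of the piecewise definition above (an illustration only; it assumes the comparison is made on the absolute difference |label - input|, which is the usual Huber formulation, and huber_loss_ref is a hypothetical helper):

import numpy as np

def huber_loss_ref(inp, label, delta):
    diff = label - inp
    small = 0.5 * diff * diff                           # |diff| <= delta
    large = delta * np.abs(diff) - 0.5 * delta * delta  # |diff| >  delta
    return np.where(np.abs(diff) <= delta, small, large)

inp = np.array([0.0, 1.0, 3.0])
label = np.array([0.5, 1.2, 0.0])
print(huber_loss_ref(inp, label, delta=1.0))  # [0.125 0.02  2.5  ]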
- Parameters
input (Variable) – This input is a probability computed by the previous operator. The first dimension is batch size, and the last dimension is 1.
label (Variable) – The ground truth whose first dimension is batch size and last dimension is 1.
delta (float) – The parameter of huber loss, which controls the range of outliers
- Returns
The huber loss with shape [batch_size, 1].
- Return type
huber_loss (Variable)
Examples
import paddle.fluid as fluid x = fluid.layers.data(name='x', shape=[13], dtype='float32') predict = fluid.layers.fc(input=x, size=1) label = fluid.layers.data( name='label', shape=[1], dtype='float32') loss = fluid.layers.huber_loss( input=predict, label=label, delta=1.0)
im2sequence¶
-
paddle.fluid.layers.
im2sequence
(input, filter_size=1, stride=1, padding=0, input_image_size=None, out_stride=1, name=None)[source] Extracts image patches from the input tensor to form a tensor of shape {input.batch_size * output_height * output_width, filter_size_H * filter_size_W * input.channels}, which is similar to im2col. This op uses a filter / kernel to scan images and convert these images to sequences. After expanding, the number of time steps for an image is output_height * output_width, where output_height and output_width are calculated by the equation below:
\[output\_size = 1 + (2 * padding + img\_size - block\_size + stride - 1) / stride\]And the dimension of each time step is block_y * block_x * input.channels.
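A small Python sketch of the output-size equation above (an illustration only; im2sequence_out_size is a hypothetical helper):

def im2sequence_out_size(img_size, block_size, stride, padding):
    # output_size = 1 + (2 * padding + img_size - block_size + stride - 1) / stride
    return 1 + (2 * padding + img_size - block_size + stride - 1) // stride

# For the Given/Then example further below: 3x3 image, 2x2 filter, stride 1, no padding
print(im2sequence_out_size(3, 2, 1, 0))  # 2, i.e. 2 * 2 = 4 time steps per image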
- Parameters
input (Variable) – The input should be a tensor in NCHW format.
filter_size (int|tuple|None) – The filter size. If filter_size is a tuple, it must contain two integers, (filter_size_H, filter_size_W). Otherwise, the filter will be a square.
stride (int|tuple) – The stride size. If stride is a tuple, it must contain two integers, (stride_H, stride_W). Otherwise, the stride_H = stride_W = stride. Default: stride = 1.
padding (int|tuple) – The padding size. If padding is a tuple, it can contain two integers like (padding_H, padding_W) which means padding_up = padding_down = padding_H and padding_left = padding_right = padding_W. Or it can use (padding_up, padding_left, padding_down, padding_right) to indicate paddings of four direction. Otherwise, a scalar padding means padding_up = padding_down = padding_left = padding_right = padding Default: padding = 0.
input_image_size (Variable) – The input contains the real size of the images; its dim is [batchsize, 2]. It is optional and is just for batch inference.
out_stride (int|tuple) – The scaling of the image through the CNN. It is optional and is valid only when input_image_size is not null. If out_stride is a tuple, it must contain two integers, (out_stride_H, out_stride_W). Otherwise, out_stride_H = out_stride_W = out_stride.
name (str) – The name of this layer. It is optional.
- Returns
The output is a LoDTensor with shape {input.batch_size * output_height * output_width, filter_size_H * filter_size_W * input.channels}. If we regard output as a matrix, each row of this matrix is a step of a sequence.
- Return type
output
Examples
Given: x = [[[[ 6. 2. 1.] [ 8. 3. 5.] [ 0. 2. 6.]] [[ 2. 4. 4.] [ 6. 3. 0.] [ 6. 4. 7.]]] [[[ 6. 7. 1.] [ 5. 7. 9.] [ 2. 4. 8.]] [[ 1. 2. 1.] [ 1. 3. 5.] [ 9. 0. 8.]]]] x.dims = {2, 2, 3, 3} And: filter = [2, 2] stride = [1, 1] padding = [0, 0] Then: output.data = [[ 6. 2. 8. 3. 2. 4. 6. 3.] [ 2. 1. 3. 5. 4. 4. 3. 0.] [ 8. 3. 0. 2. 6. 3. 6. 4.] [ 3. 5. 2. 6. 3. 0. 4. 7.] [ 6. 7. 5. 7. 1. 2. 1. 3.] [ 7. 1. 7. 9. 2. 1. 3. 5.] [ 5. 7. 2. 4. 1. 3. 9. 0.] [ 7. 9. 4. 8. 3. 5. 0. 8.]] output.dims = {8, 8} output.lod = [[4, 4]]
Examples
import paddle.fluid as fluid data = fluid.layers.data(name='data', shape=[3, 32, 32], dtype='float32') output = fluid.layers.im2sequence( input=data, stride=[1, 1], filter_size=[2, 2])
image_resize¶
-
paddle.fluid.layers.
image_resize
(input, out_shape=None, scale=None, name=None, resample='BILINEAR', actual_shape=None, align_corners=True, align_mode=1)[source] Resize a Batch of Images
The input must be a tensor of the shape (num_batches, channels, in_h, in_w), and the resizing only applies on the last two dimensions (height and width).
Supporting resample methods:
‘BILINEAR’ : Bilinear interpolation
‘NEAREST’ : Nearest neighbor interpolation
Nearest neighbor interpolation is to perform nearest neighbor interpolation in both the 3rd dimension (in the height direction) and the 4th dimension (in the width direction) on the input tensor.
Bilinear interpolation is an extension of linear interpolation for interpolating functions of two variables (e.g. H-direction and W-direction in this op) on a rectilinear 2D grid. The key idea is to perform linear interpolation first in one direction, and then again in the other direction.
Align_corners and align_mode are optional parameters; the calculation method of interpolation can be selected by them.
Example:
For scale:

    if align_corners = True && out_size > 1 :
        scale_factor = (in_size - 1.0) / (out_size - 1.0)
    else:
        scale_factor = float(in_size / out_size)

Nearest neighbor interpolation:

    if align_corners = False:
        input : (N, C, H_in, W_in)
        output: (N, C, H_out, W_out) where:
            H_out = floor(H_in * scale_factor)
            W_out = floor(W_in * scale_factor)
    else: # align_corners = True
        input : (N, C, H_in, W_in)
        output: (N, C, H_out, W_out) where:
            H_out = round(H_in * scale_factor)
            W_out = round(W_in * scale_factor)

Bilinear interpolation:

    if align_corners = False, align_mode = 0:
        input : (N, C, H_in, W_in)
        output: (N, C, H_out, W_out) where:
            H_out = (H_in + 0.5) * scale_factor - 0.5
            W_out = (W_in + 0.5) * scale_factor - 0.5
    else:
        input : (N, C, H_in, W_in)
        output: (N, C, H_out, W_out) where:
            H_out = H_in * scale_factor
            W_out = W_in * scale_factor
For details of nearest neighbor interpolation, please refer to Wikipedia: https://en.wikipedia.org/wiki/Nearest-neighbor_interpolation.
For details of bilinear interpolation, please refer to Wikipedia: https://en.wikipedia.org/wiki/Bilinear_interpolation.
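A small Python sketch of the output-size rule listed above for nearest neighbor interpolation, given an input size and a scale factor (an illustration only; nearest_out_size is a hypothetical helper, and Python's built-in round() is used for the rounding step):

import math

def nearest_out_size(in_size, scale_factor, align_corners):
    # from the rules above: floor without align_corners, round with it
    if align_corners:
        return int(round(in_size * scale_factor))
    return int(math.floor(in_size * scale_factor))

print(nearest_out_size(6, 1.45, align_corners=False))  # 8
print(nearest_out_size(6, 1.45, align_corners=True))   # 9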
- Parameters
input (Variable) – The input tensor of image resize layer, This is a 4-D tensor of the shape (num_batches, channels, in_h, in_w).
out_shape (list|tuple|Variable|None) – Output shape of image resize layer, the shape is (out_h, out_w). Default: None
scale (float|None) – The multiplier for the input height or width. At least one of out_shape or scale must be set, and out_shape has a higher priority than scale. Default: None.
name (str|None) – A name for this layer (optional). If set None, the layer will be named automatically.
resample (str) – The resample method. It supports ‘BILINEAR’ and ‘NEAREST’ currently. Default: ‘BILINEAR’
actual_shape (Variable) – An optional input to specify the output shape dynamically. If provided, the image is resized according to this given shape rather than out_shape and scale; that is to say, actual_shape has the highest priority. It is recommended to use actual_shape instead of out_shape if you want to specify the output shape dynamically. When using actual_shape to specify the output shape, one of out_shape and scale should also be set, otherwise errors would occur in the graph constructing stage. Default: None
align_corners (bool) – An optional bool. If True, the centers of the 4 corner pixels of the input and output tensors are aligned, preserving the values at the corner pixels. Default: True
align_mode (int) – An optional for bilinear interpolation. can be ‘0’ for src_idx = scale*(dst_indx+0.5)-0.5 , can be ‘1’ for src_idx = scale*dst_index .
- Returns
The output is a 4-D tensor of the shape (num_batches, channels, out_h, out_w).
- Return type
Variable
- Raises
TypeError – out_shape should be a list or tuple or Variable.
TypeError – actual_shape should either be Variable or None.
ValueError – The ‘resample’ of image_resize can only be ‘BILINEAR’ or ‘NEAREST’ currently.
ValueError – One of out_shape and scale must not be None.
ValueError – out_shape length should be 2.
ValueError – scale should be greater than zero.
TypeError – align_corners should be a bool value.
ValueError – align_mode can only be ‘0’ or ‘1’.
Examples
import paddle.fluid as fluid input = fluid.layers.data(name="input", shape=[3,6,9], dtype="float32") out = fluid.layers.image_resize(input, out_shape=[12, 12], resample="NEAREST")
image_resize_short¶
-
paddle.fluid.layers.
image_resize_short
(input, out_short_len, resample='BILINEAR')[source] Resize a batch of images. The short edge of input images will be resized to the given ‘out_short_len’. The long edge of input images will be resized proportionately to make images’ length-width ratio constant.
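A small sketch of the short-edge rule described above (an illustration only; resize_short_target is a hypothetical helper, and the exact rounding used by the layer is an assumption here):

def resize_short_target(in_h, in_w, out_short_len):
    # keep the aspect ratio while setting the short edge to out_short_len
    if in_h <= in_w:
        return out_short_len, int(round(in_w * out_short_len / float(in_h)))
    return int(round(in_h * out_short_len / float(in_w))), out_short_len

print(resize_short_target(6, 9, 3))  # (3, 4), assuming round-to-nearest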
- Parameters
input (Variable) – The input tensor of image resize layer, This is a 4-D tensor of the shape (num_batches, channels, in_h, in_w).
out_short_len (int) – The length of output images’ short edge.
resample (str) – resample method, default: BILINEAR.
- Returns
The output is a 4-D tensor of the shape (num_batches, channels, out_h, out_w).
- Return type
Variable
Examples
import paddle.fluid as fluid input = fluid.layers.data(name="input", shape=[3,6,9], dtype="float32") out = fluid.layers.image_resize_short(input, out_short_len=3)
kldiv_loss¶
-
paddle.fluid.layers.
kldiv_loss
(x, target, reduction='mean', name=None)[source] This operator calculates the Kullback-Leibler divergence loss between Input(X) and Input(Target).
KL divergence loss is calculated as follows:
$$l(x, y) = y * (log(y) - x)$$
While \(x\) is Input(X) and \(y\) is Input(Target).
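A numpy sketch of the point-wise loss above, before any reduction is applied (an illustration only; kldiv_pointwise is a hypothetical helper):

import numpy as np

def kldiv_pointwise(x, target):
    # l(x, y) = y * (log(y) - x), with y = target
    return target * (np.log(target) - x)

x = np.log(np.array([0.2, 0.3, 0.5]))   # e.g. log-probabilities as Input(X)
target = np.array([0.1, 0.4, 0.5])
print(kldiv_pointwise(x, target))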
While reduction is none, the output loss is in the same shape as Input(X); the loss at each point is calculated separately and no reduction is applied.
While reduction is mean, the output loss is in shape of [1] and the loss value is the mean value of all losses.
While reduction is sum, the output loss is in shape of [1] and the loss value is the sum of all losses.
While reduction is batchmean, the output loss is in shape of [1] and the loss value is the sum of all losses divided by the batch size.
- Parameters
x (Variable) – The input tensor of KL divergence loss operator. This is a tensor with shape of [N, *], where N is the batch size, * means any number of additional dimensions
target (Variable) – The target tensor of the KL divergence loss operator. This is a tensor with the same shape as Input(X).
reduction (str) – The reduction type to apply to the output; available types are ‘none’ | ‘batchmean’ | ‘mean’ | ‘sum’: ‘none’ for no reduction, ‘batchmean’ for the sum of the output divided by the batch size, ‘mean’ for the average value of all output, ‘sum’ for the sum of the output.
name (str, default None) – The name of this layer.
- Returns
The KL divergence loss.
- Return type
kldiv_loss (Variable)
Examples
import paddle.fluid as fluid x = fluid.layers.data(name='x', shape=[4,2,2], dtype='float32') target = fluid.layers.data(name='target', shape=[4,2,2], dtype='float32') loss = fluid.layers.kldiv_loss(x=x, target=target, reduction='batchmean')
l2_normalize¶
-
paddle.fluid.layers.
l2_normalize
(x, axis, epsilon=1e-12, name=None)[source] L2 normalize Layer
The l2 normalize layer normalizes x along dimension axis using an L2 norm. For a 1-D tensor (dim is fixed to 0), this layer computes
\[y = \frac{x}{ \sqrt{\sum {x^2} + epsilon }}\]
For x with more dimensions, this layer independently normalizes each 1-D slice along dimension axis.
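A numpy sketch of the normalization along a given axis (an illustration only; l2_normalize_ref is a hypothetical helper):

import numpy as np

def l2_normalize_ref(x, axis, epsilon=1e-12):
    # y = x / sqrt(sum(x^2) + epsilon), computed independently along `axis`
    norm = np.sqrt(np.sum(np.square(x), axis=axis, keepdims=True) + epsilon)
    return x / norm

x = np.random.rand(3, 17, 13).astype('float32')
y = l2_normalize_ref(x, axis=1)
print(y.shape)  # (3, 17, 13), same shape as x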
- Parameters
x (Variable|list) – The input tensor to l2_normalize layer.
axis (int) – The axis on which to apply normalization. If axis < 0, the dimension to normalize is rank(X) + axis. -1 is the last dimension.
epsilon (float) – The epsilon value is used to avoid division by zero, the default value is 1e-12.
name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
- Returns
The output tensor variable has the same shape as x.
- Return type
Variable
Examples
import paddle.fluid as fluid data = fluid.layers.data(name="data", shape=(3, 17, 13), dtype="float32") normed = fluid.layers.l2_normalize(x=data, axis=1)
label_smooth¶
-
paddle.fluid.layers.
label_smooth
(label, prior_dist=None, epsilon=0.1, dtype='float32', name=None)[source] Label smoothing is a mechanism to regularize the classifier layer and is called label-smoothing regularization (LSR).
Label smoothing is proposed to encourage the model to be less confident, since optimizing the log-likelihood of the correct label directly may cause overfitting and reduce the ability of the model to adapt. Label smoothing replaces the ground-truth label \(y\) with the weighted sum of itself and some fixed distribution \(\mu\). For class \(k\), i.e.
\[\tilde{y_k} = (1 - \epsilon) * y_k + \epsilon * \mu_k,\]where \(1 - \epsilon\) and \(\epsilon\) are the weights respectively, and \(\tilde{y}_k\) is the smoothed label. Usually uniform distribution is used for \(\mu\).
See more details about label smoothing in https://arxiv.org/abs/1512.00567.
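A numpy sketch of the smoothing formula above with a uniform \(\mu\) (an illustration only; label_smooth_ref is a hypothetical helper):

import numpy as np

def label_smooth_ref(one_hot, epsilon=0.1):
    # y_smooth = (1 - epsilon) * y + epsilon * mu, with mu uniform over classes
    class_num = one_hot.shape[-1]
    return (1.0 - epsilon) * one_hot + epsilon * (1.0 / class_num)

y = np.eye(4)[[0, 2]]               # two one-hot labels over 4 classes
print(label_smooth_ref(y, 0.1))     # 0.925 at the true class, 0.025 elsewhere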
- Parameters
label (Variable) – The input variable containing the label data. The label data should use one-hot representation.
prior_dist (Variable) – The prior distribution to be used to smooth labels. If not provided, a uniform distribution is used. The shape of prior_dist should be \((1, class\_num)\).
epsilon (float) – The weight used to mix up the original ground-truth distribution and the fixed distribution.
dtype (np.dtype|core.VarDesc.VarType|str) – The type of data: float32, float64, int, etc.
name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
- Returns
The tensor variable containing the smoothed labels.
- Return type
Variable
Examples
import paddle.fluid as fluid import paddle.fluid.layers as layers label = layers.data(name="label", shape=[1], dtype="float32") one_hot_label = layers.one_hot(input=label, depth=10) smooth_label = layers.label_smooth( label=one_hot_label, epsilon=0.1, dtype="float32")
layer_norm¶
-
paddle.fluid.layers.
layer_norm
(input, scale=True, shift=True, begin_norm_axis=1, epsilon=1e-05, param_attr=None, bias_attr=None, act=None, name=None)[source] Assume feature vectors exist on dimensions begin_norm_axis ... rank(input) and calculate the moment statistics along these dimensions for each feature vector \(a\) with size \(H\), then normalize each feature vector using the corresponding statistics. After that, apply learnable gain and bias on the normalized tensor to scale and shift if scale and shift are set.
Refer to Layer Normalization
The formula is as follows:
\[ \begin{align}\begin{aligned}\mu & = \frac{1}{H}\sum_{i=1}^{H} a_i\\\sigma & = \sqrt{\frac{1}{H}\sum_{i=1}^{H}(a_i - \mu)^2}\\h & = f(\frac{g}{\sigma}(a - \mu) + b)\end{aligned}\end{align} \]
\(a\): the vector representation of the summed inputs to the neurons in that layer.
\(H\): the number of hidden units in a layer.
\(g\): the trainable scale parameter.
\(b\): the trainable bias parameter.
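A numpy sketch of the statistics and transform above for a single feature vector (an illustration only; g and b are set to scalars, the activation f is taken as the identity, and layer_norm_ref is a hypothetical helper):

import numpy as np

def layer_norm_ref(a, g=1.0, b=0.0, eps=1e-5):
    # mu and sigma as in the formula above; eps is added to the variance
    mu = a.mean()
    sigma = np.sqrt(((a - mu) ** 2).mean() + eps)
    return g / sigma * (a - mu) + b

a = np.array([1.0, 2.0, 3.0, 4.0])
print(layer_norm_ref(a))  # zero mean, unit variance (up to eps)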
- Parameters
input (Variable) – The input tensor variable.
scale (bool) – Whether to learn the adaptive gain \(g\) after normalization. Default True.
shift (bool) – Whether to learn the adaptive bias \(b\) after normalization. Default True.
begin_norm_axis (int) – The normalization will be performed along dimensions from begin_norm_axis to rank(input). Default 1.
epsilon (float) – The small value added to the variance to prevent division by zero. Default 1e-05.
param_attr (ParamAttr|None) – The parameter attribute for the learnable gain \(g\). If scale is False, param_attr is omitted. If scale is True and param_attr is None, a default ParamAttr would be added as scale. The param_attr is initialized as 1 if it is added. Default None.
bias_attr (ParamAttr|None) – The parameter attribute for the learnable bias \(b\). If shift is False, bias_attr is omitted. If shift is True and param_attr is None, a default ParamAttr would be added as bias. The bias_attr is initialized as 0 if it is added. Default None.
act (str) – Activation to be applied to the output of layer normalization. Default None.
name (str) – The name of this layer. It is optional. Default None, and a unique name would be generated automatically.
- Returns
Result after normalization
Examples
import paddle.fluid as fluid
data = fluid.layers.data(name='data', shape=[3, 32, 32], dtype='float32')
x = fluid.layers.layer_norm(input=data, begin_norm_axis=1)
leaky_relu¶
-
paddle.fluid.layers.
leaky_relu
(x, alpha=0.02, name=None)[source] LeakyRelu Activation Operator.
\(out = \max(x, \alpha * x)\)
- Parameters
x (Variable) – Input of LeakyRelu operator
alpha (FLOAT|0.02) – The small negative slope
name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
- Returns
Output of LeakyRelu operator
- Return type
output(Variable)
Examples
import paddle.fluid as fluid x = fluid.layers.data(name="x", shape=[2,3,16,16], dtype="float32") y = fluid.layers.leaky_relu(x, alpha=0.01)
linear_chain_crf¶
-
paddle.fluid.layers.
linear_chain_crf
(input, label, param_attr=None)[source] Linear Chain CRF.
Conditional Random Field defines an undirected probabilistic graph with nodes denoting random variables and edges denoting dependencies between these variables. CRF learns the conditional probability \(P(Y|X)\), where \(X = (x_1, x_2, ... , x_n)\) are structured inputs and \(Y = (y_1, y_2, ... , y_n)\) are labels for the inputs.
Linear chain CRF is a special case of CRF that is useful for sequence labeling task. Sequence labeling tasks do not assume a lot of conditional independences among inputs. The only constraint they impose is that the input and output must be linear sequences. Thus, the graph of such a CRF is a simple chain or a line, which results in the linear chain CRF.
This operator implements the Forward-Backward algorithm for the linear chain CRF. Please refer to http://www.cs.columbia.edu/~mcollins/fb.pdf and http://cseweb.ucsd.edu/~elkan/250Bwinter2012/loglinearCRFs.pdf for details.
Equation:
1. Denote Input(Emission) to this operator as \(x\) here.
2. The first D values of Input(Transition) to this operator are for starting weights, denoted as \(a\) here.
3. The next D values of Input(Transition) of this operator are for ending weights, denoted as \(b\) here.
4. The remaining values of Input(Transition) are for transition weights, denoted as \(w\) here.
5. Denote Input(Label) as \(s\) here.
The probability of a sequence \(s\) of length \(L\) is defined as: $$P(s) = (1/Z) exp(a_{s_1} + b_{s_L} + sum_{l=1}^L x_{s_l} + sum_{l=2}^L w_{s_{l-1},s_l})$$
where \(Z\) is a normalization value so that the sum of \(P(s)\) over all possible sequences is 1, and \(x\) is the emission feature weight to the linear chain CRF.
Finally, the linear chain CRF operator outputs the logarithm of the conditional likelihood of each training sample in a mini-batch.
NOTE:
The feature function for a CRF is made up of the emission features and the transition features. The emission feature weights are NOT computed in this operator. They MUST be computed first before this operator is called.
Because this operator performs global normalization over all possible sequences internally, it expects UNSCALED emission feature weights. Please do not call this op with the emission feature being output of any nonlinear activation.
The 2nd dimension of Input(Emission) MUST be equal to the tag number.
- Parameters
input (Variable) – (LoDTensor, default LoDTensor<float>) A 2-D LoDTensor with shape [N x D], where N is the size of the mini-batch and D is the total tag number. The unscaled emission weight matrix for the linear chain CRF.
input – (Tensor, default Tensor<float>) A 2-D Tensor with shape [(D + 2) x D]. The learnable parameter for the linear_chain_crf operator. See more details in the operator’s comments
label (Variable) – (LoDTensor, default LoDTensor<int64_t>) A LoDTensor with shape [N x 1], where N is the total element number in a mini-batch. The ground truth
param_attr (ParamAttr) – The attribute of the learnable parameter.
- Returns
(Tensor, default Tensor<float>) A 2-D Tensor with shape [N x D]. The exponentials of Input(Emission). This is an intermediate computational result in forward computation, and will be reused in backward computation
output(Variable): (Tensor, default Tensor<float>) A 2-D Tensor with shape [(D + 2) x D]. The exponentials of Input(Transition). This is an intermediate computational result in forward computation, and will be reused in backward computation
output(Variable): (Tensor, default Tensor<float>) The logarithm of the conditional likelihood of each training sample in a mini-batch. This is a 2-D tensor with shape [S x 1], where S is the sequence number in a mini-batch. Note: S is equal to the sequence number in a mini-batch. The output is no longer a LoDTensor
- Return type
output(Variable)
Examples
import paddle.fluid as fluid emission = fluid.layers.data(name='emission', shape=[1000], dtype='float32') target = fluid.layers.data(name='target', shape=[1], dtype='int32') crf_cost = fluid.layers.linear_chain_crf( input=emission, label=target, param_attr=fluid.ParamAttr( name='crfw', learning_rate=0.2))
lod_reset¶
-
paddle.fluid.layers.
lod_reset
(x, y=None, target_lod=None)[source] Set the LoD of x to a new one specified by y or target_lod. When y is provided, y.lod would be considered as the target LoD first; otherwise y.data would be considered as the target LoD. If y is not provided, the target LoD should be specified by target_lod. If the target LoD is specified by y.data or target_lod, only one-level LoD is supported.

* Example 1:

    Given a 1-level LoDTensor x:
        x.lod  = [[ 2, 3, 1 ]]
        x.data = [[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]]
        x.dims = [6, 1]
    target_lod: [4, 2]
    then we get a 1-level LoDTensor:
        out.lod  = [[4, 2]]
        out.data = [[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]]
        out.dims = [6, 1]

* Example 2:

    Given a 1-level LoDTensor x:
        x.lod  = [[2, 3, 1]]
        x.data = [[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]]
        x.dims = [6, 1]
    y is a Tensor:
        y.data = [[2, 4]]
        y.dims = [1, 3]
    then we get a 1-level LoDTensor:
        out.lod  = [[2, 4]]
        out.data = [[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]]
        out.dims = [6, 1]

* Example 3:

    Given a 1-level LoDTensor x:
        x.lod  = [[2, 3, 1]]
        x.data = [[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]]
        x.dims = [6, 1]
    y is a 2-level LoDTensor:
        y.lod  = [[2, 2], [2, 2, 1, 1]]
        y.data = [[1.1], [2.1], [3.1], [4.1], [5.1], [6.1]]
        y.dims = [6, 1]
    then we get a 2-level LoDTensor:
        out.lod  = [[2, 2], [2, 2, 1, 1]]
        out.data = [[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]]
        out.dims = [6, 1]
- Parameters
x (Variable) – Input variable which could be a Tensor or LodTensor.
y (Variable|None) – If provided, the output's LoD would be derived from y.
target_lod (list|tuple|None) – One-level LoD which should be considered as the target LoD when y is not provided.
- Returns
Output variable with LoD specified by this layer.
- Return type
Variable
- Raises
ValueError – If y and target_lod are both None.
Examples
import paddle.fluid as fluid
x = fluid.layers.data(name='x', shape=[10])
y = fluid.layers.data(name='y', shape=[10, 20], lod_level=2)
out = fluid.layers.lod_reset(x=x, y=y)
log¶
-
paddle.fluid.layers.
log
(x, name=None)[source] Calculates the natural log of the given input tensor, element-wise.
\[Out = \ln(x)\]- Parameters
x (Variable) – Input tensor.
name (str|None, default None) – A name for this layer (optional). If set None, the layer will be named automatically.
- Returns
The natural log of the input tensor computed element-wise.
- Return type
Variable
Examples
import paddle.fluid as fluid
x = fluid.layers.data(name="x", shape=[3, 4], dtype="float32")
output = fluid.layers.log(x)
log_loss¶
-
paddle.fluid.layers.
log_loss
(input, label, epsilon=0.0001, name=None)[source] Negative Log Loss Layer
This layer accepts input predictions and target label and returns the negative log loss.
\[Out = -label * \log{(input + \epsilon)} - (1 - label) * \log{(1 - input + \epsilon)}\]- Parameters
input (Variable|list) – a 2-D tensor with shape [N x 1], where N is the batch size. This input is a probability computed by the previous operator.
label (Variable|list) – the ground truth which is a 2-D tensor with shape [N x 1], where N is the batch size.
epsilon (float) – A small constant added for numerical stability. Default: 1e-4.
name (string) – the name of log_loss.
- Returns
A 2-D tensor with shape [N x 1], the negative log loss.
- Return type
Variable
Examples
import paddle.fluid as fluid
label = fluid.layers.data(name='label', shape=[1], dtype='int64')
prob = fluid.layers.data(name='prob', shape=[10], dtype='float32')
cost = fluid.layers.log_loss(input=prob, label=label)
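For a quick sanity check of the formula above, the element-wise loss can be reproduced with plain NumPy (an illustrative sketch only, not the fluid kernel; the arrays below are made up):
import numpy as np
epsilon = 1e-4
prob = np.array([[0.9], [0.2]], dtype=np.float32)   # predicted probabilities, shape [N, 1]
label = np.array([[1.0], [0.0]], dtype=np.float32)  # ground-truth labels, shape [N, 1]
# Out = -label * log(input + eps) - (1 - label) * log(1 - input + eps)
loss = -label * np.log(prob + epsilon) - (1.0 - label) * np.log(1.0 - prob + epsilon)
print(loss)  # one loss value per sample, shape [N, 1]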
logical_and¶
-
paddle.fluid.layers.
logical_and
(x, y, out=None, name=None)[source] logical_and Operator
It operates element-wise on X and Y, and returns Out. X, Y and Out are N-dim boolean tensors. Each element of Out is calculated by $$Out = X && Y$$
- Parameters
x (Variable) – (LoDTensor) Left hand operand of logical_and operator
y (Variable) – (LoDTensor) Right hand operand of logical_and operator
out (Tensor) – Output tensor of logical operation.
name (basestring|None) – Name of the output.
- Returns
(LoDTensor) n-dim bool tensor. Each element is $$Out = X && Y$$
- Return type
out(Variable)
Examples
import paddle.fluid as fluid
left = fluid.layers.data(name='left', shape=[1], dtype='int32')
right = fluid.layers.data(name='right', shape=[1], dtype='int32')
result = fluid.layers.logical_and(x=left, y=right)
logical_not¶
-
paddle.fluid.layers.
logical_not
(x, out=None, name=None)[source] logical_not Operator
It operates element-wise on X, and returns Out. X and Out are N-dim boolean tensors. Each element of Out is calculated by $$Out = !X$$
- Parameters
x (Variable) – (LoDTensor) Operand of logical_not operator
out (Tensor) – Output tensor of logical operation.
name (basestring|None) – Name of the output.
- Returns
(LoDTensor) n-dim bool tensor. Each element is $$Out = !X$$
- Return type
out(Variable)
Examples
import paddle.fluid as fluid
left = fluid.layers.data(name='left', shape=[1], dtype='int32')
result = fluid.layers.logical_not(x=left)
logical_or¶
-
paddle.fluid.layers.
logical_or
(x, y, out=None, name=None)[source] logical_or Operator
It operates element-wise on X and Y, and returns Out. X, Y and Out are N-dim boolean tensors. Each element of Out is calculated by $$Out = X || Y$$
- Parameters
x (Variable) – (LoDTensor) Left hand operand of logical_or operator
y (Variable) – (LoDTensor) Right hand operand of logical_or operator
out (Tensor) – Output tensor of logical operation.
name (basestring|None) – Name of the output.
- Returns
(LoDTensor) n-dim bool tensor. Each element is $$Out = X || Y$$
- Return type
out(Variable)
Examples
import paddle.fluid as fluid
left = fluid.layers.data(name='left', shape=[1], dtype='int32')
right = fluid.layers.data(name='right', shape=[1], dtype='int32')
result = fluid.layers.logical_or(x=left, y=right)
logical_xor¶
-
paddle.fluid.layers.
logical_xor
(x, y, out=None, name=None)[source] logical_xor Operator
It operates element-wise on X and Y, and returns Out. X, Y and Out are N-dim boolean tensors. Each element of Out is calculated by $$Out = (X || Y) && !(X && Y)$$
- Parameters
x (Variable) – (LoDTensor) Left hand operand of logical_xor operator
y (Variable) – (LoDTensor) Right hand operand of logical_xor operator
out (Tensor) – Output tensor of logical operation.
name (basestring|None) – Name of the output.
- Returns
(LoDTensor) n-dim bool tensor. Each element is $$Out = (X || Y) && !(X && Y)$$
- Return type
out(Variable)
Examples
import paddle.fluid as fluid
left = fluid.layers.data(name='left', shape=[1], dtype='int32')
right = fluid.layers.data(name='right', shape=[1], dtype='int32')
result = fluid.layers.logical_xor(x=left, y=right)
lrn¶
-
paddle.fluid.layers.
lrn
(input, n=5, k=1.0, alpha=0.0001, beta=0.75, name=None)[source] Local Response Normalization Layer. This layer performs a type of “lateral inhibition” by normalizing over local input regions.
The formula is as follows:
\[Output(i, x, y) = Input(i, x, y) / \left(k + \alpha \sum\limits^{\min(C-1, i + n/2)}_{j = \max(0, i - n/2)}(Input(j, x, y))^2\right)^{\beta}\]In the above equation:
\(n\): The number of channels to sum over.
\(k\): The offset (avoid being divided by 0).
\(alpha\): The scaling parameter.
\(beta\): The exponent parameter.
Refer to ImageNet Classification with Deep Convolutional Neural Networks
- Parameters
input (Variable) – The input tensor of this layer, and the dimension of input tensor must be 4.
n (int, default 5) – The number of channels to sum over.
k (float, default 1.0) – An offset (usually positive to avoid dividing by 0).
alpha (float, default 1e-4) – The scaling parameter.
beta (float, default 0.75) – The exponent.
name (str, default None) – A name for this operation.
- Raises
ValueError
– If rank of the input tensor is not 4.- Returns
A tensor variable storing the transformation result.
Examples
import paddle.fluid as fluid
data = fluid.layers.data(name="data", shape=[3, 112, 112], dtype="float32")
lrn = fluid.layers.lrn(input=data)
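The normalization formula above can be illustrated with a small NumPy sketch (for intuition only; it mirrors the equation, not the fluid kernel, and the helper name is made up):
import numpy as np

def lrn_reference(x, n=5, k=1.0, alpha=1e-4, beta=0.75):
    # x is an NCHW array; each channel is normalized by the squared sum
    # over a window of n neighboring channels.
    N, C, H, W = x.shape
    out = np.empty_like(x)
    for i in range(C):
        lo, hi = max(0, i - n // 2), min(C - 1, i + n // 2)
        sq_sum = (x[:, lo:hi + 1] ** 2).sum(axis=1)
        out[:, i] = x[:, i] / (k + alpha * sq_sum) ** beta
    return out

out = lrn_reference(np.random.rand(2, 8, 6, 6).astype('float32'))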
lstm¶
-
paddle.fluid.layers.
lstm
(input, init_h, init_c, max_len, hidden_size, num_layers, dropout_prob=0.0, is_bidirec=False, is_test=False, name=None, default_initializer=None, seed=-1)[source] If the device is a GPU, this op uses the cuDNN LSTM implementation.
A four-gate Long Short-Term Memory network with no peephole connections. In the forward pass, the output \(h_t\) and cell state \(c_t\) for a given iteration can be computed from the recurrent input \(h_{t-1}\), the cell input \(c_{t-1}\) and the previous layer input \(x_t\), given matrices W, R and biases bW, bR, from the following equations:
\[ \begin{align}\begin{aligned}i_t &= \sigma(W_{ix}x_{t} + W_{ih}h_{t-1} + bx_i + bh_i)\\f_t &= \sigma(W_{fx}x_{t} + W_{fh}h_{t-1} + bx_f + bh_f)\\o_t &= \sigma(W_{ox}x_{t} + W_{oh}h_{t-1} + bx_o + bh_o)\\\tilde{c_t} &= tanh(W_{cx}x_t + W_{ch}h_{t-1} + bx_c + bh_c)\\c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c_t}\\h_t &= o_t \odot tanh(c_t)\end{aligned}\end{align} \]$W$ terms denote weight matrices (e.g. $W_{ix}$ is the matrix of weights from the input gate to the input)
The b terms denote bias vectors ($bx_i$ and $bh_i$ are the input gate bias vector).
sigmoid is the logistic sigmoid function.
$i, f, o$ and $c$ are the input gate, forget gate, output gate, and cell activation vectors, respectively, all of which have the same size as the cell output activation vector $h$.
The \(\odot\) is the element-wise product of the vectors.
\(tanh\) is the activation function.
\(\tilde{c_t}\) is also called candidate hidden state, which is computed based on the current input and the previous hidden state.
Here sigmoid is the sigmoid operator: \(sigmoid(x) = 1 / (1 + e^{-x})\); * represents point-wise multiplication and X represents matrix multiplication.
- Parameters
input (Variable) – LSTM input tensor, shape MUST be ( seq_len x batch_size x input_size )
init_h (Variable) – The initial hidden state of the LSTM This is a tensor with shape ( num_layers x batch_size x hidden_size) if is_bidirec = True, shape should be ( num_layers*2 x batch_size x hidden_size)
init_c (Variable) – The initial cell state of the LSTM. This is a tensor with shape ( num_layers x batch_size x hidden_size ) if is_bidirec = True, shape should be ( num_layers*2 x batch_size x hidden_size)
max_len (int) – max length of LSTM. The first dimension of the input tensor CANNOT be greater than max_len.
hidden_size (int) – hidden size of the LSTM
num_layers (int) – total layers number of the LSTM
dropout_prob (float|0.0) – dropout probability. Dropout ONLY works between RNN layers, NOT between time steps. NO dropout is applied to the output of the last RNN layer.
is_bidirec (bool) – If it is bidirectional.
is_test (bool) – If it is in the test phase.
name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
default_initializer (Initializer|None) – The initializer used to initialize the weight. If set None, the default initializer will be used.
seed (int) – Seed for dropout in LSTM, If it’s -1, dropout will use random seed
- Returns
Three tensors, rnn_out, last_h, last_c:
rnn_out is the result of the LSTM hidden layer; its shape is (seq_len x batch_size x hidden_size). If is_bidirec is set to True, the shape will be (seq_len x batch_size x hidden_size*2).
last_h is the hidden state of the last step of the LSTM; its shape is (num_layers x batch_size x hidden_size). If is_bidirec is set to True, the shape will be (num_layers*2 x batch_size x hidden_size).
last_c(Tensor): the cell state of the last step of the LSTM; its shape is (num_layers x batch_size x hidden_size). If is_bidirec is set to True, the shape will be (num_layers*2 x batch_size x hidden_size).
- Return type
rnn_out(Tensor),last_h(Tensor),last_c(Tensor)
Examples
import paddle.fluid as fluid

emb_dim = 256
vocab_size = 10000
data = fluid.layers.data(name='x', shape=[-1, 100, 1], dtype='int32')
emb = fluid.layers.embedding(input=data, size=[vocab_size, emb_dim], is_sparse=True)
batch_size = 20
max_len = 100
dropout_prob = 0.2
input_size = 100
hidden_size = 150
num_layers = 1
init_h = fluid.layers.fill_constant([num_layers, batch_size, hidden_size], 'float32', 0.0)
init_c = fluid.layers.fill_constant([num_layers, batch_size, hidden_size], 'float32', 0.0)
rnn_out, last_h, last_c = fluid.layers.lstm(
    emb, init_h, init_c, max_len, hidden_size, num_layers, dropout_prob=dropout_prob)
lstm_unit¶
-
paddle.fluid.layers.
lstm_unit
(x_t, hidden_t_prev, cell_t_prev, forget_bias=0.0, param_attr=None, bias_attr=None, name=None)[source] Lstm unit layer. The equation of a lstm step is:
\[ \begin{align}\begin{aligned}i_t & = \sigma(W_{x_i}x_{t} + W_{h_i}h_{t-1} + b_i)\\f_t & = \sigma(W_{x_f}x_{t} + W_{h_f}h_{t-1} + b_f)\\c_t & = f_tc_{t-1} + i_t tanh (W_{x_c}x_t + W_{h_c}h_{t-1} + b_c)\\o_t & = \sigma(W_{x_o}x_{t} + W_{h_o}h_{t-1} + b_o)\\h_t & = o_t tanh(c_t)\end{aligned}\end{align} \]The inputs of lstm unit include \(x_t\), \(h_{t-1}\) and \(c_{t-1}\). The 2nd dimensions of \(h_{t-1}\) and \(c_{t-1}\) should be the same. The implementation separates the linear transformation and the non-linear transformation. Here, we take \(i_t\) as an example. The linear transformation is applied by calling a fc layer and the equation is:
\[L_{i_t} = W_{x_i}x_{t} + W_{h_i}h_{t-1} + b_i\]The non-linear transformation is applied by calling lstm_unit_op and the equation is:
\[i_t = \sigma(L_{i_t})\]This layer has two outputs including \(h_t\) and \(c_t\).
- Parameters
x_t (Variable) – The input value of current step, a 2-D tensor with shape M x N, M for batch size and N for input size.
hidden_t_prev (Variable) – The hidden value of lstm unit, a 2-D tensor with shape M x S, M for batch size and S for size of lstm unit.
cell_t_prev (Variable) – The cell value of lstm unit, a 2-D tensor with shape M x S, M for batch size and S for size of lstm unit.
forget_bias (float) – The forget bias of lstm unit.
param_attr (ParamAttr|None) – The parameter attribute for the learnable hidden-hidden weights. If it is set to None or one attribute of ParamAttr, lstm_unit will create ParamAttr as param_attr. If the Initializer of the param_attr is not set, the parameter is initialized with Xavier. Default: None.
bias_attr (ParamAttr|None) – The bias attribute for the learnable bias weights. If it is set to False, no bias will be added to the output units. If it is set to None or one attribute of ParamAttr, lstm_unit will create ParamAttr as bias_attr. If the Initializer of the bias_attr is not set, the bias is initialized zero. Default: None.
name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
- Returns
The hidden value and cell value of lstm unit.
- Return type
tuple
- Raises
ValueError
– If the ranks of x_t, hidden_t_prev and cell_t_prev are not 2, or the 1st dimensions of x_t, hidden_t_prev and cell_t_prev are not the same, or the 2nd dimensions of hidden_t_prev and cell_t_prev are not the same.
Examples
import paddle.fluid as fluid

dict_dim, emb_dim, hidden_dim = 128, 64, 512
data = fluid.layers.data(name='step_data', shape=[1], dtype='int32')
x = fluid.layers.embedding(input=data, size=[dict_dim, emb_dim])
pre_hidden = fluid.layers.data(name='pre_hidden', shape=[hidden_dim], dtype='float32')
pre_cell = fluid.layers.data(name='pre_cell', shape=[hidden_dim], dtype='float32')
hidden, cell = fluid.layers.lstm_unit(
    x_t=x, hidden_t_prev=pre_hidden, cell_t_prev=pre_cell)
margin_rank_loss¶
-
paddle.fluid.layers.
margin_rank_loss
(label, left, right, margin=0.1, name=None)[source] Margin Ranking Loss Layer for ranking problem, which compares left score and right score passed in. The ranking loss can be defined as following equation:
\[rank\_loss = max(0, -label * (left - right) + margin)\]- Parameters
label (Variable) – Indicates whether the left is ranked higher than the right or not.
left (Variable) – Ranking score for left.
right (Variable) – Ranking score for right.
margin (float) – Indicates the given margin.
name (str|None) – A name for this layer (optional). If set None, the layer will be named automatically.
- Returns
The ranking loss.
- Return type
Variable
- Raises
ValueError
– Any of label, left, and right is not a Variable.
Examples
import paddle.fluid as fluid
label = fluid.layers.data(name="label", shape=[-1, 1], dtype="float32")
left = fluid.layers.data(name="left", shape=[-1, 1], dtype="float32")
right = fluid.layers.data(name="right", shape=[-1, 1], dtype="float32")
out = fluid.layers.margin_rank_loss(label, left, right)
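The equation above can be checked with a few lines of NumPy (an illustrative sketch, assuming labels in {-1, 1}; not the fluid op itself):
import numpy as np
label = np.array([[1.0], [-1.0]])   # +1: left should rank higher, -1: right should rank higher
left = np.array([[0.8], [0.3]])
right = np.array([[0.5], [0.6]])
margin = 0.1
rank_loss = np.maximum(0.0, -label * (left - right) + margin)
print(rank_loss)  # zero when the ranking is already correct by at least the margin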
matmul¶
-
paddle.fluid.layers.
matmul
(x, y, transpose_x=False, transpose_y=False, alpha=1.0, name=None)[source] Applies matrix multiplication to two tensors.
Currently, the input tensors can have any rank, but when the rank of either input is larger than 3, the two inputs must have the same rank.
The actual behavior depends on the shapes of \(x\), \(y\) and the flag values of transpose_x and transpose_y. Specifically:
If a transpose flag is specified, the last two dimensions of the tensor are transposed. If the tensor is rank-1 of shape \([D]\), then for \(x\) it is treated as \([1, D]\) in nontransposed form and as \([D, 1]\) in transposed form, whereas for \(y\) it is the opposite: it is treated as \([D, 1]\) in nontransposed form and as \([1, D]\) in transposed form.
After transpose, the two tensors are 2-D or n-D and matrix multiplication performs in the following way.
If both are 2-D, they are multiplied like conventional matrices.
If either is n-D, it is treated as a stack of matrices residing in the last two dimensions and a batched matrix multiply supporting broadcast applies on the two tensors.
Also note that if the raw tensor \(x\) or \(y\) is rank-1 and nontransposed, the prepended or appended dimension \(1\) will be removed after matrix multiplication.
- Parameters
x (Variable) – The input variable which is a Tensor or LoDTensor.
y (Variable) – The input variable which is a Tensor or LoDTensor.
transpose_x (bool) – Whether to transpose \(x\) before multiplication.
transpose_y (bool) – Whether to transpose \(y\) before multiplication.
alpha (float) – The scale of output. Default 1.0.
name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
- Returns
The product Tensor variable.
- Return type
Variable
Examples
# Examples to clarify shapes of the inputs and output
# x: [B, ..., M, K], y: [B, ..., K, N]
# fluid.layers.matmul(x, y)  # out: [B, ..., M, N]
# x: [B, M, K], y: [B, K, N]
# fluid.layers.matmul(x, y)  # out: [B, M, N]
# x: [B, M, K], y: [K, N]
# fluid.layers.matmul(x, y)  # out: [B, M, N]
# x: [M, K], y: [K, N]
# fluid.layers.matmul(x, y)  # out: [M, N]
# x: [B, M, K], y: [K]
# fluid.layers.matmul(x, y)  # out: [B, M]
# x: [K], y: [K]
# fluid.layers.matmul(x, y)  # out: [1]
# x: [M], y: [N]
# fluid.layers.matmul(x, y, True, True)  # out: [M, N]
import paddle.fluid as fluid
x = fluid.layers.data(name='x', shape=[2, 3], dtype='float32')
y = fluid.layers.data(name='y', shape=[3, 2], dtype='float32')
out = fluid.layers.matmul(x, y, True, True)
maxout¶
-
paddle.fluid.layers.
maxout
(x, groups, name=None)[source] MaxOut Operator.
Assume the input shape is (N, Ci, H, W). The output shape is (N, Co, H, W). Then \(Co = Ci / groups\) and the operator formula is as follows:
$$ y_{si+j} = max_{k} x_{gsi + sk + j} $$ $$ g = groups $$ $$ s = \frac{input.size}{num\_channels} $$ $$ 0 \le i < \frac{num\_channels}{groups} $$ $$ 0 \le j < s $$ $$ 0 \le k < groups $$
Please refer to Paper: - Maxout Networks: http://www.jmlr.org/proceedings/papers/v28/goodfellow13.pdf - Multi-digit Number Recognition from Street View Imagery using Deep Convolutional Neural Networks: https://arxiv.org/pdf/1312.6082v4.pdf
- Parameters
x (Variable) – (Tensor) The input tensor of the maxout operator. The format of the input tensor is NCHW, where N is batch size, C is the number of channels, and H and W are the height and width of the feature.
groups (INT) – (int) Specifies how many groups the input tensor will be split into in the channel dimension. The number of output channels is the number of input channels divided by groups.
name (basestring|None) – Name of the output.
- Returns
(Tensor) The output tensor of the maxout operator. The format of the output tensor is also NCHW, where N is batch size, C is the number of channels, and H and W are the height and width of the feature.
- Return type
out(Variable)
Examples
import paddle.fluid as fluid
input = fluid.layers.data(name='data', shape=[256, 32, 32], dtype='float32')
out = fluid.layers.maxout(input, groups=2)
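The channel grouping described by the formula can be sketched in NumPy (illustrative only; the helper name is made up): each output channel is the element-wise max over `groups` consecutive input channels.
import numpy as np

def maxout_reference(x, groups):
    # x: NCHW array; split the Ci channels into Ci/groups blocks of
    # `groups` consecutive channels and take the max inside each block.
    N, Ci, H, W = x.shape
    Co = Ci // groups
    return x.reshape(N, Co, groups, H, W).max(axis=2)

out = maxout_reference(np.random.rand(2, 6, 4, 4).astype('float32'), groups=2)
print(out.shape)  # (2, 3, 4, 4)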
mean¶
-
paddle.fluid.layers.
mean
(x, name=None)[source] Mean Operator calculates the mean of all elements in X.
- Parameters
x (Variable) – (Tensor) The input of mean op
name (basestring|None) – Name of the output.
- Returns
(Tensor) The output of mean op
- Return type
out(Variable)
Examples
import paddle.fluid as fluid
input = fluid.layers.data(name='data', shape=[2, 3], dtype='float32')
mean = fluid.layers.mean(input)
mean_iou¶
-
paddle.fluid.layers.
mean_iou
(input, label, num_classes)[source] Mean Intersection-Over-Union is a common evaluation metric for semantic image segmentation, which first computes the IOU for each semantic class and then computes the average over classes. IOU is defined as follows:
\[IOU = \frac{true\_positive}{(true\_positive + false\_positive + false\_negative)}.\]The predictions are accumulated in a confusion matrix and mean-IOU is then calculated from it.
- Parameters
input (Variable) – A Tensor of prediction results for semantic labels with type int32 or int64.
label (Variable) – A Tensor of ground truth labels with type int32 or int64. Its shape should be the same as input.
num_classes (int) – The possible number of labels.
- Returns
Three variables:
mean_iou : A Tensor representing the mean intersection-over-union with shape [1].
out_wrong: A Tensor with shape [num_classes]. The wrong numbers of each class.
out_correct: A Tensor with shape [num_classes]. The correct numbers of each class.
- Return type
mean_iou (Variable),out_wrong(Variable),out_correct(Variable)
Examples
import paddle.fluid as fluid
predict = fluid.layers.data(name='predict', shape=[3, 32, 32])
label = fluid.layers.data(name='label', shape=[1])
iou, wrongs, corrects = fluid.layers.mean_iou(predict, label, num_classes=5)
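For intuition, the per-class IOU and its mean can be sketched in NumPy (a rough reference for the formula above; the fluid operator accumulates a confusion matrix and may treat classes that never appear differently):
import numpy as np

def mean_iou_reference(pred, label, num_classes):
    ious = []
    for c in range(num_classes):
        tp = np.sum((pred == c) & (label == c))
        fp = np.sum((pred == c) & (label != c))
        fn = np.sum((pred != c) & (label == c))
        denom = tp + fp + fn
        if denom > 0:
            ious.append(tp / float(denom))
    return np.mean(ious)

pred = np.array([0, 1, 1, 2, 2, 2])
label = np.array([0, 1, 2, 2, 2, 1])
print(mean_iou_reference(pred, label, num_classes=3))  # ~0.611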
merge_selected_rows¶
-
paddle.fluid.layers.
merge_selected_rows
(x, name=None)[source] MergeSelectedRows Operator.
MergeSelectedRows is used to merge the duplicated rows of the input. The output's rows have no duplicates, and their order is incremental.
Example:

    Input:
        X.rows is [0, 5, 5, 4, 19]
        X.height is 20
        X.value is:
            [[1, 1]
             [2, 2]
             [3, 3]
             [4, 4]
             [6, 6]]

    Output:
        Out.rows is [0, 4, 5, 19]
        Out.height is 20
        Out.value is:
            [[1, 1]
             [4, 4]
             [5, 5]
             [6, 6]]
- Parameters
x (Variable) – The input type is SelectedRows, and the selected rows may be duplicated
name (basestring|None) – Name of the output.
- Returns
The output type is SelectedRows, and the selected rows are not duplicated
- Return type
out(Variable)
Examples
import paddle.fluid as fluid
b = fluid.default_main_program().global_block()
var = b.create_var(
    name="X", dtype="float32", persistable=True,
    type=fluid.core.VarDesc.VarType.SELECTED_ROWS)
y = fluid.layers.merge_selected_rows(var)
mul¶
-
paddle.fluid.layers.
mul
(x, y, x_num_col_dims=1, y_num_col_dims=1, name=None)[source] Mul Operator.
This operator is used to perform matrix multiplication for input \(X\) and \(Y\).
The equation is:
$$Out = X * Y$$
Both the input \(X\) and \(Y\) can carry the LoD (Level of Details) information, or not. But the output only shares the LoD information with input \(X\).
- Parameters
x (Variable) – (Tensor), The first input tensor of mul op
y (Variable) – (Tensor), The second input tensor of mul op
x_num_col_dims (INT) – (int, default 1), The mul_op can take tensors with more than two dimensions as its inputs. If the input $X$ is a tensor with more than two dimensions, $X$ will be flattened into a two-dimensional matrix first. The flattening rule is: the first x_num_col_dims dimensions will be flattened to form the first dimension of the final matrix (the height of the matrix), and the rest rank(X) - x_num_col_dims dimensions are flattened to form the second dimension of the final matrix (the width of the matrix). As a result, the height of the flattened matrix is equal to the product of $X$'s first x_num_col_dims dimensions' sizes, and the width of the flattened matrix is equal to the product of $X$'s last rank(X) - x_num_col_dims dimensions' sizes. For example, suppose $X$ is a 5-dimensional tensor with the shape [2, 3, 4, 5, 6], and x_num_col_dims = 3. Thus, the flattened matrix will have a shape [2 x 3 x 4, 5 x 6] = [24, 30].
y_num_col_dims (INT) – (int, default 1), The mul_op can take tensors with more than two, dimensions as its inputs. If the input $Y$ is a tensor with more than two dimensions, $Y$ will be flattened into a two-dimensional matrix first. The attribute y_num_col_dims determines how $Y$ is flattened. See comments of x_num_col_dims for more details.
name (basestring|None) – Name of the output.
- Returns
(Tensor), The output tensor of mul op
- Return type
out(Variable)
Examples
import paddle.fluid as fluid
dataX = fluid.layers.data(name="dataX", append_batch_size=False, shape=[2, 5], dtype="float32")
dataY = fluid.layers.data(name="dataY", append_batch_size=False, shape=[5, 3], dtype="float32")
output = fluid.layers.mul(dataX, dataY, x_num_col_dims=1, y_num_col_dims=1)
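The flattening rule for x_num_col_dims / y_num_col_dims can be mimicked with NumPy reshapes (an illustrative sketch of the 2-D matrices that end up being multiplied; the shapes below are made up):
import numpy as np
x = np.random.rand(2, 3, 4, 5, 6).astype('float32')
y = np.random.rand(5, 6, 7).astype('float32')
x_num_col_dims, y_num_col_dims = 3, 2
x2d = x.reshape(int(np.prod(x.shape[:x_num_col_dims])), -1)  # [2*3*4, 5*6] = [24, 30]
y2d = y.reshape(int(np.prod(y.shape[:y_num_col_dims])), -1)  # [5*6, 7]   = [30, 7]
out = x2d.dot(y2d)
print(out.shape)  # (24, 7)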
multiplex¶
-
paddle.fluid.layers.
multiplex
(inputs, index)[source] Referring to the given index variable, this layer selects rows from the input variables to construct a multiplex variable. Assuming that there are \(m\) input variables and \(I_i\) represents the i-th input variable and \(i\) is in [0, \(m\)). All input variables are tensors with same shape [\(d_0\), \(d_1\), …, \(d_R\)]. Please note that rank of the input tensor should be at least 2. Each input variable will be treated as a 2-D matrix with shape [\(M\), \(N\)] where \(M\) for \(d_0\) and \(N\) for \(d_1\) * \(d_2\) * … * \(d_R\). Let \(I_i[j]\) be the j-th row of the i-th input variable. The given index variable should be a 2-D tensor with shape [\(M\), 1]. Let ID[i] be the i-th index value of the index variable. Then the output variable will be a tensor with shape [\(d_0\), \(d_1\), …, \(d_R\)]. If we treat the output tensor as a 2-D matrix with shape [\(M\), \(N\)] and let \(O[i]\) be the i-th row of the matrix, then O[i] is equal to \(I_{ID[i]}[i]\).
Ids: the index tensor.
X[0 : N - 1]: the candidate tensors for output (N >= 2).
For each index i from 0 to batchSize - 1, the output is the i-th row of the (Ids[i])-th tensor.
For i-th row of the output tensor:
$$ y[i] = x_{k}[i] $$
where \(y\) is the output tensor, \(x_{k}\) is the k-th input tensor, and \(k = Ids[i]\).
For Example:
case 1:

    Given:
        X = [[[0,0,3,4], [0,1,3,4], [0,2,4,4], [0,3,3,4]],
             [[1,0,3,4], [1,1,7,8], [1,2,4,2], [1,3,3,4]],
             [[2,0,3,4], [2,1,7,8], [2,2,4,2], [2,3,3,4]],
             [[3,0,3,4], [3,1,7,8], [3,2,4,2], [3,3,3,4]]]

        index = [3,0,1,2]

    out: [[3 0 3 4]    // X[3,0] (3 = index[0], 0 = i); i=0
          [0 1 3 4]    // X[0,1] (0 = index[1], 1 = i); i=1
          [1 2 4 2]    // X[1,2] (1 = index[2], 2 = i); i=2
          [2 3 3 4]]   // X[2,3] (2 = index[3], 3 = i); i=3

case 2:

    Given:
        X = [[[0,0,3,4], [0,1,3,4], [0,2,4,4], [0,3,3,4]],
             [[1,0,3,4], [1,1,7,8], [1,2,4,2], [1,3,3,4]]]

        index = [1,0]

    out: [[1 0 3 4]    // X[1,0] (1 = index[0], 0 = i); i=0
          [0 1 3 4]    // X[0,1] (0 = index[1], 1 = i); i=1
          [0 2 4 4]    // X[0,2] (0 = 0, 2 = i); i=2
          [0 3 3 4]]   // X[0,3] (0 = 0, 3 = i); i=3
Examples:
import paddle.fluid as fluid
x1 = fluid.layers.data(name='x1', shape=[4], dtype='float32')
x2 = fluid.layers.data(name='x2', shape=[4], dtype='float32')
index = fluid.layers.data(name='index', shape=[1], dtype='int32')
out = fluid.layers.multiplex(inputs=[x1, x2], index=index)
- Parameters
inputs (list) – A list of variables to gather from. All variables have the same shape and the rank is at least 2.
index (Variable) – Tensor<int32>, index variable which is a 2-D tensor with shape [M, 1] where M is the batch size.
- Returns
The output tensor of multiplex operator.
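The row-selection rule \(O[i] = I_{ID[i]}[i]\) can be written directly in NumPy (illustrative only; the data below is made up):
import numpy as np
x1 = np.array([[1., 2.], [3., 4.], [5., 6.]])
x2 = np.array([[10., 20.], [30., 40.], [50., 60.]])
index = np.array([[1], [0], [1]])  # shape [M, 1]
inputs = [x1, x2]
out = np.stack([inputs[int(index[i, 0])][i] for i in range(index.shape[0])])
print(out)  # [[10. 20.] [ 3.  4.] [50. 60.]]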
nce¶
-
paddle.fluid.layers.
nce
(input, label, num_total_classes, sample_weight=None, param_attr=None, bias_attr=None, num_neg_samples=None, name=None, sampler='uniform', custom_dist=None, seed=0, is_sparse=False)[source] Compute and return the noise-contrastive estimation training loss. See Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. By default this operator uses a uniform distribution for sampling.
- Parameters
input (Variable) – input variable.
label (Variable) – label.
num_total_classes (int) – Total number of classes in all samples
sample_weight (Variable|None) – A Variable of shape [batch_size, 1] storing a weight for each sample. The default weight for each sample is 1.0.
param_attr (ParamAttr|None) – The parameter attribute for learnable parameters/weights of nce. If it is set to None or one attribute of ParamAttr, nce will create ParamAttr as param_attr. If the Initializer of the param_attr is not set, the parameter is initialized with Xavier. Default: None.
bias_attr (ParamAttr|bool|None) – The parameter attribute for the bias of nce. If it is set to False, no bias will be added to the output units. If it is set to None or one attribute of ParamAttr, nce will create ParamAttr as bias_attr. If the Initializer of the bias_attr is not set, the bias is initialized zero. Default: None.
num_neg_samples (int) – The number of negative classes. The default value is 10
name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically. Default: None.
sampler (str) – The sampler used to sample classes from the negative classes. It can be 'uniform', 'log_uniform' or 'custom_dist'. default: 'uniform'.
custom_dist (float[]) – A float[] with size=num_total_classes. It is used when sampler is set to 'custom_dist'. custom_dist[i] is the probability of the i-th class being sampled. default: None.
seed (int) – The seed used in sampler. default: 0.
is_sparse (bool) – The flag indicating whether to use sparse update, the weight@GRAD and bias@GRAD will be changed to SelectedRows.
- Returns
The output nce loss.
- Return type
Variable
Examples
import paddle.fluid as fluid
import numpy as np

window_size = 5
words = []
for i in range(window_size):
    words.append(fluid.layers.data(
        name='word_{0}'.format(i), shape=[1], dtype='int64'))

dict_size = 10000
label_word = int(window_size / 2) + 1

embs = []
for i in range(window_size):
    if i == label_word:
        continue
    emb = fluid.layers.embedding(input=words[i], size=[dict_size, 32],
                                 param_attr='embed', is_sparse=True)
    embs.append(emb)

embs = fluid.layers.concat(input=embs, axis=1)
loss = fluid.layers.nce(input=embs, label=words[label_word],
                        num_total_classes=dict_size, param_attr='nce.w_0',
                        bias_attr='nce.b_0')

# or use custom distribution
dist = np.array([0.05, 0.5, 0.1, 0.3, 0.05])
loss = fluid.layers.nce(input=embs, label=words[label_word],
                        num_total_classes=5, param_attr='nce.w_1',
                        bias_attr='nce.b_1', num_neg_samples=3,
                        sampler="custom_dist", custom_dist=dist)
npair_loss¶
-
paddle.fluid.layers.
npair_loss
(anchor, positive, labels, l2_reg=0.002)[source] Npair Loss Layer
Read Improved Deep Metric Learning with Multi class N pair Loss Objective .
Npair loss requires paired data. Npair loss has two parts: the first part is L2 regularizer on the embedding vector; the second part is cross entropy loss which takes the similarity matrix of anchor and positive as logits.
- Parameters
anchor (Variable) – embedding vector for the anchor image. shape=[batch_size, embedding_dims]
positive (Variable) – embedding vector for the positive image. shape=[batch_size, embedding_dims]
labels (Variable) – 1-D tensor. shape=[batch_size]
l2_reg (float32) – L2 regularization term on embedding vector, default: 0.002
- Returns
return npair loss, shape=[1]
- Return type
npair loss(Variable)
Examples
import paddle.fluid as fluid
anchor = fluid.layers.data(
    name='anchor', shape=[18, 6], dtype='float32', append_batch_size=False)
positive = fluid.layers.data(
    name='positive', shape=[18, 6], dtype='float32', append_batch_size=False)
labels = fluid.layers.data(
    name='labels', shape=[18], dtype='float32', append_batch_size=False)
npair_loss = fluid.layers.npair_loss(anchor, positive, labels, l2_reg=0.002)
one_hot¶
-
paddle.fluid.layers.
one_hot
(input, depth)[source] This layer creates the one-hot representations for input indices.
- Parameters
input (Variable) – Input indices, last dimension must be 1.
depth (scalar) – An integer defining the depth of the one-hot dimension.
- Returns
The one-hot representations of input.
- Return type
Variable
Examples
import paddle.fluid as fluid label = fluid.layers.data(name="label", shape=[1], dtype="int64") one_hot_label = fluid.layers.one_hot(input=label, depth=10)
pad¶
-
paddle.fluid.layers.
pad
(x, paddings, pad_value=0.0, name=None)[source] Pads a tensor with a constant value given by pad_value, and the padded width is specified by paddings.
Specifically, the number of values padded before the contents of x in dimension i is indicated by paddings[2*i], and the number of values padded after the contents of x in dimension i is indicated by paddings[2*i+1]. See below for an example.
Given:
    x = [[1, 2], [3, 4]]
    paddings = [0, 1, 1, 2]
    pad_value = 0

Return:
    out = [[0, 1, 2, 0, 0]
           [0, 3, 4, 0, 0]
           [0, 0, 0, 0, 0]]
- Parameters
x (Variable) – The input tensor variable.
paddings (list) – A list of integers. Its elements specify the padded width before and after for each dimension in turn. The length of paddings must be \(rank(x) \times 2\).
pad_value (float) – The constant value used to pad.
name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
- Returns
The padded tensor variable.
- Return type
Variable
Examples
# x is a rank 2 tensor variable.
import paddle.fluid as fluid
x = fluid.layers.data(name='data', shape=[224], dtype='float32')
out = fluid.layers.pad(x=x, paddings=[0, 1, 1, 2], pad_value=0.)
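The example above can be reproduced with NumPy's np.pad, which makes the paddings layout explicit (illustrative only; paddings[2*i] values go before and paddings[2*i+1] values go after dimension i):
import numpy as np
x = np.array([[1, 2], [3, 4]])
paddings = [0, 1, 1, 2]
pad_width = [(paddings[2 * i], paddings[2 * i + 1]) for i in range(x.ndim)]
out = np.pad(x, pad_width, mode='constant', constant_values=0)
print(out)
# [[0 1 2 0 0]
#  [0 3 4 0 0]
#  [0 0 0 0 0]]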
pad2d¶
-
paddle.fluid.layers.
pad2d
(input, paddings=[0, 0, 0, 0], mode='constant', pad_value=0.0, data_format='NCHW', name=None)[source] Pad 2-d images according to 'paddings' and 'mode'. If mode is 'reflect', paddings[0] and paddings[1] must be no greater than height-1, and the width dimension has the same condition.
Example
Given that X is a channel of an image from the input:

    X = [[1, 2, 3],
         [4, 5, 6]]

Case 0:
    paddings = [0, 1, 2, 3], mode = 'constant', pad_value = 0
    Out = [[0, 0, 1, 2, 3, 0, 0, 0]
           [0, 0, 4, 5, 6, 0, 0, 0]
           [0, 0, 0, 0, 0, 0, 0, 0]]

Case 1:
    paddings = [0, 1, 2, 1], mode = 'reflect'
    Out = [[3, 2, 1, 2, 3, 2]
           [6, 5, 4, 5, 6, 5]
           [3, 2, 1, 2, 3, 2]]

Case 2:
    paddings = [0, 1, 2, 1], mode = 'edge'
    Out = [[1, 1, 1, 2, 3, 3]
           [4, 4, 4, 5, 6, 6]
           [4, 4, 4, 5, 6, 6]]
- Parameters
input (Variable) – The input image with [N, C, H, W] format or [N, H, W, C] format.
paddings (tuple|list|Variable) – The padding size. If padding is a tuple, it must contain four integers, (padding_top, padding_bottom, padding_left, padding_right). Default: padding = [0, 0, 0, 0].
mode (str) – Three modes: constant(default), reflect, edge. Default: constant
pad_value (float32) – The value to fill the padded areas in constant mode. Default: 0
data_format (str) – An optional string from: “NHWC”, “NCHW”. Specify the data format of the input data. Default: “NCHW”
name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
- Returns
The tensor variable padded according to paddings and mode.
- Return type
Variable
Examples
import paddle.fluid as fluid
data = fluid.layers.data(name='data', shape=[3, 32, 32], dtype='float32')
result = fluid.layers.pad2d(input=data, paddings=[1, 2, 3, 4], mode='reflect')
pad_constant_like¶
-
paddle.fluid.layers.
pad_constant_like
(x, y, pad_value=0.0, name=None)[source] Pad input(Y) with pad_value. The number of values padded to the edges of each axis is specified by the difference of the shapes of X and Y: ((0, shape_x_0 - shape_y_0), ..., (0, shape_x_n - shape_y_n)) gives the unique pad widths for each axis. The input should be a k-D tensor (k > 0 and k < 7). See below for an example.
Given:
    X = [[[[ 0,  1,  2], [ 3,  4,  5]],
          [[ 6,  7,  8], [ 9, 10, 11]],
          [[12, 13, 14], [15, 16, 17]]],
         [[[18, 19, 20], [21, 22, 23]],
          [[24, 25, 26], [27, 28, 29]],
          [[30, 31, 32], [33, 34, 35]]]]
    X.shape = (2, 3, 2, 3)

    Y = [[[[35, 36, 37]],
          [[38, 39, 40]],
          [[41, 42, 43]]]]
    Y.shape = (1, 3, 1, 3)

    And pad_value = -1.

Return:
    Out = [[[[35, 36, 37], [-1, -1, -1]],
            [[38, 39, 40], [-1, -1, -1]],
            [[41, 42, 43], [-1, -1, -1]]],
           [[[-1, -1, -1], [-1, -1, -1]],
            [[-1, -1, -1], [-1, -1, -1]],
            [[-1, -1, -1], [-1, -1, -1]]]]
    Out.shape = (2, 3, 2, 3)
- Parameters
x (Variable) – The input tensor variable.
y (Variable) – The input tensor variable.
pad_value (float) – The constant value used to pad.
name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
- Returns
The padded tensor variable.
- Return type
Variable
Examples
# x is a rank 4 tensor variable, x.shape = (2, 3, 2, 3)
# y is a rank 4 tensor variable, y.shape = (1, 3, 1, 3)
import paddle.fluid as fluid
x = fluid.layers.data(name='x', shape=[2, 3, 2, 3], dtype='float32')
y = fluid.layers.data(name='y', shape=[1, 3, 1, 3], dtype='float32')
out = fluid.layers.pad_constant_like(x=x, y=y, pad_value=0.)
# out is a rank 4 tensor variable, and out.shape = [2, 3, 2, 3]
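The shape-difference padding can be sketched with NumPy: y is padded only at the end of every axis until it reaches x's shape (illustrative only, not the fluid kernel):
import numpy as np
x = np.zeros((2, 3, 2, 3), dtype=np.float32)
y = np.arange(9, dtype=np.float32).reshape(1, 3, 1, 3)
pad_value = -1.0
pad_width = [(0, sx - sy) for sx, sy in zip(x.shape, y.shape)]
out = np.pad(y, pad_width, mode='constant', constant_values=pad_value)
print(out.shape)  # (2, 3, 2, 3), same as x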
pixel_shuffle¶
-
paddle.fluid.layers.
pixel_shuffle
(x, upscale_factor)[source] Pixel Shuffle Layer
This layer rearranges elements in a tensor of shape [N, C, H, W] to a tensor of shape [N, C/r**2, H*r, W*r]. This is useful for implementing efficient sub-pixel convolution with a stride of 1/r. Please refer to the paper: Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network, by Shi et al. (2016), for more details.
Given a 4-D tensor with the shape:
    x.shape = [1, 9, 4, 4]
Given upscale_factor:
    upscale_factor = 3
The output shape is:
    [1, 1, 12, 12]
- Parameters
x (Variable) – The input tensor variable.
upscale_factor (int) – factor to increase spatial resolution
- Returns
Reshaped tensor according to the new dimension.
- Return type
Out(Variable)
- Raises
ValueError
– If the square of upscale_factor cannot divide the channels of input.
Examples
import paddle.fluid as fluid
input = fluid.layers.data(name="input", shape=[9, 4, 4])
output = fluid.layers.pixel_shuffle(x=input, upscale_factor=3)
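The rearrangement itself can be written as a reshape/transpose in NumPy (an illustrative sketch following the common sub-pixel convention; the exact ordering inside each r x r block should be checked against the operator):
import numpy as np

def pixel_shuffle_reference(x, r):
    # [N, C*r*r, H, W] -> [N, C, H*r, W*r]
    N, Cr2, H, W = x.shape
    C = Cr2 // (r * r)
    x = x.reshape(N, C, r, r, H, W)
    x = x.transpose(0, 1, 4, 2, 5, 3)  # N, C, H, r, W, r
    return x.reshape(N, C, H * r, W * r)

out = pixel_shuffle_reference(np.random.rand(1, 9, 4, 4).astype('float32'), r=3)
print(out.shape)  # (1, 1, 12, 12)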
pool2d¶
-
paddle.fluid.layers.
pool2d
(input, pool_size=-1, pool_type='max', pool_stride=1, pool_padding=0, global_pooling=False, use_cudnn=True, ceil_mode=False, name=None, exclusive=True)[source] The pooling2d operation calculates the output based on the input, pooling_type and the ksize, strides, paddings parameters. Input(X) and output(Out) are in NCHW format, where N is batch size, C is the number of channels, H is the height of the feature, and W is the width of the feature. The parameters ksize, strides and paddings each contain two elements, which represent height and width, respectively. The input(X) size and output(Out) size may be different.
Example:
Input:
X shape: \((N, C, H_{in}, W_{in})\)
Output:
Out shape: \((N, C, H_{out}, W_{out})\)
For ceil_mode = false: $$ H_{out} = \frac{(H_{in} - ksize[0] + 2 * paddings[0])}{strides[0]} + 1 $$ $$ W_{out} = \frac{(W_{in} - ksize[1] + 2 * paddings[1])}{strides[1]} + 1 $$ For ceil_mode = true: $$ H_{out} = \frac{(H_{in} - ksize[0] + 2 * paddings[0] + strides[0] - 1)}{strides[0]} + 1 $$ $$ W_{out} = \frac{(W_{in} - ksize[1] + 2 * paddings[1] + strides[1] - 1)}{strides[1]} + 1 $$
For exclusive = false: $$ hstart = i * strides[0] - paddings[0] $$ $$ hend = hstart + ksize[0] $$ $$ wstart = j * strides[1] - paddings[1] $$ $$ wend = wstart + ksize[1] $$ $$ Output(i ,j) = \frac{sum(Input[hstart:hend, wstart:wend])}{ksize[0] * ksize[1]} $$
For exclusive = true: $$ hstart = max(0, i * strides[0] - paddings[0]) $$ $$ hend = min(H, hstart + ksize[0]) $$ $$ wstart = max(0, j * strides[1] - paddings[1]) $$ $$ wend = min(W, wstart + ksize[1]) $$ $$ Output(i ,j) = \frac{sum(Input[hstart:hend, wstart:wend])}{(hend - hstart) * (wend - wstart)} $$
- Parameters
input (Variable) – The input tensor of pooling operator. The format of input tensor is NCHW, where N is batch size, C is the number of channels, H is the height of the feature, and W is the width of the feature.
pool_size (int|list|tuple) – The pool kernel size. If pool kernel size is a tuple or list, it must contain two integers, (pool_size_Height, pool_size_Width). Otherwise, the pool kernel size will be a square of an int.
pool_type – (string), pooling type, can be “max” for max-pooling and “avg” for average-pooling
pool_stride (int|list|tuple) – The pool stride size. If pool stride size is a tuple or list, it must contain two integers, (pool_stride_Height, pool_stride_Width). Otherwise, the pool stride size will be a square of an int.
pool_padding (int|list|tuple) – The pool padding size. If pool padding size is a tuple, it must contain two integers, (pool_padding_on_Height, pool_padding_on_Width). Otherwise, the pool padding size will be a square of an int.
global_pooling (bool) – (bool, default false) Whether to use the global pooling. If global_pooling = true, kernel size and paddings will be ignored
use_cudnn (bool) – Only used in the cuDNN kernel; cuDNN needs to be installed. Default: True.
ceil_mode (bool) – (bool, default false) Whether to use the ceil function to calculate output height and width. False is the default. If it is set to False, the floor function will be used
name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
exclusive (bool) – Whether to exclude padding points in average pooling mode, default is true
- Returns
The pooling result.
- Return type
Variable
- Raises
ValueError
– If ‘pool_type’ is not “max” nor “avg”ValueError
– If ‘global_pooling’ is False and ‘pool_size’ is -1ValueError
– If ‘use_cudnn’ is not a bool value.
Examples
import paddle.fluid as fluid
data = fluid.layers.data(name='data', shape=[3, 32, 32], dtype='float32')
pool2d = fluid.layers.pool2d(
    input=data, pool_size=2, pool_type='max', pool_stride=1, global_pooling=False)
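The H_out / W_out formulas above can be captured in a tiny helper (illustrative only; the function name is made up):
def pool2d_out_size(in_size, ksize, stride, padding, ceil_mode=False):
    # mirrors the ceil_mode = false / ceil_mode = true formulas above
    if ceil_mode:
        return (in_size - ksize + 2 * padding + stride - 1) // stride + 1
    return (in_size - ksize + 2 * padding) // stride + 1

print(pool2d_out_size(32, 2, 1, 0))        # 31
print(pool2d_out_size(32, 3, 2, 0))        # 15
print(pool2d_out_size(32, 3, 2, 0, True))  # 16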
pool3d¶
-
paddle.fluid.layers.
pool3d
(input, pool_size=-1, pool_type='max', pool_stride=1, pool_padding=0, global_pooling=False, use_cudnn=True, ceil_mode=False, name=None, exclusive=True)[source] Pool3d Operator.
The pooling3d operation calculates the output based on the input, pooling_type, ksize, strides, and paddings parameters. Input(X) and output(Out) are in NCDHW format, where N is batch size, C is the number of channels, and D, H and W are the depth, height and width of the feature, respectively. Parameters(ksize, strides, paddings) are three elements. These three elements represent depth, height and width, respectively. The input(X) size and output(Out) size may be different.
Example:
    Input:
        X shape: \((N, C, D_{in}, H_{in}, W_{in})\)
    Output:
        Out shape: \((N, C, D_{out}, H_{out}, W_{out})\)

For ceil_mode = false:
$$ D_{out} = \frac{(D_{in} - ksize[0] + 2 * paddings[0])}{strides[0]} + 1 $$
$$ H_{out} = \frac{(H_{in} - ksize[1] + 2 * paddings[1])}{strides[1]} + 1 $$
$$ W_{out} = \frac{(W_{in} - ksize[2] + 2 * paddings[2])}{strides[2]} + 1 $$

For ceil_mode = true:
$$ D_{out} = \frac{(D_{in} - ksize[0] + 2 * paddings[0] + strides[0] - 1)}{strides[0]} + 1 $$
$$ H_{out} = \frac{(H_{in} - ksize[1] + 2 * paddings[1] + strides[1] - 1)}{strides[1]} + 1 $$
$$ W_{out} = \frac{(W_{in} - ksize[2] + 2 * paddings[2] + strides[2] - 1)}{strides[2]} + 1 $$

For exclusive = false:
$$ dstart = i * strides[0] - paddings[0] $$
$$ dend = dstart + ksize[0] $$
$$ hstart = j * strides[1] - paddings[1] $$
$$ hend = hstart + ksize[1] $$
$$ wstart = k * strides[2] - paddings[2] $$
$$ wend = wstart + ksize[2] $$
$$ Output(i, j, k) = \frac{sum(Input[dstart:dend, hstart:hend, wstart:wend])}{ksize[0] * ksize[1] * ksize[2]} $$

For exclusive = true:
$$ dstart = max(0, i * strides[0] - paddings[0]) $$
$$ dend = min(D, dstart + ksize[0]) $$
$$ hstart = max(0, j * strides[1] - paddings[1]) $$
$$ hend = min(H, hstart + ksize[1]) $$
$$ wstart = max(0, k * strides[2] - paddings[2]) $$
$$ wend = min(W, wstart + ksize[2]) $$
$$ Output(i, j, k) = \frac{sum(Input[dstart:dend, hstart:hend, wstart:wend])}{(dend - dstart) * (hend - hstart) * (wend - wstart)} $$
- Parameters
input (Variable) – The input tensor of pooling operator. The format of input tensor is NCDHW, where N is batch size, C is the number of channels, D is the depth of the feature, H is the height of the feature, and W is the width of the feature.
pool_size (int|list|tuple) – The pool kernel size. If pool kernel size is a tuple or list, it must contain three integers, (pool_size_Depth, pool_size_Height, pool_size_Width). Otherwise, the pool kernel size will be the cube of an int.
pool_type (string) – (string) Pooling type, can be “max” for max-pooling and “avg” for average-pooling
pool_stride (int) – stride of the pooling layer.
pool_padding (int) – padding size.
global_pooling (bool) – (bool, default false) Whether to use the global pooling. If global_pooling = true, kernel size and paddings will be ignored
use_cudnn (bool) – Only used in the cuDNN kernel; cuDNN needs to be installed. Default: True.
ceil_mode (bool) – (bool, default false) Whether to use the ceil function to calculate output height and width. False is the default. If it is set to False, the floor function will be used
name (str) – A name for this layer(optional). If set None, the layer will be named automatically.
exclusive (bool) – Whether to exclude padding points in average pooling mode, default is true
- Returns
output of pool3d layer.
- Return type
Variable
Examples
import paddle.fluid as fluid
data = fluid.layers.data(name='data', shape=[3, 32, 32, 32], dtype='float32')
pool3d = fluid.layers.pool3d(
    input=data, pool_size=2, pool_type='max', pool_stride=1, global_pooling=False)
pow¶
-
paddle.fluid.layers.
pow
(x, factor=1.0, name=None)[source] Pow Activation Operator.
\(out = x^{factor}\)
- Parameters
x (Variable) – Input of Pow operator
factor (FLOAT|1.0) – The exponential factor of Pow
name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
- Returns
Output of Pow operator
- Return type
output(Variable)
Examples
import paddle.fluid as fluid
x = fluid.layers.data(name="x", shape=[3, 10, 32, 32], dtype="float32")
y = fluid.layers.pow(x, factor=2.0)
prelu¶
-
paddle.fluid.layers.
prelu
(x, mode, param_attr=None, name=None)[source] Equation:
\[y = \max(0, x) + \alpha * \min(0, x)\]There are three modes for the activation:
all: All elements share the same alpha.
channel: Elements in the same channel share the same alpha.
element: All elements do not share alpha. Each element has its own alpha.
- Parameters
x (Variable) – The input tensor.
mode (string) – The mode for weight sharing.
param_attr (ParamAttr|None) – The parameter attribute for the learnable weight (alpha); it can be created by ParamAttr.
name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
- Returns
The output tensor with the same shape as input.
- Return type
Variable
Examples
import paddle.fluid as fluid
from paddle.fluid.param_attr import ParamAttr
x = fluid.layers.data(name="x", shape=[5, 10, 10], dtype="float32")
mode = 'channel'
output = fluid.layers.prelu(x, mode, param_attr=ParamAttr(name='alpha'))
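For mode='all' (a single shared alpha), the equation above reduces to a one-liner in NumPy (illustrative values only):
import numpy as np
alpha = 0.25
x = np.array([[-2.0, -0.5, 0.0, 1.5]])
y = np.maximum(0.0, x) + alpha * np.minimum(0.0, x)
print(y)  # [[-0.5 -0.125 0. 1.5]]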
psroi_pool¶
-
paddle.fluid.layers.
psroi_pool
(input, rois, output_channels, spatial_scale, pooled_height, pooled_width, name=None)[source] PSROIPool Operator
Position-sensitive region of interest pooling (also known as PSROIPooling) performs position-sensitive average pooling on regions of interest specified by the input. It takes as input N position-sensitive score maps and a list of num_rois regions of interest.
PSROIPooling for R-FCN. Please refer to https://arxiv.org/abs/1605.06409 for more details.
- Parameters
input (Variable) – (Tensor), the input of PSROIPoolOp. The format of input tensor is NCHW. Where N is the batch size, C is the number of input channels, H is the height of the input feature map, and W is the width
rois (Variable) – ROIs (Regions of Interest) to pool over. It should be a 2-D LoDTensor of shape (num_rois, 4), and the lod level is 1. Given as [[x1, y1, x2, y2], …], (x1, y1) is the top left coordinates, and (x2, y2) is the bottom right coordinates.
output_channels (integer) – (int), the number of channels of the output feature map. For a task of C classes of objects, output_channels should be (C + 1) for classification only
spatial_scale (float) – (float, default 1.0), Multiplicative spatial scale factor to translate ROI coords from their input scale to the scale used when pooling Default: 1.0
pooled_height (integer) – (int, default 1), the pooled output height Default: 1
pooled_width (integer) – (int, default 1), the pooled output width Default: 1
name (str, default None) – The name of this layer.
- Returns
(Tensor), the output of PSROIPoolOp is a 4-D Tensor with shape (num_rois, output_channels, pooled_h, pooled_w).
- Return type
Variable
Examples
import paddle.fluid as fluid
x = fluid.layers.data(name='x', shape=[490, 28, 28], dtype='float32')
rois = fluid.layers.data(name='rois', shape=[4], lod_level=1, dtype='float32')
pool_out = fluid.layers.psroi_pool(x, rois, 10, 1.0, 7, 7)
py_func¶
-
paddle.fluid.layers.
py_func
(func, x, out, backward_func=None, skip_vars_in_backward_input=None)[source] PyFunc Operator.
Users can use py_func to register operators on the Python side. The inputs of func are LoDTensor, and the outputs can be numpy arrays or LoDTensor. Paddle calls the registered func in the forward pass and calls backward_func in the backward pass (if backward_func is not None).
Users should set the right data types and shapes of out before calling this function. However, the data types and shapes of the gradients of out and x are inferred automatically.
The input order of backward_func is: forward inputs x, forward outputs out, and the backward input gradients of out. If some variables of out have no gradient, the corresponding input tensor is None on the Python side. If some variables of x have no gradient, users should return None in backward_func.
This function can also be used to debug the running network. Users can add a py_func operator without output and print the input x inside func.
- Parameters
func (callable) – forward Python function.
x (Variable|list(Variable)|tuple(Variable)) – inputs of func.
out (Variable|list(Variable)|tuple(Variable)) – outputs of func. Paddle cannot infer the shapes and data types of out, so users should create out beforehand.
backward_func (callable|None) – backward Python function. None means no backward. Default None.
skip_vars_in_backward_input (Variable|list(Variable)|tuple(Variable)) – Variables that are not needed as backward_func inputs. These variables must be any of x and out. If set, these vars would not be inputs of backward_func; only useful when backward_func is not None. Default None.
- Returns
out
- Return type
out (Variable|list(Variable)|tuple(Variable))
Examples
>>> import paddle.fluid as fluid
>>> import numpy as np
>>> import six
>>>
>>> def create_tmp_var(name, dtype, shape):
>>>     return fluid.default_main_program().current_block().create_var(
>>>         name=name, dtype=dtype, shape=shape)
>>>
>>> # tanh activation has been provided by Paddle C++ op
>>> # Here, we only use tanh to be an example to show the usage
>>> # of py_func
>>> def tanh(x):
>>>     return np.tanh(x)
>>>
>>> # forward input x is skipped
>>> def tanh_grad(y, dy):
>>>     return np.array(dy) * (1 - np.square(np.array(y)))
>>>
>>> def debug_func(x):
>>>     print(x)
>>>
>>> def simple_net(img, label):
>>>     hidden = img
>>>     for idx in six.moves.range(4):
>>>         hidden = fluid.layers.fc(hidden, size=200)
>>>         new_hidden = create_tmp_var(name='hidden_{}'.format(idx),
>>>                                     dtype=hidden.dtype, shape=hidden.shape)
>>>
>>>         # user-defined layers with forward and backward
>>>         hidden = fluid.layers.py_func(func=tanh, x=hidden,
>>>                                       out=new_hidden, backward_func=tanh_grad,
>>>                                       skip_vars_in_backward_input=hidden)
>>>
>>>         # user-defined debug layers to print variables
>>>         fluid.layers.py_func(func=debug_func, x=hidden, out=None)
>>>
>>>     prediction = fluid.layers.fc(hidden, size=10, act='softmax')
>>>     loss = fluid.layers.cross_entropy(input=prediction, label=label)
>>>     return fluid.layers.mean(loss)
random_crop¶
-
paddle.fluid.layers.
random_crop
(x, shape, seed=None)[source] This operator takes a batch of instances and does random cropping on each instance. The cropping position differs for each instance and is determined by a uniform random generator. All cropped instances have the same shape, which is determined by the operator's attribute 'shape'.
- Parameters
x (Variable) – A batch of instances to random crop
shape (INTS) – The shape of a cropped instance
seed (int|Variable|None) – The random seed. By default, the seed is obtained from random.randint(-65536, 65535).
- Returns
The cropped instance batch
Examples
>>> import paddle.fluid as fluid >>> img = fluid.layers.data("img", [3, 256, 256]) >>> cropped_img = fluid.layers.random_crop(img, shape=[3, 224, 224])
rank¶
-
paddle.fluid.layers.
rank
(input)[source] Rank Layer
Returns the number of dimensions for a tensor, which is a 0-D int32 Tensor.
- Parameters
input (Variable) – The input variable.
- Returns
The rank of the input variable.
- Return type
Variable
Examples
import paddle.fluid as fluid
input = fluid.layers.data(name="input", shape=[3, 100, 100], dtype="float32")
rank = fluid.layers.rank(input)  # 4
rank_loss¶
-
paddle.fluid.layers.
rank_loss
(label, left, right, name=None)[source] Rank loss layer for RankNet
RankNet is a pairwise ranking model with a training sample consisting of a pair of documents, A and B. Label P indicates whether A is ranked higher than B or not:
P = {0, 1} or {0, 0.5, 1}, where 0.5 means that there is no information about the rank of the input pair.
Rank loss layer takes three inputs: left ( \(o_i\) ), right ( \(o_j\) ) and label ( \(P_{i,j}\) ). The inputs respectively represent RankNet’s output scores for documents A and B and the value of label P. The following equation computes rank loss C_{i,j} from the inputs:
\[ \begin{align}\begin{aligned}\begin{split}C_{i,j} &= -\tilde{P_{ij}} * o_{i,j} + \log(1 + e^{o_{i,j}}) \\\end{split}\\\begin{split}o_{i,j} &= o_i - o_j \\\end{split}\\\tilde{P_{i,j}} &= \left \{0, 0.5, 1 \right \} \ or \ \left \{0, 1 \right \}\end{aligned}\end{align} \]Rank loss layer takes batch inputs with size batch_size (batch_size >= 1).
- Parameters
label (Variable) – Indicates whether A is ranked higher than B or not.
left (Variable) – RankNet’s output score for doc A.
right (Variable) – RankNet’s output score for doc B.
name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
- Returns
The value of rank loss.
- Return type
list
- Raises
ValueError
– Any of label, left, and right is not a variable.
Examples
import paddle.fluid as fluid
label = fluid.layers.data(name="label", shape=[-1, 1], dtype="float32")
left = fluid.layers.data(name="left", shape=[-1, 1], dtype="float32")
right = fluid.layers.data(name="right", shape=[-1, 1], dtype="float32")
out = fluid.layers.rank_loss(label, left, right)
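The RankNet loss above can be checked with NumPy (an illustrative sketch, not the fluid op; the values are made up):
import numpy as np
label = np.array([[1.0], [0.0]])  # P_{i,j}: 1 means doc A should rank above doc B
left = np.array([[2.5], [0.3]])   # o_i
right = np.array([[1.0], [1.2]])  # o_j
o = left - right
loss = -label * o + np.log1p(np.exp(o))
print(loss)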
reduce_all¶
-
paddle.fluid.layers.
reduce_all
(input, dim=None, keep_dim=False, name=None)[source] Computes the logical and of tensor elements over the given dimension.
- Parameters
input (Variable) – The input variable which is a Tensor or LoDTensor.
dim (list|int|None) – The dimension along which the logical and is computed. If None, compute the logical and over all elements of input and return a Tensor variable with a single element, otherwise must be in the range \([-rank(input), rank(input))\). If \(dim[i] < 0\), the dimension to reduce is \(rank + dim[i]\).
keep_dim (bool) – Whether to reserve the reduced dimension in the output Tensor. The result tensor will have one fewer dimension than the input unless keep_dim is true.
name (str|None) – A name for this layer (optional). If set None, the layer will be named automatically.
- Returns
The reduced Tensor variable.
- Return type
Variable
Examples
import paddle.fluid as fluid
import paddle.fluid.layers as layers
import numpy as np

# x is a bool Tensor variable with the following elements:
#    [[True, False]
#     [True, True]]
# Each example is followed by the corresponding output tensor.
x = fluid.layers.data(name='x', shape=[2], dtype='bool')
fluid.layers.reduce_all(x)  # False
fluid.layers.reduce_all(x, dim=0)  # [True, False]
fluid.layers.reduce_all(x, dim=-1)  # [False, True]
fluid.layers.reduce_all(x, dim=1, keep_dim=True)  # [[False], [True]]
reduce_any¶
-
paddle.fluid.layers.
reduce_any
(input, dim=None, keep_dim=False, name=None)[source] Computes the logical or of tensor elements over the given dimension.
- Parameters
input (Variable) – The input variable which is a Tensor or LoDTensor.
dim (list|int|None) – The dimension along which the logical or is computed. If None, compute the logical or over all elements of input and return a Tensor variable with a single element, otherwise must be in the range \([-rank(input), rank(input))\). If \(dim[i] < 0\), the dimension to reduce is \(rank + dim[i]\).
keep_dim (bool) – Whether to reserve the reduced dimension in the output Tensor. The result tensor will have one fewer dimension than the input unless keep_dim is true.
name (str|None) – A name for this layer (optional). If set None, the layer will be named automatically.
- Returns
The reduced Tensor variable.
- Return type
Variable
Examples
import paddle.fluid as fluid
import paddle.fluid.layers as layers
import numpy as np

# x is a bool Tensor variable with the following elements:
#    [[True, False]
#     [False, False]]
# Each example is followed by the corresponding output tensor.
x = fluid.layers.data(name='x', shape=[2], dtype='bool')
fluid.layers.reduce_any(x)  # True
fluid.layers.reduce_any(x, dim=0)  # [True, False]
fluid.layers.reduce_any(x, dim=-1)  # [True, False]
fluid.layers.reduce_any(x, dim=1, keep_dim=True)  # [[True], [False]]
reduce_max¶
-
paddle.fluid.layers.
reduce_max
(input, dim=None, keep_dim=False, name=None)[source] Computes the maximum of tensor elements over the given dimension.
- Parameters
input (Variable) – The input variable which is a Tensor or LoDTensor.
dim (list|int|None) – The dimension along which the maximum is computed. If None, compute the maximum over all elements of input and return a Tensor variable with a single element, otherwise must be in the range \([-rank(input), rank(input))\). If \(dim[i] < 0\), the dimension to reduce is \(rank + dim[i]\).
keep_dim (bool) – Whether to reserve the reduced dimension in the output Tensor. The result tensor will have one fewer dimension than the input unless keep_dim is true.
name (str|None) – A name for this layer (optional). If set None, the layer will be named automatically.
- Returns
The reduced Tensor variable.
- Return type
Variable
Examples
import paddle.fluid as fluid # x is a Tensor variable with the following elements: # [[0.2, 0.3, 0.5, 0.9] # [0.1, 0.2, 0.6, 0.7]] # Each example is followed by the corresponding output tensor. x = fluid.layers.data(name='x', shape=[4, 2], dtype='float32') fluid.layers.reduce_max(x) # [0.9] fluid.layers.reduce_max(x, dim=0) # [0.2, 0.3, 0.6, 0.9] fluid.layers.reduce_max(x, dim=-1) # [0.9, 0.7] fluid.layers.reduce_max(x, dim=1, keep_dim=True) # [[0.9], [0.7]] # y is a Tensor variable with shape [2, 2, 2] and elements as below: # [[[1.0, 2.0], [3.0, 4.0]], # [[5.0, 6.0], [7.0, 8.0]]] # Each example is followed by the corresponding output tensor. y = fluid.layers.data(name='y', shape=[2, 2, 2], dtype='float32') fluid.layers.reduce_max(y, dim=[1, 2]) # [4.0, 8.0] fluid.layers.reduce_max(y, dim=[0, 1]) # [7.0, 8.0]
reduce_mean¶
-
paddle.fluid.layers.
reduce_mean
(input, dim=None, keep_dim=False, name=None)[source] Computes the mean of the input tensor’s elements along the given dimension.
- Parameters
input (Variable) – The input variable which is a Tensor or LoDTensor.
dim (list|int|None) – The dimension along which the mean is computed. If None, compute the mean over all elements of
input
and return a variable with a single element, otherwise it must be in the range \([-rank(input), rank(input))\). If \(dim[i] < 0\), the dimension to reduce is \(rank(input) + dim[i]\).keep_dim (bool) – Whether to reserve the reduced dimension in the output Tensor. The result tensor will have one fewer dimension than the
input
unlesskeep_dim
is true.name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
- Returns
The reduced mean Variable.
- Return type
Variable
Examples
import paddle.fluid as fluid # x is a Tensor variable with the following elements: # [[0.2, 0.3, 0.5, 0.9] # [0.1, 0.2, 0.6, 0.7]] # Each example is followed by the corresponding output tensor. x = fluid.layers.data(name='x', shape=[4, 2], dtype='float32') fluid.layers.reduce_mean(x) # [0.4375] fluid.layers.reduce_mean(x, dim=0) # [0.15, 0.25, 0.55, 0.8] fluid.layers.reduce_mean(x, dim=-1) # [0.475, 0.4] fluid.layers.reduce_mean(x, dim=1, keep_dim=True) # [[0.475], [0.4]] # y is a Tensor variable with shape [2, 2, 2] and elements as below: # [[[1.0, 2.0], [3.0, 4.0]], # [[5.0, 6.0], [7.0, 8.0]]] # Each example is followed by the corresponding output tensor. y = fluid.layers.data(name='y', shape=[2, 2, 2], dtype='float32') fluid.layers.reduce_mean(y, dim=[1, 2]) # [2.5, 6.5] fluid.layers.reduce_mean(y, dim=[0, 1]) # [4.0, 5.0]
reduce_min¶
-
paddle.fluid.layers.
reduce_min
(input, dim=None, keep_dim=False, name=None)[source] Computes the minimum of tensor elements over the given dimension.
- Parameters
input (Variable) – The input variable which is a Tensor or LoDTensor.
dim (list|int|None) – The dimensions along which the minimum is computed. If
None
, compute the minimum over all elements ofinput
and return a Tensor variable with a single element, otherwise must be in the range \([-rank(input), rank(input))\). If \(dim[i] < 0\), the dimension to reduce is \(rank + dim[i]\).keep_dim (bool) – Whether to reserve the reduced dimension in the output Tensor. The result tensor will have one fewer dimension than the
input
unlesskeep_dim
is true.name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
- Returns
The reduced Tensor variable.
- Return type
Variable
Examples
import paddle.fluid as fluid # x is a Tensor variable with the following elements: # [[0.2, 0.3, 0.5, 0.9] # [0.1, 0.2, 0.6, 0.7]] # Each example is followed by the corresponding output tensor. x = fluid.layers.data(name='x', shape=[4, 2], dtype='float32') fluid.layers.reduce_min(x) # [0.1] fluid.layers.reduce_min(x, dim=0) # [0.1, 0.2, 0.5, 0.7] fluid.layers.reduce_min(x, dim=-1) # [0.2, 0.1] fluid.layers.reduce_min(x, dim=1, keep_dim=True) # [[0.2], [0.1]] # y is a Tensor variable with shape [2, 2, 2] and elements as below: # [[[1.0, 2.0], [3.0, 4.0]], # [[5.0, 6.0], [7.0, 8.0]]] # Each example is followed by the corresponding output tensor. y = fluid.layers.data(name='y', shape=[2, 2, 2], dtype='float32') fluid.layers.reduce_min(y, dim=[1, 2]) # [1.0, 5.0] fluid.layers.reduce_min(y, dim=[0, 1]) # [1.0, 2.0]
reduce_prod¶
-
paddle.fluid.layers.
reduce_prod
(input, dim=None, keep_dim=False, name=None)[source] Computes the product of tensor elements over the given dimension.
- Parameters
input (Variable) – The input variable which is a Tensor or LoDTensor.
dim (list|int|None) – The dimensions along which the product is performed. If
None
, multiply all elements of input
and return a Tensor variable with a single element, otherwise must be in the range \([-rank(input), rank(input))\). If \(dim[i] < 0\), the dimension to reduce is \(rank + dim[i]\).keep_dim (bool|False) – Whether to reserve the reduced dimension in the output Tensor. The result tensor will have one fewer dimension than the
input
unlesskeep_dim
is true.name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
- Returns
The reduced Tensor variable.
- Return type
Variable
Examples
import paddle.fluid as fluid # x is a Tensor variable with the following elements: # [[0.2, 0.3, 0.5, 0.9] # [0.1, 0.2, 0.6, 0.7]] # Each example is followed by the corresponding output tensor. x = fluid.layers.data(name='x', shape=[4, 2], dtype='float32') fluid.layers.reduce_prod(x) # [0.0002268] fluid.layers.reduce_prod(x, dim=0) # [0.02, 0.06, 0.3, 0.63] fluid.layers.reduce_prod(x, dim=-1) # [0.027, 0.0084] fluid.layers.reduce_prod(x, dim=1, keep_dim=True) # [[0.027], [0.0084]] # y is a Tensor variable with shape [2, 2, 2] and elements as below: # [[[1.0, 2.0], [3.0, 4.0]], # [[5.0, 6.0], [7.0, 8.0]]] # Each example is followed by the corresponding output tensor. y = fluid.layers.data(name='y', shape=[2, 2, 2], dtype='float32') fluid.layers.reduce_prod(y, dim=[1, 2]) # [24.0, 1680.0] fluid.layers.reduce_prod(y, dim=[0, 1]) # [105.0, 384.0]
reduce_sum¶
-
paddle.fluid.layers.
reduce_sum
(input, dim=None, keep_dim=False, name=None)[source] Computes the sum of tensor elements over the given dimension.
- Parameters
input (Variable) – The input variable which is a Tensor or LoDTensor.
dim (list|int|None) – The dimensions along which the sum is performed. If
None
, sum all elements ofinput
and return a Tensor variable with a single element, otherwise must be in the range \([-rank(input), rank(input))\). If \(dim[i] < 0\), the dimension to reduce is \(rank + dim[i]\).keep_dim (bool|False) – Whether to reserve the reduced dimension in the output Tensor. The result tensor will have one fewer dimension than the
input
unlesskeep_dim
is true.name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
- Returns
The reduced Tensor variable.
- Return type
Variable
Examples
import paddle.fluid as fluid # x is a Tensor variable with following elements: # [[0.2, 0.3, 0.5, 0.9] # [0.1, 0.2, 0.6, 0.7]] # Each example is followed by the corresponding output tensor. x = fluid.layers.data(name='x', shape=[4, 2], dtype='float32') fluid.layers.reduce_sum(x) # [3.5] fluid.layers.reduce_sum(x, dim=0) # [0.3, 0.5, 1.1, 1.6] fluid.layers.reduce_sum(x, dim=-1) # [1.9, 1.6] fluid.layers.reduce_sum(x, dim=1, keep_dim=True) # [[1.9], [1.6]] # y is a Tensor variable with shape [2, 2, 2] and elements as below: # [[[1, 2], [3, 4]], # [[5, 6], [7, 8]]] # Each example is followed by the corresponding output tensor. y = fluid.layers.data(name='y', shape=[2, 2, 2], dtype='float32') fluid.layers.reduce_sum(y, dim=[1, 2]) # [10, 26] fluid.layers.reduce_sum(y, dim=[0, 1]) # [16, 20]
relu¶
-
paddle.fluid.layers.
relu
(x, name=None)[source] Relu takes one input data (Tensor) and produces one output data (Tensor) where the rectified linear function, y = max(0, x), is applied to the tensor elementwise.
\[Out = \max(0, x)\]- Parameters
x (Variable) – The input tensor.
name (str|None, default None) – A name for this layer If set None, the layer will be named automatically.
- Returns
The output tensor with the same shape as input.
- Return type
Variable
Examples
import paddle.fluid as fluid x = fluid.layers.data(name="x", shape=[3, 4], dtype="float32") output = fluid.layers.relu(x)
relu6¶
-
paddle.fluid.layers.
relu6
(x, threshold=6.0, name=None)[source] Relu6 Activation Operator.
\(out = \min(\max(0, x), 6)\)
- Parameters
x (Variable) – Input of Relu6 operator
threshold (FLOAT|6.0) – The threshold value of Relu6
name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
- Returns
Output of Relu6 operator
- Return type
output(Variable)
Examples
import paddle.fluid as fluid x = fluid.layers.data(name="x", shape=[3,10,32,32], dtype="float32") y = fluid.layers.relu6(x, threshold=6.0)
reshape¶
-
paddle.fluid.layers.
reshape
(x, shape, actual_shape=None, act=None, inplace=False, name=None)[source] Gives a new shape to the input Tensor without changing its data.
The target shape can be given by shape or actual_shape. shape is a list of integers, while actual_shape is a tensor variable. actual_shape has a higher priority than shape if it is provided, while shape still should be set correctly to guarantee shape inference at compile time. Some tricks exist when specifying the target shape.
1. -1 means the value of this dimension is inferred from the total element number of x and the remaining dimensions. Thus one and only one dimension can be set to -1.
2. 0 means the actual dimension value is going to be copied from the corresponding dimension of x. The indices of 0s in shape cannot exceed Rank(X).
Here are some examples to explain it.
1. Given a 3-D tensor x with a shape [2, 4, 6], and the target shape is [6, 8], the reshape operator will transform x into a 2-D tensor with shape [6, 8] and leaving x’s data unchanged.
2. Given a 3-D tensor x with a shape [2, 4, 6], and the target shape specified is [2, 3, -1, 2], the reshape operator will transform x into a 4-D tensor with shape [2, 3, 4, 2] and leaving x’s data unchanged. In this case, one dimension of the target shape is set to -1, the value of this dimension is inferred from the total element number of x and remaining dimensions.
3. Given a 3-D tensor x with a shape [2, 4, 6], and the target shape is [-1, 0, 3, 2], the reshape operator will transform x into a 4-D tensor with shape [2, 4, 3, 2] and leaving x’s data unchanged. In this case, besides -1, 0 means the actual dimension value is going to be copied from the corresponding dimension of x.
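Continuing the examples above, the -1/0 inference rule itself is easy to write down. The following is a minimal NumPy-only sketch of that rule (an illustration, not PaddlePaddle code; infer_shape is a hypothetical helper):
import numpy as np

# Resolve 0 (copy the input dimension) and -1 (infer from the remaining sizes).
def infer_shape(in_shape, target):
    resolved = [in_shape[i] if d == 0 else d for i, d in enumerate(target)]
    if -1 in resolved:
        known = int(np.prod([d for d in resolved if d != -1]))
        resolved[resolved.index(-1)] = int(np.prod(in_shape)) // known
    return resolved

print(infer_shape([2, 4, 6], [6, 8]))         # [6, 8]
print(infer_shape([2, 4, 6], [2, 3, -1, 2]))  # [2, 3, 4, 2]
print(infer_shape([2, 4, 6], [-1, 0, 3, 2]))  # [2, 4, 3, 2]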
- Parameters
x (variable) – The input tensor.
shape (list) – The new shape. At most one dimension of the new shape can be -1.
actual_shape (variable) – An optional input. If provided, reshape according to this given shape rather than
shape
specifying shape. That is to sayactual_shape
has a higher priority thanshape
.act (str) – The non-linear activation to be applied to the reshaped tensor variable.
inplace (bool) – If
inplace
is True, the input and output oflayers.reshape
are the same variable, otherwise, the input and output oflayers.reshape
are different variables. Note that ifx
is more than one layer’s input,inplace
must beFalse
.name (str) – The name of this layer. It is optional.
- Returns
The reshaped tensor variable if
act
is None. It is a new tensor variable ifinplace
isFalse
, otherwise it isx
. Ifact
is not None, return the activated tensor variable.- Return type
Variable
- Raises
TypeError
– if actual_shape is neither Variable nor None.
Examples
import paddle.fluid as fluid data = fluid.layers.data( name='data', shape=[2, 4, 6], dtype='float32') reshaped = fluid.layers.reshape( x=data, shape=[-1, 0, 3, 2], inplace=True)
resize_bilinear¶
-
paddle.fluid.layers.
resize_bilinear
(input, out_shape=None, scale=None, name=None, actual_shape=None, align_corners=True, align_mode=1)[source] Resize input by performing bilinear interpolation based on the given output shape, which is specified by actual_shape, out_shape and scale in priority order.
Bilinear interpolation is an extension of linear interpolation for interpolating functions of two variables (e.g. H-direction and W-direction in this op) on a rectilinear 2D grid. The key idea is to perform linear interpolation first in one direction, and then again in the other direction.
For details of bilinear interpolation, please refer to Wikipedia: https://en.wikipedia.org/wiki/Bilinear_interpolation
align_corners and align_mode are optional parameters; the calculation method of interpolation can be selected with them.
Example:
For scale: if align_corners = True && out_size > 1 : scale_factor = (in_size-1.0)/(out_size-1.0) else: scale_factor = float(in_size/out_size) Bilinear interpolation: if: align_corners = False , align_mode = 0 input : (N,C,H_in,W_in) output: (N,C,H_out,W_out) where: H_out = (H_{in}+0.5) * scale_{factor} - 0.5 W_out = (W_{in}+0.5) * scale_{factor} - 0.5 else: input : (N,C,H_in,W_in) output: (N,C,H_out,W_out) where: H_out = H_{in} * scale_{factor} W_out = W_{in} * scale_{factor}
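To make the two branches above concrete, here is a minimal Python sketch of the scale-factor rule (an illustration only, not the PaddlePaddle kernel; scale_factor is a hypothetical helper):
# Scale factor used to map output coordinates back to input coordinates.
def scale_factor(in_size, out_size, align_corners):
    if align_corners and out_size > 1:
        return (in_size - 1.0) / (out_size - 1.0)
    return float(in_size) / out_size

print(scale_factor(6, 12, align_corners=True))   # 0.4545...
print(scale_factor(6, 12, align_corners=False))  # 0.5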
- Parameters
input (Variable) – The input tensor of interpolate operator, This is a 4-D tensor with shape of [N, C, H, w].
out_shape (list|tuple|Variable|None) – Output shape of resize bilinear layer, the shape is (out_h, out_w). Default: None
scale (float|None) – The multiplier for the input height or width. At least one of
out_shape
orscale
must be set. Andout_shape
has a higher priority thanscale
. Default: None.name (str|None) – The output variable name.
actual_shape (Variable) – An optional input to specify output shape dynamically. If provided, image resize according to this given shape rather than
out_shape
andscale
specifying shape. That is to say actual_shape has the highest priority. It is recommended to use actual_shape instead ofout_shape
if you want to specify output shape dynamically. When using actual_shape to specify output shape, one ofout_shape
andscale
should also be set, otherwise errors will occur in the graph-constructing stage. Default: None. align_corners (bool) – an optional bool. Defaults to True. If True, the centers of the 4 corner pixels of the input and output tensors are aligned, preserving the values at the corner pixels; if False, they are not aligned.
align_mode (int) – (int, default 1), optional for bilinear interpolation; can be 0 for src_idx = scale*(dst_index+0.5)-0.5, or 1 for src_idx = scale*dst_index
- Returns
The output tensor of interpolate operator, This is a 4-D tensor with shape of [N, C, H, W].
Examples
import paddle.fluid as fluid input = fluid.layers.data(name="input", shape=[3,6,9], dtype="float32") out = fluid.layers.resize_bilinear(input, out_shape=[12, 12])
resize_nearest¶
-
paddle.fluid.layers.
resize_nearest
(input, out_shape=None, scale=None, name=None, actual_shape=None, align_corners=True)[source] Resize input by performing nearest neighbor interpolation in both the 3rd dimension (height direction) and the 4th dimension (width direction), based on the given output shape, which is specified by actual_shape, out_shape and scale in priority order.
Example:
For scale: if align_corners = True && out_size > 1 : scale_factor = (in_size-1.0)/(out_size-1.0) else: scale_factor = float(in_size/out_size) Nearest neighbor interpolation: if: align_corners = False input : (N,C,H_in,W_in) output: (N,C,H_out,W_out) where: H_out = floor(H_{in} * scale_{factor}) W_out = floor(W_{in} * scale_{factor}) else: align_corners = True input : (N,C,H_in,W_in) output: (N,C,H_out,W_out) where: H_out = round(H_{in} * scale_{factor}) W_out = round(W_{in} * scale_{factor})
For details of nearest neighbor interpolation, please refer to Wikipedia: https://en.wikipedia.org/wiki/Nearest-neighbor_interpolation
- Parameters
input (Variable) – The input tensor of interpolate operator, This is a 4-D tensor with shape of [N, C, H, w].
out_shape (list|tuple|Variable|None) – Output shape of resize nearest layer, the shape is (out_h, out_w). Default: None
scale (float|None) – The multiplier for the input height or width. At least one of
out_shape
orscale
must be set. Andout_shape
has a higher priority thanscale
. Default: None.name (str|None) – The output variable name.
actual_shape (Variable) – An optional input to specify output shape dynamically. If provided, image resize according to this given shape rather than
out_shape
andscale
specifying shape. That is to say actual_shape has the highest priority. It is recommended to use actual_shape instead ofout_shape
if you want to specify output shape dynamically. When using actual_shape to specify output shape, one ofout_shape
andscale
should also be set, otherwise errors will occur in the graph-constructing stage. Default: None. align_corners (bool) – an optional bool. Defaults to True. If True, the centers of the 4 corner pixels of the input and output tensors are aligned, preserving the values at the corner pixels; if False, they are not aligned.
- Returns
The output tensor of interpolate operator, This is a 4-D tensor with shape of [N, C, H, W].
Examples
import paddle.fluid as fluid input = fluid.layers.data(name="input", shape=[3,6,9], dtype="float32") out = fluid.layers.resize_nearest(input, out_shape=[12, 12])
roi_align¶
-
paddle.fluid.layers.
roi_align
(input, rois, pooled_height=1, pooled_width=1, spatial_scale=1.0, sampling_ratio=-1, name=None)[source] RoIAlign Operator
Region of interest align (also known as RoI align) is to perform bilinear interpolation on inputs of nonuniform sizes to obtain fixed-size feature maps (e.g. 7*7)
Each region proposal is divided into equal-sized sections according to pooled_width and pooled_height; the sampling locations keep their original (unquantized) positions.
In each RoI bin, the values of four regularly sampled locations are computed directly through bilinear interpolation. The output is the mean of the four locations, which avoids the misalignment problem.
- Parameters
input (Variable) – (Tensor), The input of ROIAlignOp. The format of input tensor is NCHW. Where N is batch size, C is the number of input channels, H is the height of the feature, and W is the width of the feature
rois (Variable) – ROIs (Regions of Interest) to pool over.
pooled_height (integer) – (int, default 1), The pooled output height Default: 1
pooled_width (integer) – (int, default 1), The pooled output width Default: 1
spatial_scale (float) – (float, default 1.0), Multiplicative spatial scale factor to translate ROI coords from their input scale to the scale used when pooling Default: 1.0
sampling_ratio (int) – (int, default -1), number of sampling points in the interpolation grid. If <= 0, grid points are adaptive to roi_width and pooled_w, and likewise for height. Default: -1
- Returns
(Tensor), The output of ROIAlignOp is a 4-D tensor with shape (num_rois, channels, pooled_h, pooled_w).
- Return type
Variable
Examples
import paddle.fluid as fluid x = fluid.layers.data( name='data', shape=[256, 32, 32], dtype='float32') rois = fluid.layers.data( name='rois', shape=[4], dtype='float32') align_out = fluid.layers.roi_align(input=x, rois=rois, pooled_height=7, pooled_width=7, spatial_scale=0.5, sampling_ratio=-1)
roi_pool¶
-
paddle.fluid.layers.
roi_pool
(input, rois, pooled_height=1, pooled_width=1, spatial_scale=1.0)[source] ROIPool Operator
Region of interest pooling (also known as RoI pooling) performs max pooling on inputs of nonuniform sizes to obtain fixed-size feature maps (e.g. 7*7).
The operator has three steps:
Dividing each region proposal into equal-sized sections with the pooled_width and pooled_height
Finding the largest value in each section
Copying these max values to the output buffer
ROI Pooling for Faster-RCNN. The link below is a further introduction: https://stackoverflow.com/questions/43430056/what-is-roi-layer-in-fast-rcnn
- Parameters
input (Variable) – (Tensor), the input of ROIPoolOp. The format of input tensor is NCHW. Where N is batch size, C is the number of input channels, H is the height of the feature, and W is the width of the feature
rois (Variable) – ROIs (Regions of Interest) to pool over.
pooled_height (integer) – (int, default 1), The pooled output height Default: 1
pooled_width (integer) – (int, default 1), The pooled output width Default: 1
spatial_scale (float) – (float, default 1.0), Multiplicative spatial scale factor to translate ROI coords from their input scale to the scale used when pooling Default: 1.0
- Returns
(Tensor), The output of ROIPoolOp is a 4-D tensor with shape (num_rois, channels, pooled_h, pooled_w).
- Return type
Variable
Examples
import paddle.fluid as fluid x = fluid.layers.data( name='x', shape=[8, 112, 112], dtype='float32') rois = fluid.layers.data( name='roi', shape=[4], lod_level=1, dtype='float32') pool_out = fluid.layers.roi_pool( input=x, rois=rois, pooled_height=7, pooled_width=7, spatial_scale=1.0)
row_conv¶
-
paddle.fluid.layers.
row_conv
(input, future_context_size, param_attr=None, act=None)[source] Row-convolution operator
The row convolution is called lookahead convolution. This operator was introduced in the following paper for DeepSpeech2: http://www.cs.cmu.edu/~dyogatam/papers/wang+etal.iclrworkshop2016.pdf
The main motivation is that a bidirectional RNN, useful in DeepSpeech like speech models, learns representation for a sequence by performing a forward and a backward pass through the entire sequence. However, unlike unidirectional RNNs, bidirectional RNNs are challenging to deploy in an online and low-latency setting. The lookahead convolution incorporates information from future subsequences in a computationally efficient manner to improve unidirectional recurrent neural networks. The row convolution operator is different from the 1D sequence convolution, and is computed as follows:
Given an input sequence \(X\) of length \(t\) and input dimension \(D\), and a filter (\(W\)) of size \(context \times D\), the output sequence is convolved as:
$$ out_{i} = \sum_{j=i}^{i + context - 1} X_{j} \cdot W_{j-i} $$
In the above equation:
\(Out_{i}\): The i-th row of output variable with shape [1, D].
\(context\): Future context size.
\(X_{j}\): The j-th row of input variable with shape [1, D].
\(W_{j-i}\): The (j-i)-th row of parameters with shape [1, D].
More details about row_conv please refer to the design document https://github.com/PaddlePaddle/Paddle/issues/2228#issuecomment-303903645 .
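As a rough illustration of the formula above, a lookahead convolution can be sketched in NumPy as follows (illustration only, not the PaddlePaddle kernel; here context denotes the number of kernel rows, i.e. future_context_size + 1 as noted in the parameter description below):
import numpy as np

T, D, context = 5, 3, 3        # time steps, feature dim, kernel rows
X = np.random.rand(T, D)       # input sequence
W = np.random.rand(context, D) # one weight row per lookahead step

out = np.zeros_like(X)
for i in range(T):
    # steps beyond the end of the sequence contribute nothing
    for j in range(i, min(i + context, T)):
        out[i] += X[j] * W[j - i]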
- Parameters
input (Variable) – the input(X) is a LodTensor, which supports variable time-length input sequences. The underlying tensor in this LoDTensor is a matrix with shape (T x N), where T is the total time steps in this mini-batch and N is the input data dimension.
future_context_size (int) – Future context size. Please note, the shape of convolution kernel is [future_context_size + 1, D].
param_attr (ParamAttr) – Attributes of parameters, including name, initializer etc.
act (str) – Non-linear activation to be applied to output variable.
- Returns
the output(Out) is a LodTensor, which supports variable time-length input sequences. The underlying tensor in this LodTensor is a matrix with shape T x N, i.e., the same shape as X.
Examples
import paddle.fluid as fluid x = fluid.layers.data(name='x', shape=[16], dtype='float32', lod_level=1) out = fluid.layers.row_conv(input=x, future_context_size=2)
sampled_softmax_with_cross_entropy¶
-
paddle.fluid.layers.
sampled_softmax_with_cross_entropy
(logits, label, num_samples, num_true=1, remove_accidental_hits=True, use_customized_samples=False, customized_samples=None, customized_probabilities=None, seed=0)[source] Sampled Softmax With Cross Entropy Operator.
Cross-entropy loss with sampled softmax is widely used as the output layer when the number of output classes is large. This operator draws a number of samples for all examples, and computes the softmax-normalized values for each row of the sampled tensor, after which the cross-entropy loss is computed.
Because this operator performs a softmax on logits internally, it expects unscaled logits. This operator should not be used with the output of softmax operator since that would produce incorrect results.
For examples with T true labels (T >= 1), we assume that each true label has a probability of 1/T. For each example, S samples are generated using a log-uniform distribution. True labels are concatenated with these samples to form T + S samples for each example. So, assuming the shape of logits is [N x K], the shape of the samples is [N x (T+S)]. For each sampled label, a probability is calculated, which corresponds to the Q(y|x) in [Jean et al., 2014](http://arxiv.org/abs/1412.2007).
Logits are gathered according to the sampled labels. If remove_accidental_hits is True and a sample[i, j] accidentally hits a true label, 1e20 is subtracted from the corresponding sampled_logits[i, j] to make its softmax result close to zero. The sampled logits then have logQ(y|x) subtracted from them, and these adjusted logits and re-indexed labels are used to compute a softmax with cross entropy.
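The sampling and adjustment steps described above can be sketched roughly in NumPy (an illustration under simplifying assumptions, not the PaddlePaddle kernel: a toy batch, T = 1 true label, and a uniform stand-in for the log-uniform proposal Q):
import numpy as np

rng = np.random.default_rng(0)
N, K, T, S = 2, 10, 1, 4                       # batch, classes, true labels, samples
logits = rng.normal(size=(N, K))
true_labels = rng.integers(0, K, size=(N, T))
sampled = rng.integers(0, K, size=(S,))        # stand-in for log-uniform samples

# concatenate true labels with sampled labels: shape [N, T + S]
labels_full = np.concatenate([true_labels, np.tile(sampled, (N, 1))], axis=1)
sampled_logits = np.take_along_axis(logits, labels_full, axis=1)

# remove accidental hits: a sampled class equal to a true label gets 1e20 subtracted
hits = labels_full[:, T:, None] == true_labels[:, None, :]
sampled_logits[:, T:][hits.any(axis=-1)] -= 1e20

# subtract log Q(y|x); a uniform proposal stands in for the log-uniform distribution
q = np.full(labels_full.shape, 1.0 / K)
adjusted = sampled_logits - np.log(q)

# softmax with cross entropy over the T + S columns; the true label sits in column 0
probs = np.exp(adjusted - adjusted.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)
loss = -np.log(probs[:, 0])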
- Parameters
logits (Variable) – The unscaled log probabilities, which is a 2-D tensor with shape [N x K]. N is the batch_size, and K is the class number.
label (Variable) – The ground truth which is a 2-D tensor. Label is a Tensor<int64> with shape [N x T], where T is the number of true labels per example.
num_samples (int) – The number of samples for each example; num_samples should be less than the number of classes.
num_true (int) – The number of target classes per training example.
remove_accidental_hits (bool) – A flag indicating whether to remove accidental hits when sampling. If True and a sample[i, j] accidentally hits a true label, then 1e20 is subtracted from the corresponding sampled_logits[i, j] to make its softmax result close to zero. Default is True.
use_customized_samples (bool) – Whether to use customized samples and probabilities to sample logits.
customized_samples (Variable) – User defined samples, which is a 2-D tensor with shape [N, T + S]. S is the num_samples, and T is the number of true labels per example.
customized_probabilities (Variable) – User defined probabilities of samples, a 2-D tensor which has the same shape with customized_samples.
seed (int) – The random seed for generating random number, which is used in the process of sampling. Default is 0.
- Returns
- Return the cross entropy loss which is a 2-D tensor with shape
[N x 1].
- Return type
Variable
Examples
import paddle.fluid as fluid input = fluid.layers.data(name='data', shape=[256], dtype='float32') label = fluid.layers.data(name='label', shape=[5], dtype='int64') fc = fluid.layers.fc(input=input, size=100) out = fluid.layers.sampled_softmax_with_cross_entropy( logits=fc, label=label, num_samples=25)
sampling_id¶
-
paddle.fluid.layers.
sampling_id
(x, min=0.0, max=1.0, seed=0, dtype='float32')[source] SamplingId Operator. A layer for sampling an id from the multinomial distribution given by the input, sampling one id for each sample.
- Parameters
x (Variable) – The input tensor of softmax. 2-D with shape [batch_size, input_feature_dimensions]
min (Float) – Minimum value of random. (float, default 0.0)
max (Float) – Maximum value of random. (float, default 1.0)
seed (int) – Random seed used for the random number engine. 0 means use a seed generated by the system. Note that if seed is not 0, this operator will always generate the same random numbers every time. (int, default 0)
dtype (np.dtype|core.VarDesc.VarType|str) – The type of output data: float32, float16, int, etc.
- Returns
SamplingId data tensor
- Return type
out (Variable)
Examples
import paddle.fluid as fluid x = fluid.layers.data( name="X", shape=[13, 11], dtype='float32', append_batch_size=False) out = fluid.layers.sampling_id(x)
scale¶
-
paddle.fluid.layers.
scale
(x, scale=1.0, bias=0.0, bias_after_scale=True, act=None, name=None)[source] Scale operator
Apply scaling and bias addition to the input tensor.
if bias_after_scale=True:
$$Out = scale*X + bias$$
else:
$$Out = scale*(X + bias)$$
- Parameters
x (Variable) – (Tensor) Input tensor of scale operator
scale (FLOAT) – The scaling factor of the scale operator
bias (FLOAT) – The bias of the scale operator
bias_after_scale (BOOLEAN) – Apply bias addition after or before scaling. It is useful for numeric stability in some circumstances
act (basestring|None) – Activation applied to the output.
name (basestring|None) – Name of the output.
- Returns
(Tensor) Output tensor of scale operator
- Return type
out(Variable)
Examples
import paddle.fluid as fluid x = fluid.layers.data(name="X", shape=[1, 2, 5, 5], dtype='float32') y = fluid.layers.scale(x, scale = 2.0, bias = 1.0)
scatter¶
-
paddle.fluid.layers.
scatter
(input, index, updates, name=None, overwrite=True)[source] Scatter Layer
Output is obtained by updating the input on selected indices on the first axis.
\[ \begin{align}\begin{aligned}Out &= X\\Out[Ids] &= Updates\end{aligned}\end{align} \]- Parameters
input (Variable) – The source input with rank>=1.
index (Variable) – The index input with rank=1. Its dtype should be int32 or int64 as it is used as indexes.
updates (Variable) – The updated value of scatter op.
name (str|None) – The output variable name. Default None.
overwrite (bool) – The mode for updating the output when duplicate indices occur. If True, use the overwrite mode to update the output at the same index; if False, use the accumulate mode. Default value is True. You can set overwrite=False to implement scatter_add.
- Returns
The output is a tensor with the same shape as input.
- Return type
output (Variable)
Examples
import paddle.fluid as fluid input = fluid.layers.data(name='data', shape=[3, 5, 9], dtype='float32', append_batch_size=False) index = fluid.layers.data(name='index', shape=[3], dtype='int64', append_batch_size=False) updates = fluid.layers.data(name='update', shape=[3, 5, 9], dtype='float32', append_batch_size=False) output = fluid.layers.scatter(input, index, updates)
selu¶
-
paddle.fluid.layers.
selu
(x, scale=None, alpha=None, name=None)[source] Selu Operator.
The equation is: \[f(x) = \lambda \begin{cases} x, & \text{if } x > 0 \\ \alpha e^{x} - \alpha, & \text{if } x \le 0 \end{cases}\]
The input X can carry the LoD (Level of Details) information, or not. And the output shares the LoD information with input X.
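With the default constants listed in the parameters below, the formula can be sketched in NumPy as follows (illustration only, not the PaddlePaddle kernel):
import numpy as np

scale = 1.0507009873554804934193349852946
alpha = 1.6732632423543772848170429916717

def selu(x):
    # scale * (x if x > 0 else alpha * exp(x) - alpha)
    return scale * np.where(x > 0, x, alpha * np.exp(x) - alpha)

print(selu(np.array([-1.0, 0.0, 1.0])))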
- Parameters
x (Variable) – The input tensor.
scale (float, None) – If the scale is not set, the default value is 1.0507009873554804934193349852946. For more information about this value, please refer to: https://arxiv.org/abs/1706.02515.
alpha (float, None) – If the alpha is not set, the default value is 1.6732632423543772848170429916717. For more information about this value, please refer to: https://arxiv.org/abs/1706.02515.
name (str|None, default None) – A name for this layer If set None, the layer will be named automatically.
- Returns
The output tensor with the same shape as input.
- Return type
Variable
Examples
import paddle.fluid as fluid input = fluid.layers.data( name="input", shape=[3, 9, 5], dtype="float32") output = fluid.layers.selu(input)
sequence_concat¶
-
paddle.fluid.layers.
sequence_concat
(input, name=None)[source] Sequence Concat Op It will concatenate LoD tensors by their sequence information. For example: LoD of X1 = [0, 3, 7] LoD of X2 = [0, 7, 9] Result LoD is [0, (3+7), (7+9)], i.e. [0, 10, 16]
- Parameters
input (list) – List of Variables to be concatenated.
name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
- Returns
Output variable of the concatenation.
- Return type
Variable
Examples
import paddle.fluid as fluid x = fluid.layers.data(name='x', shape=[10], dtype='float32') y = fluid.layers.data(name='y', shape=[10], dtype='float32') out = fluid.layers.sequence_concat(input=[x, y])
sequence_conv¶
-
paddle.fluid.layers.
sequence_conv
(input, num_filters, filter_size=3, filter_stride=1, padding=None, bias_attr=None, param_attr=None, act=None, name=None)[source] This function creates the op for sequence_conv, using the inputs and other convolutional configurations for the filters and stride as given in the input parameters to the function.
- Parameters
input (Variable) – (LoDTensor) the input(X) is a LodTensor, which supports variable-time length input sequence. The underlying tensor in this LoDTensor is a matrix with shape (T, N), where T is the total time steps in this mini-batch and N is the input_hidden_size
num_filters (int) – number of filters.
filter_size (int) – the filter size (H and W).
filter_stride (int) – stride of the filter.
padding (bool) – if True, add paddings.
bias_attr (ParamAttr|bool|None) – The parameter attribute for the bias of sequence_conv. If it is set to False, no bias will be added to the output units. If it is set to None or one attribute of ParamAttr, sequence_conv will create ParamAttr as bias_attr. If the Initializer of the bias_attr is not set, the bias is initialized zero. Default: None.
param_attr (ParamAttr|None) – The parameter attribute for learnable parameters/weights of sequence_conv. If it is set to None or one attribute of ParamAttr, sequence_conv will create ParamAttr as param_attr. If the Initializer of the param_attr is not set, the parameter is initialized with Xavier. Default: None.
act (str) – Activation type, if it is set to None, activation is not appended. Default: None.
name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically. Default: None.
- Returns
output of sequence_conv
- Return type
Variable
Examples
import paddle.fluid as fluid x = fluid.layers.data(name='x', shape=[10,10], append_batch_size=False, dtype='float32') x_conved = fluid.layers.sequence_conv(x,2)
sequence_enumerate¶
-
paddle.fluid.layers.
sequence_enumerate
(input, win_size, pad_value=0, name=None)[source] Generate a new sequence for the input index sequence, which enumerates all the sub-sequences of length win_size of the input. The enumerated sequence has the same 1st dimension as the input variable, and its 2nd dimension is win_size, padded with pad_value if necessary during generation.
Case 1: Input: X.lod = [[0, 3, 5]] X.data = [[1], [2], [3], [4], [5]] X.dims = [5, 1] Attrs: win_size = 2 pad_value = 0 Output: Out.lod = [[0, 3, 5]] Out.data = [[1, 2], [2, 3], [3, 0], [4, 5], [5, 0]] Out.dims = [5, 2]
- Parameters
input (Variable) – The input variable which is a index sequence.
win_size (int) – The window size for enumerating all sub-sequences.
pad_value (int) – The padding value, default 0.
- Returns
The enumerate sequence variable which is a LoDTensor.
- Return type
Variable
Examples
import paddle.fluid as fluid x = fluid.layers.data(name='x', shape=[-1, 1], dtype='int32', lod_level=1) out = fluid.layers.sequence_enumerate(input=x, win_size=3, pad_value=0)
sequence_expand¶
-
paddle.fluid.layers.
sequence_expand
(x, y, ref_level=-1, name=None)[source] Sequence Expand Layer. This layer will expand the input variable x according to specified level lod of y. Please note that lod level of x is at most 1 and rank of x is at least 2. When rank of x is greater than 2, then it would be viewed as a 2-D tensor. Following examples will explain how sequence_expand works:
* Case 1 x is a LoDTensor: x.lod = [[2, 2]] x.data = [[a], [b], [c], [d]] x.dims = [4, 1] y is a LoDTensor: y.lod = [[2, 2], [3, 3, 1, 1]] ref_level: 0 then output is a 1-level LoDTensor: out.lod = [[2, 2, 2, 2]] out.data = [[a], [b], [a], [b], [c], [d], [c], [d]] out.dims = [8, 1] * Case 2 x is a Tensor: x.data = [[a], [b], [c]] x.dims = [3, 1] y is a LoDTensor: y.lod = [[2, 0, 3]] ref_level: -1 then output is a Tensor: out.data = [[a], [a], [c], [c], [c]] out.dims = [5, 1]
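For intuition, Case 2 above can be reproduced with plain NumPy (illustration only, not the PaddlePaddle kernel): y's lod at the reference level, written in length form as [2, 0, 3], gives the repeat count for each row of x.
import numpy as np

x = np.array([['a'], ['b'], ['c']])
repeats = [2, 0, 3]                 # ref-level lod of y in length form
out = np.repeat(x, repeats, axis=0)
# out == [['a'], ['a'], ['c'], ['c'], ['c']]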
- Parameters
x (Variable) – The input variable which is a Tensor or LoDTensor.
y (Variable) – The input variable which is a LoDTensor.
ref_level (int) – Lod level of y to be referred by x. If set to -1, refer the last level of lod.
name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
- Returns
The expanded variable which is a LoDTensor.
- Return type
Variable
Examples
import paddle.fluid as fluid import paddle.fluid.layers as layers x = fluid.layers.data(name='x', shape=[10], dtype='float32') y = fluid.layers.data(name='y', shape=[10, 20], dtype='float32', lod_level=1) out = layers.sequence_expand(x=x, y=y, ref_level=0)
sequence_expand_as¶
-
paddle.fluid.layers.
sequence_expand_as
(x, y, name=None)[source] Sequence Expand As Layer. This layer will expand the input variable x according to the zeroth-level lod of y. The current implementation requires that the lod level of Input(Y) be 1, and that the first dimension of Input(X) equal the size of Input(Y)’s zeroth-level lod; the lod of Input(X) is not considered.
Following examples will explain how sequence_expand_as works:
* Case 1: Given a 1-level LoDTensor input(X) X.data = [[a], [b], [c], [d]] X.dims = [4, 1] and input(Y) Y.lod = [[0, 3, 6, 7, 8]] ref_level: 0 then we get 1-level LoDTensor Out.lod = [[0, 3, 6, 7, 8]] Out.data = [[a], [a], [a], [b], [b], [b], [c], [d]] Out.dims = [8, 1] * Case 2: Given a common Tensor input(X) X.data = [[a, b], [c, d], [e, f]] X.dims = [3, 2] and input(Y) Y.lod = [[0, 2, 3, 6]] ref_level: 0 then we get a common LoDTensor Out.lod = [[0, 2, 3, 6]] Out.data = [[a, b], [a, b] [c, d], [e, f], [e, f], [e, f]] Out.dims = [6, 2]
- Parameters
x (Variable) – The input variable which is a Tensor or LoDTensor.
y (Variable) – The input variable which is a LoDTensor.
name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
- Returns
The expanded variable which is a LoDTensor.
- Return type
Variable
Examples
import paddle.fluid as fluid import paddle.fluid.layers as layers x = fluid.layers.data(name='x', shape=[10], dtype='float32') y = fluid.layers.data(name='y', shape=[10, 20], dtype='float32', lod_level=1) out = layers.sequence_expand_as(x=x, y=y)
sequence_first_step¶
-
paddle.fluid.layers.
sequence_first_step
(input)[source] This function gets the first step of sequence.
x is a 1-level LoDTensor: x.lod = [[2, 3, 2]] x.data = [1, 3, 2, 4, 6, 5, 1] x.dims = [7, 1] then output is a Tensor: out.dim = [3, 1] with condition len(x.lod[-1]) == out.dims[0] out.data = [1, 2, 5], where 1=first(1,3), 2=first(2,4,6), 5=first(5,1)
- Parameters
input (variable) – The input variable which is a LoDTensor.
- Returns
The sequence’s first step variable which is a Tensor.
Examples
import paddle.fluid as fluid x = fluid.layers.data(name='x', shape=[7, 1], dtype='float32', lod_level=1) x_first_step = fluid.layers.sequence_first_step(input=x)
sequence_last_step¶
-
paddle.fluid.layers.
sequence_last_step
(input)[source] This function gets the last step of sequence.
x is a 1-level LoDTensor: x.lod = [[2, 3, 2]] x.data = [1, 3, 2, 4, 6, 5, 1] x.dims = [7, 1] then output is a Tensor: out.dim = [3, 1] with condition len(x.lod[-1]) == out.dims[0] out.data = [3, 6, 1], where 3=last(1,3), 6=last(2,4,6), 1=last(5,1)
- Parameters
input (variable) – The input variable which is a LoDTensor.
- Returns
The sequence’s last step variable which is a Tensor.
Examples
import paddle.fluid as fluid x = fluid.layers.data(name='x', shape=[7, 1], dtype='float32', lod_level=1) x_last_step = fluid.layers.sequence_last_step(input=x)
sequence_mask¶
-
paddle.fluid.layers.
sequence_mask
(x, maxlen=None, dtype='int64', name=None)[source] SequenceMask Layer
This layer outputs a mask according to the input
x
andmaxlen
with data type ofdtype
.Supposing
x
is a Tensor with shape [d_1, d_2, …, d_n], they
is a mask with shape [d_1, d_2, …, d_n, maxlen], where:\[y(i_1, i_2,..., i_n, j) = (j < x(i_1, i_2,..., i_n))\]- Parameters
x (Variable) – Input tensor of sequence_mask layer, whose elements are integers less than
maxlen
.maxlen (int|None) – Maximum length of the sequence. If
maxlen
is None, it will be replaced with \(max(x)\). dtype (np.dtype|core.VarDesc.VarType|str) – Data type of the output.
name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
- Returns
The output sequence mask.
- Return type
Variable
Examples
import paddle.fluid as fluid import paddle.fluid.layers as layers x = fluid.layers.data(name='x', shape=[10], dtype='float32', lod_level=1) mask = layers.sequence_mask(x=x)
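For a 1-D input of sequence lengths, the formula above reduces to a simple comparison against a range; a minimal NumPy sketch (illustration only, not the PaddlePaddle kernel):
import numpy as np

x = np.array([2, 3, 1])          # hypothetical sequence lengths
maxlen = x.max()                 # default behaviour when maxlen is None
mask = (np.arange(maxlen)[None, :] < x[:, None]).astype('int64')
# mask == [[1, 1, 0],
#          [1, 1, 1],
#          [1, 0, 0]]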
sequence_pad¶
-
paddle.fluid.layers.
sequence_pad
(x, pad_value, maxlen=None, name=None)[source] Sequence Pad Operator
This operator pads sequences in a same batch to a consistent length. The length is specified by attribute ‘padded_length’. New elements, whose values are specified by input ‘PadValue’, will be appended to the end of each sequence, to make their final lengths consistent.
Following are cases to better explain how this works:
Case 1:
Given a 1-level LoDTensor input(X): X.lod = [[0, 2, 5]] X.data = [a, b, c, d, e] and Input(PadValue): PadValue.data = [0] and attribute ‘padded_length’ = 4, then we get LoDTensor: Out.data = [[a, b, 0, 0], [c, d, e, 0]] Length.data = [[2], [3]]
Case 2:
Given a 1-level LoDTensor input(X): X.lod = [[0, 2, 5]] X.data = [[a1, a2], [b1, b2], [c1, c2], [d1, d2], [e1, e2]] and Input(PadValue): PadValue.data = [0] and attribute ‘padded_length’ = -1, which means using the length of the longest input sequence (3 in this case), then we get LoDTensor: Out.data = [[[a1, a2], [b1, b2], [0, 0]], [[c1, c2], [d1, d2], [e1, e2]]] Length.data = [[2], [3]]
Case 3:
Given a 1-level LoDTensor input(X): X.lod = [[0, 2, 5]] X.data = [[a1, a2], [b1, b2], [c1, c2], [d1, d2], [e1, e2]] and Input(PadValue): PadValue.data = [p1, p2] and attribute ‘padded_length’ = -1, which means using the length of the longest input sequence (3 in this case), then we get LoDTensor: Out.data = [[[a1, a2], [b1, b2], [p1, p2]], [[c1, c2], [d1, d2], [e1, e2]]] Length.data = [[2], [3]]
- Parameters
x (Variable) – Input variable which should contain lod information.
pad_value (Variable) – The Variable that holds values that will be fill into padded steps. It can be a scalar or a tensor whose shape equals to time steps in sequences. If it’s a scalar, it will be automatically broadcasted to the shape of time step.
maxlen (int, default None) – The length of padded sequences. It can be None or any positive int. When it is None, all sequences will be padded up to the length of the longest one among them; when it is a certain positive value, it must be greater than the length of the longest original sequence.
name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
- Returns
- The padded sequence batch and the original lengths before
padding. All sequences have the same length.
- Return type
Variable
Examples
import paddle.fluid as fluid import numpy x = fluid.layers.data(name='y', shape=[10, 5], dtype='float32', lod_level=1) pad_value = fluid.layers.assign( input=numpy.array([0.0], dtype=numpy.float32)) out = fluid.layers.sequence_pad(x=x, pad_value=pad_value)
sequence_pool¶
-
paddle.fluid.layers.
sequence_pool
(input, pool_type, is_test=False, pad_value=0.0)[source] This function adds the operator for sequence pooling. It pools the features of all time-steps of each instance, and is applied on top of the input using the pool_type mentioned in the parameters.
It supports four pool_type:
average: \(Out[i] = \frac{\sum_i X_i}{N}\)
sum: \(Out[i] = \sum_jX_{ij}\)
sqrt: \(Out[i] = \frac{\sum_jX_{ij}}{\sqrt{len(X_i)}}\)
max: \(Out[i] = max(X_i)\)
x is a 1-level LoDTensor and **pad_value** = 0.0: x.lod = [[2, 3, 2, 0]] x.data = [1, 3, 2, 4, 6, 5, 1] x.dims = [7, 1] then output is a Tensor: out.dim = [4, 1] with condition len(x.lod[-1]) == out.dims[0] for different pool_type: average: out.data = [2, 4, 3, 0.0], where 2=(1+3)/2, 4=(2+4+6)/3, 3=(5+1)/2 sum : out.data = [4, 12, 6, 0.0], where 4=1+3, 12=2+4+6, 6=5+1 sqrt : out.data = [2.82, 6.93, 4.24, 0.0], where 2.82=(1+3)/sqrt(2), 6.93=(2+4+6)/sqrt(3), 4.24=(5+1)/sqrt(2) max : out.data = [3, 6, 5, 0.0], where 3=max(1,3), 6=max(2,4,6), 5=max(5,1) last : out.data = [3, 6, 1, 0.0], where 3=last(1,3), 6=last(2,4,6), 1=last(5,1) first : out.data = [1, 2, 5, 0.0], where 1=first(1,3), 2=first(2,4,6), 5=first(5,1) and all above 0.0 = **pad_value**.
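The pooled values in the example above can be checked with a small NumPy sketch (illustration only; PaddlePaddle operates on LoDTensors rather than plain arrays):
import numpy as np

data = np.array([1, 3, 2, 4, 6, 5, 1], dtype=np.float32)
lengths = [2, 3, 2, 0]           # x.lod[-1] in length form
pad_value = 0.0

out_avg, out_sqrt = [], []
start = 0
for n in lengths:
    seg = data[start:start + n]
    start += n
    if n == 0:                   # empty sequence: use pad_value
        out_avg.append(pad_value)
        out_sqrt.append(pad_value)
    else:
        out_avg.append(seg.mean())
        out_sqrt.append(seg.sum() / np.sqrt(n))

print(out_avg)   # [2.0, 4.0, 3.0, 0.0]
print(out_sqrt)  # [2.828..., 6.928..., 4.242..., 0.0]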
- Parameters
input (variable) – The input variable which is a LoDTensor.
pool_type (string) – The pooling type of sequence_pool. It supports average, sum, sqrt and max.
is_test (bool) – Used to distinguish training from scoring mode. Default False.
pad_value (float) – Used to pad the pooling result for empty input sequence.
- Returns
The sequence pooling variable which is a Tensor.
Examples
import paddle.fluid as fluid x = fluid.layers.data(name='x', shape=[7, 1], dtype='float32', lod_level=1) avg_x = fluid.layers.sequence_pool(input=x, pool_type='average') sum_x = fluid.layers.sequence_pool(input=x, pool_type='sum') sqrt_x = fluid.layers.sequence_pool(input=x, pool_type='sqrt') max_x = fluid.layers.sequence_pool(input=x, pool_type='max') last_x = fluid.layers.sequence_pool(input=x, pool_type='last') first_x = fluid.layers.sequence_pool(input=x, pool_type='first')
sequence_reshape¶
-
paddle.fluid.layers.
sequence_reshape
(input, new_dim)[source] Sequence Reshape Layer
This layer will rearrange the input sequences. The new dimension is set by user. Length of each sequence is computed according to original length, original dimension and new dimension. The following example will help to illustrate the function of this layer:
x is a LoDTensor: x.lod = [[0, 2, 6]] x.data = [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10], [11, 12]] x.dims = [6, 2] set new_dim = 4 then out is a LoDTensor: out.lod = [[0, 1, 3]] out.data = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]] out.dims = [3, 4]
Currently, only 1-level LoDTensor is supported and please make sure (original length * original dimension) can be divided by new dimension with no remainder for each sequence.
- Parameters
input (Variable) – A 2-D LoDTensor with shape being [N, M] where M for dimension.
new_dim (int) – New dimension that the input LoDTensor is reshaped to.
- Returns
Reshaped LoDTensor according to new dimension.
- Return type
Variable
Examples
import paddle.fluid as fluid x = fluid.layers.data(name='x', shape=[2, 6], append_batch_size=False, dtype='float32', lod_level=1) x_reshaped = fluid.layers.sequence_reshape(input=x, new_dim=4)
sequence_reverse¶
-
paddle.fluid.layers.
sequence_reverse
(x, name=None)[source] SequenceReverse Operator.
Reverse each sequence in input X along dim 0.
Assuming X is a LoDTensor with dims [5, 4] and lod [[0, 2, 5]], where:
X.data() = [ [1, 2, 3, 4], [5, 6, 7, 8], # the 0-th sequence with length 2 [9, 10, 11, 12], [13, 14, 15, 16], [17, 18, 19, 20] # the 1-st sequence with length 3 ]
The output Y would be a LoDTensor sharing the same dims and lod with input X, and:
Y.data() = [ [5, 6, 7, 8], [1, 2, 3, 4], # the reversed 0-th sequence with length 2 [17, 18, 19, 20], [13, 14, 15, 16], [9, 10, 11, 12] # the reversed 1-st sequence with length 3 ]
This Operator is useful to build a reverse dynamic RNN network.
This Operator only supports one-level lod currently.
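The per-sequence reversal in the example above can be sketched with NumPy as follows (illustration only, not the PaddlePaddle kernel; the lod is given in offset form):
import numpy as np

x = np.arange(1, 21).reshape(5, 4)   # the X.data() of the example above
lod = [0, 2, 5]                      # offsets of the two sequences
y = x.copy()
for s, e in zip(lod[:-1], lod[1:]):
    y[s:e] = x[s:e][::-1]            # reverse each sequence along dim 0
# y now matches Y.data() in the example above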
- Parameters
x (Variable) – The input LoDTensor of sequence_reverse op
name (basestring|None) – Name of the output.
- Returns
The output LoDTensor of sequence_reverse op
- Return type
out(Variable)
Examples
import paddle.fluid as fluid x = fluid.layers.data(name='x', shape=[2, 6], dtype='float32') x_reversed = fluid.layers.sequence_reverse(x)
sequence_scatter¶
-
paddle.fluid.layers.
sequence_scatter
(input, index, updates, name=None)[source] Sequence Scatter Layer
This operator scatters the Updates tensor to the input X. It uses the LoD information of Ids to select the rows to update, and uses the values in Ids as the columns to update in each row of X.
Here is an example:
Given the following input:
input.data = [[1.0, 1.0, 1.0, 1.0, 1.0, 1.0], [1.0, 1.0, 1.0, 1.0, 1.0, 1.0], [1.0, 1.0, 1.0, 1.0, 1.0, 1.0]] input.dims = [3, 6] index.data = [[0], [1], [2], [5], [4], [3], [2], [1], [3], [2], [5], [4]] index.lod = [[0, 3, 8, 12]] updates.data = [[0.3], [0.3], [0.4], [0.1], [0.2], [0.3], [0.4], [0.0], [0.2], [0.3], [0.1], [0.4]] updates.lod = [[ 0, 3, 8, 12]]
Then we have the output:
out.data = [[1.3, 1.3, 1.4, 1.0, 1.0, 1.0], [1.0, 1.0, 1.4, 1.3, 1.2, 1.1], [1.0, 1.0, 1.3, 1.2, 1.4, 1.1]] out.dims = X.dims = [3, 6]
- Parameters
input (Variable) – The source input with rank>=1.
index (Variable) – A LoD Tensor. The index input of sequence scatter op where input will be updated. The index input with rank=1. Its dtype should be int32 or int64 as it is used as indexes.
updates (Variable) – A LoD Tensor. The values to scatter to the input tensor X, must be a LoDTensor with the same LoD information as index.
name (str|None) – The output variable name. Default None.
- Returns
The output is a tensor with the same shape as input.
- Return type
Variable
Examples
import paddle.fluid as fluid import paddle.fluid.layers as layers input = layers.data( name="x", shape=[3, 6], append_batch_size=False, dtype='float32' ) index = layers.data( name='index', shape=[1], dtype='int32') updates = layers.data( name='updates', shape=[1], dtype='float32') output = fluid.layers.sequence_scatter(input, index, updates)
sequence_slice¶
-
paddle.fluid.layers.
sequence_slice
(input, offset, length, name=None)[source] Sequence Slice Layer
The layer crops a subsequence from given sequence with given start offset and subsequence length.
It only supports sequence data (LoDTensor with lod_level equal to 1).
- Case: Given the input Variable **input**: input.data = [[a1, a2], [b1, b2], [c1, c2], [d1, d2], [e1, e2]], input.lod = [[3, 2]], input.dims = (5, 2), with offset.data = [[0], [1]] and length.data = [[2], [1]], the output Variable will be out.data = [[a1, a2], [b1, b2], [e1, e2]], out.lod = [[2, 1]], out.dims = (3, 2).
Note
The first dimension size of input, offset and length should be equal. The offset should start from 0.
- Parameters
input (Variable) – The input Variable which consists of the complete sequences.
offset (Variable) – The offset to slice each sequence.
length (Variable) – The length of each subsequence.
name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
- Returns
The output subsequences.
- Return type
Variable
Examples
import paddle.fluid as fluid import numpy as np seqs = fluid.layers.data(name='x', shape=[10, 5], dtype='float32', lod_level=1) offset = fluid.layers.assign(input=np.array([[0, 1]]).astype("int32")) length = fluid.layers.assign(input=np.array([[2, 1]]).astype("int32")) subseqs = fluid.layers.sequence_slice(input=seqs, offset=offset, length=length)
sequence_softmax¶
-
paddle.fluid.layers.
sequence_softmax
(input, use_cudnn=False, name=None)[source] This function computes the softmax activation among all time-steps for each sequence. The dimension of each time-step should be 1. Thus, the shape of input Tensor can be either \([N, 1]\) or \([N]\), where \(N\) is the sum of the length of all sequences.
For i-th sequence in a mini-batch:
\[Out(X[lod[i]:lod[i+1]], :) = \frac{\exp(X[lod[i]:lod[i+1], :])}{\sum(\exp(X[lod[i]:lod[i+1], :]))}\]For example, for a mini-batch of 3 sequences with variable-length, each containing 2, 3, 2 time-steps, the lod of which is [0, 2, 5, 7], then softmax will be computed among \(X[0:2, :]\), \(X[2:5, :]\), \(X[5:7, :]\), and \(N\) turns out to be 7.
- Parameters
input (Variable) – The input variable which is a LoDTensor.
use_cudnn (bool) – Use cudnn kernel or not, it is valid only when the cudnn library is installed. Default: False.
name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically. Default: None.
- Returns
output of sequence_softmax
- Return type
Variable
Examples
import paddle.fluid as fluid x = fluid.layers.data(name='x', shape=[7, 1], dtype='float32', lod_level=1) x_sequence_softmax = fluid.layers.sequence_softmax(input=x)
sequence_unpad¶
-
paddle.fluid.layers.
sequence_unpad
(x, length, name=None)[source] Sequence Unpad Layer
This layer removes the padding data in the input sequences and convert them into sequences with actual length as output, identitied by lod information.
Example: Given input Variable **x**: x.data = [[ 1.0, 2.0, 3.0, 4.0, 5.0], [ 6.0, 7.0, 8.0, 9.0, 10.0], [11.0, 12.0, 13.0, 14.0, 15.0]], in which there are 3 sequences padded to length 5, and the actual length specified by input Variable **length**: length.data = [[2], [3], [4]], after unpadding, the output Variable will be: out.data = [[1.0, 2.0, 6.0, 7.0, 8.0, 11.0, 12.0, 13.0, 14.0]] out.lod = [[2, 3, 4]]
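The unpad step of the example above can be checked with a small NumPy sketch (illustration only; PaddlePaddle additionally attaches the LoD information to the output):
import numpy as np

x = np.array([[ 1.0,  2.0,  3.0,  4.0,  5.0],
              [ 6.0,  7.0,  8.0,  9.0, 10.0],
              [11.0, 12.0, 13.0, 14.0, 15.0]])
length = np.array([2, 3, 4])
out = np.concatenate([row[:n] for row, n in zip(x, length)])
# out == [1., 2., 6., 7., 8., 11., 12., 13., 14.]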
- Parameters
x (Variable) – Input Variable which contains the padded sequences with equal length.
length (Variable) – The Variable that specifies the actual length of sequences after unpadding.
name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
- Returns
The Variable contains the unpadded sequences.
- Return type
Variable
Examples
import paddle.fluid as fluid x = fluid.layers.data(name='x', shape=[10, 5], dtype='float32') len = fluid.layers.data(name='length', shape=[1], dtype='int64') out = fluid.layers.sequence_unpad(x=x, length=len)
shape¶
-
paddle.fluid.layers.
shape
(input)[source] Shape Layer
Get the shape of the input.
- Parameters
input (Variable) – The input variable.
- Returns
The shape of the input variable.
- Return type
Variable
Examples
import paddle.fluid as fluid input = fluid.layers.data( name="input", shape=[3, 100, 100], dtype="float32") out = fluid.layers.shape(input)
shuffle_channel¶
-
paddle.fluid.layers.
shuffle_channel
(x, group, name=None)[source] Shuffle Channel Operator
This operator shuffles the channels of input x. It divides the input channels in each group into
group
subgroups, and obtains a new order by selecting one element from every subgroup in turn. Please refer to the paper https://arxiv.org/pdf/1707.01083.pdf
Given a 4-D tensor input with the shape (N, C, H, W): input.shape = (1, 4, 2, 2) input.data = [[[[0.1, 0.2], [0.2, 0.3]], [[0.3, 0.4], [0.4, 0.5]], [[0.5, 0.6], [0.6, 0.7]], [[0.7, 0.8], [0.8, 0.9]]]] Given group: 2 then we get a 4-D tensor out with the same shape as the input: out.shape = (1, 4, 2, 2) out.data = [[[[0.1, 0.2], [0.2, 0.3]], [[0.5, 0.6], [0.6, 0.7]], [[0.3, 0.4], [0.4, 0.5]], [[0.7, 0.8], [0.8, 0.9]]]]
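The reordering in the example above follows the usual reshape-transpose-reshape trick; a minimal NumPy sketch (illustration only, not the PaddlePaddle kernel):
import numpy as np

N, C, H, W, group = 1, 4, 2, 2, 2
x = np.arange(N * C * H * W).reshape(N, C, H, W)
# split channels into `group` groups, swap the two channel axes, flatten back
out = x.reshape(N, group, C // group, H, W).transpose(0, 2, 1, 3, 4).reshape(N, C, H, W)
# channel order becomes [0, 2, 1, 3], matching the example above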
- Parameters
x (Variable) – The input tensor variable. It should be a 4-D tensor with shape [N, C, H, W]
group (int) – Indicates the number of subgroups. It should divide the number of channels.
- Returns
the channels shuffling result is a tensor variable with the same shape and same type as the input.
- Return type
out(Variable)
- Raises
ValueError
– If group is not an int type variable.
Examples
import paddle.fluid as fluid input = fluid.layers.data(name='input', shape=[4,2,2], dtype='float32') out = fluid.layers.shuffle_channel(x=input, group=2)
sigmoid_cross_entropy_with_logits¶
-
paddle.fluid.layers.
sigmoid_cross_entropy_with_logits
(x, label, ignore_index=-100, name=None, normalize=False)[source] SigmoidCrossEntropyWithLogits Operator.
This measures the element-wise probability error in classification tasks in which each class is independent. This can be thought of as predicting labels for a data-point, where labels are not mutually exclusive. For example, a news article can be about politics, technology or sports at the same time or none of these.
The logistic loss is given as follows:
$$loss = -Labels * log(sigma(X)) - (1 - Labels) * log(1 - sigma(X))$$
We know that $$sigma(X) = \frac{1}{1 + exp(-X)}$$. By substituting this we get:
$$loss = X - X * Labels + log(1 + exp(-X))$$
For stability and to prevent overflow of $$exp(-X)$$ when X < 0, we reformulate the loss as follows:
$$loss = max(X, 0) - X * Labels + log(1 + exp(-|X|))$$
Both the input X and Labels can carry the LoD (Level of Details) information. However the output only shares the LoD with input X.
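The numerically stable form of the loss above is easy to check directly; a minimal NumPy sketch (illustration only, not the PaddlePaddle kernel):
import numpy as np

def sigmoid_ce_with_logits(x, labels):
    # max(x, 0) - x * labels + log(1 + exp(-|x|))
    return np.maximum(x, 0) - x * labels + np.log1p(np.exp(-np.abs(x)))

x = np.array([-2.0, 0.5, 3.0])
labels = np.array([0.0, 1.0, 1.0])
print(sigmoid_ce_with_logits(x, labels))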
- Parameters
x (Variable) – (Tensor, default Tensor<float>), a 2-D tensor with shape N x D, where N is the batch size and D is the number of classes. This input is a tensor of logits computed by the previous operator. Logits are unscaled log probabilities given as log(p/(1-p))
label (Variable) – (Tensor, default Tensor<float>), a 2-D tensor of the same type and shape as X. This input is a tensor of probabilistic labels for each logit
ignore_index (int) – (int, default kIgnoreIndex), Specifies a target value that is ignored and does not contribute to the input gradient
name (basestring|None) – Name of the output.
normalize (bool) – If true, divide the output by the number of targets != ignore_index.
- Returns
(Tensor, default Tensor<float>), a 2-D tensor with shape N x D of elementwise logistic losses
- Return type
out(Variable)
Examples
import paddle.fluid as fluid input = fluid.layers.data( name='data', shape=[10], dtype='float32') label = fluid.layers.data( name='data', shape=[10], dtype='float32') loss = fluid.layers.sigmoid_cross_entropy_with_logits( x=input, label=label, ignore_index=-1, normalize=True) # or False # loss = fluid.layers.reduce_sum(loss) # summation of loss
sign¶
-
paddle.fluid.layers.
sign
(x)[source] sign
This function returns sign of every element in x: 1 for positive, -1 for negative and 0 for zero.
- Parameters
x (Variable|numpy.ndarray) – The input tensor.
- Returns
The output sign tensor with identical shape and dtype to x.
- Return type
Variable
Examples
import paddle.fluid as fluid
import numpy as np
# [1, 0, -1]
data = fluid.layers.sign(np.array([3, 0, -2]))
similarity_focus¶
-
paddle.fluid.layers.
similarity_focus
(input, axis, indexes, name=None)[source] SimilarityFocus Operator
Generate a similarity focus mask with the same shape of input using the following method:
Extract the 3-D tensor(here the first dimension is BatchSize) corresponding to the axis according to the indexes. For example, if axis=1 and indexes=[a], it will get the matrix T=X[:, a, :, :]. In this case, if the shape of input X is (BatchSize, A, B, C), the shape of tensor T is (BatchSize, B, C).
For each index, find the largest numbers in the tensor T, such that each row and each column contains at most one selected number: once the largest number has been found in the i-th row and the j-th column, the remaining numbers in that row and column are skipped, and the next largest number is selected from the rest. There will be min(B, C) selected numbers in total. Mark the corresponding positions of the 3-D similarity focus mask as 1, otherwise as 0. Do an element-wise OR over all indexes.
Broadcast the 3-D similarity focus mask to the same shape of input X.
Refer to Similarity Focus Layer
* Example: Given a 4-D tensor x with the shape (BatchSize, C, A, B), where C is the number of channels and the shape of the feature map is (A, B):
    x.shape = (2, 3, 2, 2)
    x.data = [[[[0.8, 0.1], [0.4, 0.5]],
               [[0.9, 0.7], [0.9, 0.9]],
               [[0.8, 0.9], [0.1, 0.2]]],
              [[[0.2, 0.5], [0.3, 0.4]],
               [[0.9, 0.7], [0.8, 0.4]],
               [[0.0, 0.2], [0.4, 0.7]]]]
Given axis: 1 (the axis of the channel)
Given indexes: [0]
then we get a 4-D tensor out with the same shape as input x:
    out.shape = (2, 3, 2, 2)
    out.data = [[[[1.0, 0.0], [0.0, 1.0]],
                 [[1.0, 0.0], [0.0, 1.0]],
                 [[1.0, 0.0], [0.0, 1.0]]],
                [[[0.0, 1.0], [1.0, 0.0]],
                 [[0.0, 1.0], [1.0, 0.0]],
                 [[0.0, 1.0], [1.0, 0.0]]]]
- Parameters
input (Variable) – The input tensor variable(default float). It should be a 4-D tensor with shape [BatchSize, A, B, C].
axis (int) – Indicating the dimension to be selected. It can only be 1, 2 or 3.
indexes (list) – Indicating the indexes of the selected dimension.
- Returns
A tensor variable with the same shape and same type as the input.
- Return type
Variable
Examples
import paddle.fluid as fluid
data = fluid.layers.data(
    name='data', shape=[-1, 3, 2, 2], dtype='float32')
fluid.layers.similarity_focus(input=data, axis=1, indexes=[0])
slice¶
-
paddle.fluid.layers.
slice
(input, axes, starts, ends)[source] Slice Operator.
Produces a slice of the input tensor along multiple axes. Similar to numpy: https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html Slice uses the axes, starts and ends attributes to specify the start and end positions for each axis in the list of axes; it uses this information to slice the input data tensor. If a negative value is passed for any of the start or end indices, it represents the number of elements before the end of that dimension. If the value passed to starts or ends is larger than n (the number of elements in this dimension), it represents n. For slicing to the end of a dimension of unknown size, it is recommended to pass in INT_MAX. The lengths of axes, starts and ends must be equal. The following examples explain how slice works:
Case 1:
    Given:
        data = [ [1, 2, 3, 4], [5, 6, 7, 8], ]
        axes = [0, 1]
        starts = [1, 0]
        ends = [2, 3]
    Then:
        result = [ [5, 6, 7], ]
Case 2:
    Given:
        data = [ [1, 2, 3, 4], [5, 6, 7, 8], ]
        axes = [0, 1]
        starts = [0, 1]
        ends = [-1, 1000]
    Then:
        result = [ [2, 3, 4], ]
- Parameters
input (Variable) – Tensor of data to extract slices from.
axes (List) – (list<int>) Axes that starts and ends apply to. It is optional; if not present, it will be treated as [0, 1, …, len(starts) - 1]
starts (List) – (list<int>) Starting indices of corresponding axis in axes
ends (List) – (list<int>) Ending indices of the corresponding axes in axes
- Returns
Sliced data tensor
- Return type
out (Variable)
Examples
import paddle.fluid as fluid

starts = [1, 0, 2]
ends = [3, 3, 4]
axes = [0, 1, 2]
input = fluid.layers.data(
    name="input", shape=[3, 4, 5, 6], dtype='float32')
out = fluid.layers.slice(input, axes=axes, starts=starts, ends=ends)
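For intuition, the two cases above correspond directly to NumPy basic slicing; this is only an analogy, not the operator itself:
import numpy as np

data = np.array([[1, 2, 3, 4],
                 [5, 6, 7, 8]])
# Case 1: axes=[0, 1], starts=[1, 0], ends=[2, 3]
case1 = data[1:2, 0:3]       # [[5, 6, 7]]
# Case 2: axes=[0, 1], starts=[0, 1], ends=[-1, 1000]
# -1 counts from the end of the axis; 1000 is clipped to the axis size
case2 = data[0:-1, 1:1000]   # [[2, 3, 4]]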
smooth_l1¶
-
paddle.fluid.layers.
smooth_l1
(x, y, inside_weight=None, outside_weight=None, sigma=None)[source] This layer computes the smooth L1 loss for Variables x and y. It takes the first dimension of x and y as the batch size. For each instance, it computes the smooth L1 loss element by element first and then sums all the losses, so the shape of the output Variable is [batch_size, 1].
- Parameters
x (Variable) – A tensor with rank at least 2. The input value of smooth L1 loss op with shape [batch_size, dim1, …, dimN].
y (Variable) – A tensor with rank at least 2. The target value of the smooth L1 loss op with the same shape as x.
inside_weight (Variable|None) – A tensor with rank at least 2. This input is optional and should have the same shape as x. If provided, the result of (x - y) will be multiplied by this tensor element by element.
outside_weight (Variable|None) – A tensor with rank at least 2. This input is optional and should have the same shape as x. If provided, the output smooth L1 loss will be multiplied by this tensor element by element.
sigma (float|None) – Hyper parameter of the smooth L1 loss layer. A float scalar with default value 1.0.
- Returns
The output smooth L1 loss with shape [batch_size, 1].
- Return type
Variable
Examples
import paddle.fluid as fluid
data = fluid.layers.data(name='data', shape=[128], dtype='float32')
label = fluid.layers.data(
    name='label', shape=[100], dtype='float32')
fc = fluid.layers.fc(input=data, size=100)
out = fluid.layers.smooth_l1(x=fc, y=label)
soft_relu¶
-
paddle.fluid.layers.
soft_relu
(x, threshold=40.0, name=None)[source] SoftRelu Activation Operator.
\(out = \ln(1 + \exp(\max(\min(x, threshold), -threshold)))\)
- Parameters
x (Variable) – Input of SoftRelu operator
threshold (FLOAT|40.0) – The threshold value of SoftRelu
name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
- Returns
Output of SoftRelu operator
- Return type
output(Variable)
Examples
import paddle.fluid as fluid
x = fluid.layers.data(name="x", shape=[3,16,16], dtype="float32")
y = fluid.layers.soft_relu(x, threshold=20.0)
softmax¶
-
paddle.fluid.layers.
softmax
(input, use_cudnn=False, name=None, axis=-1)[source] The input of the softmax operator is a tensor of any rank. The output tensor has the same shape as the input.
The dimension axis of the input tensor will be permuted to the last. Then the input tensor will be logically flattened to a 2-D matrix. The matrix's second dimension (row length) is the same as the size of dimension axis of the input tensor, and the first dimension (column length) is the product of all other dimensions of the input tensor. For each row of the matrix, the softmax operator squashes the K-dimensional (K is the width of the matrix, which is also the size of the input tensor's dimension axis) vector of arbitrary real values to a K-dimensional vector of real values in the range [0, 1] that add up to 1. It computes the exponential of each entry along the given dimension and divides it by the sum of the exponentials of all entries along that dimension; this ratio is the output of the softmax operator.
For each row \(i\) and each column \(j\) in the matrix, we have:
\[Out[i, j] = \frac{\exp(X[i, j])}{\sum_j \exp(X[i, j])}\]
- Parameters
input (Variable) – The input variable.
use_cudnn (bool) – Whether to use the cudnn kernel; it is valid only when the cudnn library is installed. use_cudnn is set to False by default to improve numerical stability. Default: False
name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically. Default: None.
axis (int) – The index of the dimension along which to perform softmax calculations. It should be in range \([-1, rank - 1]\), where \(rank\) is the rank of the input variable. Default: -1.
- Returns
output of softmax
- Return type
Variable
Examples
import paddle.fluid as fluid
x = fluid.layers.data(name='x', shape=[2], dtype='float32')
fc = fluid.layers.fc(input=x, size=10)
# perform softmax in the second dimension
softmax = fluid.layers.softmax(input=fc, axis=1)
# perform softmax in the last dimension
softmax = fluid.layers.softmax(input=fc, axis=-1)
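The formula above amounts to a few lines of NumPy; this sketch (illustration only, not the fluid kernel) applies softmax along an arbitrary axis, mirroring the axis argument:
import numpy as np

def softmax_np(x, axis=-1):
    # subtracting the max is a standard stability trick and does not change the result
    shifted = x - np.max(x, axis=axis, keepdims=True)
    e = np.exp(shifted)
    return e / np.sum(e, axis=axis, keepdims=True)

x = np.random.rand(2, 10).astype('float32')
out = softmax_np(x, axis=1)
# every row sums to 1
assert np.allclose(out.sum(axis=1), 1.0, atol=1e-6)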
softmax_with_cross_entropy¶
-
paddle.fluid.layers.
softmax_with_cross_entropy
(logits, label, soft_label=False, ignore_index=-100, numeric_stable_mode=True, return_softmax=False, axis=-1)[source] Softmax With Cross Entropy Operator.
Cross entropy loss with softmax is used as the output layer extensively. This operator computes the softmax normalized values for dimension axis of the input tensor, after which the cross-entropy loss is computed. This provides a more numerically stable gradient.
Because this operator performs a softmax on logits internally, it expects unscaled logits. This operator should not be used with the output of the softmax operator, since that would produce incorrect results.
When the attribute soft_label is set to False, this operator expects mutually exclusive hard labels: each sample in a batch is in exactly one class with a probability of 1.0, and each sample in the batch has a single label. The equation is as follows:
Hard label (one-hot label, so every sample has exactly one class)
\[loss_j = -\text{logit}_{label_j} + \log\left(\sum_{i=0}^{K}\exp(\text{logit}_i)\right), j = 1,..., K\]Soft label (each sample can have a distribution over all classes)
\[loss_j = -\sum_{i=0}^{K}\text{label}_i \left(\text{logit}_i - \log\left(\sum_{i=0}^{K} \exp(\text{logit}_i)\right)\right), j = 1,...,K\]3) If
numeric_stable_mode
isTrue
, softmax is calculated first by:\[ \begin{align}\begin{aligned}max_j &= \max_{i=0}^{K}{\text{logit}_i}\\log\_max\_sum_j &= \log\sum_{i=0}^{K}\exp(logit_i - max_j)\\softmax_j &= \exp(logit_j - max_j - {log\_max\_sum}_j)\end{aligned}\end{align} \]and then cross entropy loss is calculated by softmax and label.
- Parameters
logits (Variable) – The input tensor of unscaled log probabilities.
label (Variable) – The ground truth tensor. If soft_label is set to True, Label is a Tensor<float/double> with the same shape as logits. If soft_label is set to False, Label is a Tensor<int64> with the same shape as logits, except that the size of dimension axis is 1.
soft_label (bool) – A flag to indicate whether to interpret the given labels as soft labels. Default False.
ignore_index (int) – Specifies a target value that is ignored and does not contribute to the input gradient. Only valid if soft_label is set to False. Default: kIgnoreIndex (-100).
numeric_stable_mode (bool) – A flag to indicate whether to use a more numerically stable algorithm. Only valid when soft_label is False and GPU is used. When soft_label is True or CPU is used, the algorithm is always numerically stable. Note that the speed may be slower when using the stable algorithm. Default: True.
return_softmax (bool) – A flag indicating whether to return the softmax along with the cross entropy loss. Default: False.
axis (int) – The index of the dimension along which to perform softmax calculations. It should be in range \([-1, rank - 1]\), where \(rank\) is the rank of the input logits. Default: -1.
- Returns
Return the cross entropy loss if return_softmax is False, otherwise the tuple (loss, softmax). softmax has the same shape as the input logits, and the cross entropy loss has the same shape as the input logits, except that the size of dimension axis is 1.
- Return type
Variable or Tuple of two Variables
Examples
import paddle.fluid as fluid
data = fluid.layers.data(name='data', shape=[128], dtype='float32')
label = fluid.layers.data(name='label', shape=[1], dtype='int64')
fc = fluid.layers.fc(input=data, size=100)
out = fluid.layers.softmax_with_cross_entropy(logits=fc, label=label)
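For hard labels, the numerically stable computation in step 3 can be sketched in NumPy as follows (illustration only, with made-up logits and labels):
import numpy as np

logits = np.array([[2.0, 1.0, 0.1],
                   [0.3, 2.5, 0.2]])
labels = np.array([0, 1])                 # one hard label per sample

max_logit = logits.max(axis=1, keepdims=True)
shifted = logits - max_logit
log_sum_exp = np.log(np.exp(shifted).sum(axis=1, keepdims=True))
softmax = np.exp(shifted - log_sum_exp)
# loss_j = -logit_label + log(sum_i exp(logit_i)), computed on the shifted logits
loss = log_sum_exp.squeeze(1) - shifted[np.arange(len(labels)), labels]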
space_to_depth¶
-
paddle.fluid.layers.
space_to_depth
(x, blocksize, name=None)[source] Rearranges the input LoDTensor with layout [batch, channel, height, width] from space to depth according to blocksize.
This op rearranges blocks of spatial data, into depth. More specifically, this op outputs a copy of the input LoDtensor where values from the height and width dimensions are moved to the channel dimension. The attr blocksize indicates the input block size.
space_to_depth will reorganize the elements of an input with shape [batch, channel, height, width] according to blocksize to construct an output with shape [batch, channel * blocksize * blocksize, height/blocksize, width/blocksize]:
This operation is useful for resizing the activations between convolutions while keeping all data.
Non-overlapping blocks of size blocksize x blocksize are rearranged into depth at each location.
The depth of the output tensor is blocksize * blocksize * input channel.
The Y, X coordinates within each block of the input become the high-order component of the output channel index.
channel should be divisible by the square of blocksize.
height and width should be divisible by blocksize.
- Parameters
x (variable) – The input LoDtensor.
blocksize (int) – The blocksize to select the elements on each feature map; it should be > 2
- Returns
The output LoDtensor.
- Return type
Variable
- Raises
TypeError
– blocksize type must be a long.
Examples
import paddle.fluid as fluid
import numpy as np

data = fluid.layers.data(
    name='data', shape=[1, 4, 2, 2], dtype='float32', append_batch_size=False)
space_to_depthed = fluid.layers.space_to_depth(x=data, blocksize=2)

exe = fluid.Executor(fluid.CUDAPlace(0))
data_np = np.arange(0, 16).reshape((1, 4, 2, 2)).astype('float32')
out_main = exe.run(fluid.default_main_program(),
                   feed={'data': data_np},
                   fetch_list=[space_to_depthed])
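A NumPy sketch of the rearrangement described above (an illustration under the stated layout; the exact channel ordering of the real operator may differ):
import numpy as np

def space_to_depth_np(x, blocksize):
    # [N, C, H, W] -> [N, C*blocksize*blocksize, H/blocksize, W/blocksize]
    n, c, h, w = x.shape
    bs = blocksize
    x = x.reshape(n, c, h // bs, bs, w // bs, bs)
    # move the in-block offsets in front of the channel axis so they become
    # the high-order part of the output channel index
    x = x.transpose(0, 3, 5, 1, 2, 4)
    return x.reshape(n, c * bs * bs, h // bs, w // bs)

out = space_to_depth_np(np.arange(16).reshape(1, 4, 2, 2).astype('float32'), 2)
# out.shape == (1, 16, 1, 1)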
spectral_norm¶
-
paddle.fluid.layers.
spectral_norm
(weight, dim=0, power_iters=1, eps=1e-12, name=None)[source] Spectral Normalization Layer
This layer calculates the spectral normalization value of the weight parameters of fc, conv1d, conv2d and conv3d layers, which should be 2-D, 3-D, 4-D or 5-D parameters. The calculations are shown as follows.
Step 1: Generate a vector U with shape [H] and a vector V with shape [W], where H is the size of the dim-th dimension of the input weight, and W is the product of the remaining dimensions.
Step 2: power_iters should be a positive integer; perform the following calculations with U and V for power_iters rounds.
\[ \begin{align}\begin{aligned}\mathbf{v} := \frac{\mathbf{W}^{T} \mathbf{u}}{\|\mathbf{W}^{T} \mathbf{u}\|_2}\\\mathbf{u} := \frac{\mathbf{W} \mathbf{v}}{\|\mathbf{W} \mathbf{v}\|_2}\end{aligned}\end{align} \]Step 3: Calculate \(\sigma(\mathbf{W})\) and normalize weight values.
\[ \begin{align}\begin{aligned}\sigma(\mathbf{W}) = \mathbf{u}^{T} \mathbf{W} \mathbf{v}\\\mathbf{W} = \frac{\mathbf{W}}{\sigma(\mathbf{W})}\end{aligned}\end{align} \]Refer to Spectral Normalization .
- Parameters
weight (Variable) – The input weight tensor of the spectral_norm operator. This can be a 2-D, 3-D, 4-D or 5-D tensor, which is the weight of an fc, conv1d, conv2d or conv3d layer.
dim (int) – The index of dimension which should be permuted to the first before reshaping Input(Weight) to matrix, it should be set as 0 if Input(Weight) is the weight of fc layer, and should be set as 1 if Input(Weight) is the weight of conv layer, default 0
power_iters (int) – number of power iterations to calculate spectral norm, default 1
eps (float) – epsilon for numerical stability in calculating norms
name (str) – The name of this layer. It is optional.
- Returns
A tensor variable of weight parameters after spectral normalization.
- Return type
Variable
Examples
import paddle.fluid as fluid

weight = fluid.layers.data(name='weight', shape=[2, 8, 32, 32],
                           append_batch_size=False, dtype='float32')
x = fluid.layers.spectral_norm(weight=weight, dim=1, power_iters=2)
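The three steps can be mimicked with a short NumPy power-iteration sketch (illustration only; the real layer keeps U and V as persistent parameters, which this sketch does not):
import numpy as np

def spectral_norm_np(weight, dim=0, power_iters=1, eps=1e-12):
    w = np.moveaxis(weight, dim, 0)
    h = w.shape[0]
    w = w.reshape(h, -1)                      # Step 1: matrix of shape [H, W]
    u = np.random.normal(size=h)
    v = np.random.normal(size=w.shape[1])
    for _ in range(power_iters):              # Step 2: power iteration
        v = w.T @ u
        v /= np.linalg.norm(v) + eps
        u = w @ v
        u /= np.linalg.norm(u) + eps
    sigma = u @ w @ v                         # Step 3: estimate of the largest singular value
    return weight / sigma

normalized = spectral_norm_np(np.random.rand(8, 2, 3, 3), dim=1, power_iters=2)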
split¶
-
paddle.fluid.layers.
split
(input, num_or_sections, dim=-1, name=None)[source] Split the input tensor into multiple sub-tensors.
- Parameters
input (Variable) – The input variable which is a Tensor or LoDTensor.
num_or_sections (int|list) – If num_or_sections is an integer, it indicates the number of equal-sized sub-tensors that the tensor will be divided into. If num_or_sections is a list of integers, the length of the list indicates the number of sub-tensors and the integers indicate the sizes of the sub-tensors' dim dimension in order.
dim (int) – The dimension along which to split. If \(dim < 0\), the dimension to split along is \(rank(input) + dim\).
name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
- Returns
The list of segmented tensor variables.
- Return type
list(Variable)
Examples
import paddle.fluid as fluid

# input is a variable whose shape is [-1, 3, 9, 5]
input = fluid.layers.data(
    name="input", shape=[3, 9, 5], dtype="float32")

x0, x1, x2 = fluid.layers.split(input, num_or_sections=3, dim=2)
# x0.shape [-1, 3, 3, 5]
# x1.shape [-1, 3, 3, 5]
# x2.shape [-1, 3, 3, 5]

x0, x1, x2 = fluid.layers.split(input, num_or_sections=[2, 3, 4], dim=2)
# x0.shape [-1, 3, 2, 5]
# x1.shape [-1, 3, 3, 5]
# x2.shape [-1, 3, 4, 5]
square_error_cost¶
-
paddle.fluid.layers.
square_error_cost
(input, label)[source] Square error cost layer
This layer accepts input predictions and target label and returns the squared error cost.
For predictions, \(X\), and target labels, \(Y\), the equation is:
\[Out = (X - Y)^2\]In the above equation:
\(X\): Input predictions, a tensor.
\(Y\): Input labels, a tensor.
\(Out\): Output value, same shape with \(X\).
- Parameters
input (Variable) – Input tensor, has predictions.
label (Variable) – Label tensor, has target labels.
- Returns
The tensor variable storing the element-wise squared error difference of input and label.
- Return type
Variable
Examples
import paddle.fluid as fluid
y = fluid.layers.data(name='y', shape=[1], dtype='float32')
y_predict = fluid.layers.data(name='y_predict', shape=[1], dtype='float32')
cost = fluid.layers.square_error_cost(input=y_predict, label=y)
squeeze¶
-
paddle.fluid.layers.
squeeze
(input, axes, name=None)[source] Remove single-dimensional entries from the shape of a tensor. Takes a parameter axes with a list of axes to squeeze. If axes is not provided, all the single dimensions will be removed from the shape. If an axis is selected with shape entry not equal to one, an error is raised.
For example:
Case 1:
    Given X.shape = (1, 3, 1, 5) and axes = [0], we get Out.shape = (3, 1, 5)
Case 2:
    Given X.shape = (1, 3, 1, 5) and axes = [], we get Out.shape = (3, 5)
- Parameters
input (Variable) – The input variable to be squeezed.
axes (list) – List of integers, indicating the dimensions to be squeezed.
name (str|None) – Name for this layer.
- Returns
Output squeezed variable.
- Return type
Variable
Examples
import paddle.fluid as fluid
import paddle.fluid.layers as layers
x = layers.data(name='x', shape=[5, 1, 10])
y = layers.squeeze(input=x, axes=[1])
stack¶
-
paddle.fluid.layers.
stack
(x, axis=0)[source] Stack Layer
This layer stacks all of the input x along axis.
Input x can be a single variable, a list of variables, or a tuple of variables. If x is a list or tuple, the shapes of all these variables must be the same. Supposing the shape of each input is \([d_0, d_1, ..., d_{n-1}]\), the shape of the output variable would be \([d_0, d_1, ..., d_{axis}=len(x), ..., d_{n-1}]\). If axis < 0, it would be replaced with axis+rank(x[0])+1. If axis is None, it would be replaced with 0.
For Example:
Case 1:
    Input:
        x[0].data = [ [1.0, 2.0] ]
        x[0].dims = [1, 2]
        x[1].data = [ [3.0, 4.0] ]
        x[1].dims = [1, 2]
        x[2].data = [ [5.0, 6.0] ]
        x[2].dims = [1, 2]
    Attrs:
        axis = 0
    Output:
        Out.data = [ [ [1.0, 2.0] ],
                     [ [3.0, 4.0] ],
                     [ [5.0, 6.0] ] ]
        Out.dims = [3, 1, 2]
Case 2:
    Given:
        x[0].data = [ [1.0, 2.0] ]
        x[0].dims = [1, 2]
        x[1].data = [ [3.0, 4.0] ]
        x[1].dims = [1, 2]
        x[2].data = [ [5.0, 6.0] ]
        x[2].dims = [1, 2]
    Attrs:
        axis = 1 or axis = -2
    Output:
        Out.data = [ [ [1.0, 2.0],
                       [3.0, 4.0],
                       [5.0, 6.0] ] ]
        Out.dims = [1, 3, 2]
- Parameters
x (Variable|list(Variable)|tuple(Variable)) – Input variables.
axis (int|None) – The axis along which all inputs are stacked.
- Returns
The stacked variable.
- Return type
Variable
Examples
import paddle.fluid as fluid
import paddle.fluid.layers as layers
x1 = layers.data(name='x1', shape=[1, 2], dtype='int32')
x2 = layers.data(name='x2', shape=[1, 2], dtype='int32')
data = layers.stack([x1, x2])
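Both cases above agree with NumPy's np.stack, which can serve as a quick sanity check (illustration only, not the fluid op):
import numpy as np

x = [np.array([[1.0, 2.0]]),
     np.array([[3.0, 4.0]]),
     np.array([[5.0, 6.0]])]        # three inputs, each with dims [1, 2]

out0 = np.stack(x, axis=0)          # Case 1: shape (3, 1, 2)
out1 = np.stack(x, axis=1)          # Case 2: shape (1, 3, 2), same as axis=-2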
stanh¶
-
paddle.fluid.layers.
stanh
(x, scale_a=0.6666666666666666, scale_b=1.7159, name=None)[source] STanh Activation Operator.
$$out = b * \frac{e^{a * x} - e^{-a * x}}{e^{a * x} + e^{-a * x}}$$
- Parameters
x (Variable) – Input of STanh operator
scale_a (FLOAT|2.0 / 3.0) – The scale parameter of a for the input
scale_b (FLOAT|1.7159) – The scale parameter of b for the input
name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
- Returns
Output of STanh operator
- Return type
output(Variable)
Examples
import paddle.fluid as fluid
x = fluid.layers.data(name="x", shape=[3,10,32,32], dtype="float32")
y = fluid.layers.stanh(x, scale_a=0.67, scale_b=1.72)
sum¶
-
paddle.fluid.layers.
sum
(x)[source] Sum operator.
This operator sums the input tensors. All the inputs can carry the LoD (Level of Details) information. However, the output only shares the LoD information with the first input.
- Parameters
x (Variable) – (vector<Tensor>) The input tensors of sum operator
- Returns
(Tensor) The output tensor of sum operator
- Return type
out (Variable)
Examples
import paddle.fluid as fluid
import paddle.fluid.layers as layers
input0 = layers.data(name="input0", shape=[13, 11], dtype='float32')
input1 = layers.data(name="input1", shape=[13, 11], dtype='float32')
out = layers.sum([input0, input1])
swish¶
-
paddle.fluid.layers.
swish
(x, beta=1.0, name=None)[source] Swish Activation Operator.
$$out = \frac{x}{1 + e^{- beta x}}$$
- Parameters
x (Variable) – Input of Swish operator
beta (FLOAT|1.0) – Constant beta of swish operator
name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
- Returns
Output of Swish operator
- Return type
output(Variable)
Examples
import paddle.fluid as fluid
x = fluid.layers.data(name="x", shape=[3,10,32,32], dtype="float32")
y = fluid.layers.swish(x, beta=2.0)
teacher_student_sigmoid_loss¶
-
paddle.fluid.layers.
teacher_student_sigmoid_loss
(input, label, soft_max_up_bound=15.0, soft_max_lower_bound=-15.0)[source] Teacher Student Log Loss Layer
This layer accepts input predictions and target label and returns the teacher_student loss.
\[loss = max(x, 0) - x * z + log(1 + exp(-abs(x))) + max(x, 0) - x * z' + log(1 + exp(-abs(x)))\]- Parameters
input (Variable|list) – a 2-D tensor with shape [N x 1], where N is the batch size. This input is a probability computed by the previous operator.
label (Variable|list) – the ground truth which is a 2-D tensor with shape [N x 1], where N is the batch size.
soft_max_up_bound (float) – if input > soft_max_up_bound, the input will be bound to this value
soft_max_lower_bound (float) – if input < soft_max_lower_bound, the input will be bound to this value
- Returns
A 2-D tensor with shape [N x 1], the teacher_student_sigmoid_loss.
- Return type
Variable
Examples
import paddle.fluid as fluid

batch_size = 64
label = fluid.layers.data(
    name="label", shape=[batch_size, 1], dtype="int64", append_batch_size=False)
similarity = fluid.layers.data(
    name="similarity", shape=[batch_size, 1], dtype="float32", append_batch_size=False)
cost = fluid.layers.teacher_student_sigmoid_loss(input=similarity, label=label)
temporal_shift¶
-
paddle.fluid.layers.
temporal_shift
(x, seg_num, shift_ratio=0.25, name=None)[source] Temporal Shift Operator
This operator calculates the temporal shifting features for Input(X).
Input(X) should be in the shape [N*T, C, H, W], where N is the batch size, T is the temporal segment number specified by seg_num, C is the channel number, and H and W are the height and width of the features.
Temporal shifting is calculated as follows:
Step 1: Reshape Input(X) to [N, T, C, H, W].
Step 2: Pad the reshaped result with zeros in the 2nd (T) dimension, with a padding width of 1 on each side; the padded result will be in the shape [N, T+2, C, H, W].
Step 3: Assume shift_ratio is \(1/4\); slice the padded result as follows:
$$ slice1 = x[:, :T, :C/4, :, :] $$ $$ slice2 = x[:, 2:T+2, C/4:C/2, :, :] $$ $$ slice3 = x[:, 1:T+1, C/2:, :, :] $$
Step 4: Concatenate three slices along the 3rd(C) dimension and reshape result to [N*T, C, H, W].
For details of temporal shifting, please refer to paper: Temporal Shift Module .
- Parameters
x (Variable) – The input tensor of the temporal shift operator. This is a 4-D tensor with shape [N*T, C, H, W], where N is the batch size, T is the temporal segment number, C is the channel number, H is the height of the features and W is the width of the features
seg_num (int) – The temporal segment number, this should be a positive integer
shift_ratio (float) – The shift ratio of the channels: the first shift_ratio portion of the channels will be shifted by -1 along the temporal dimension, and the second shift_ratio portion will be shifted by +1 along the temporal dimension. Default 0.25.
name (str, default None) – The name of this layer.
- Returns
The temporal shifting result is a tensor variable with the same shape and same type as the input.
- Return type
out(Variable)
- Raises
TypeError
– seg_num must be int type.
Examples
import paddle.fluid as fluid
input = fluid.layers.data(name='input', shape=[4,2,2], dtype='float32')
out = fluid.layers.temporal_shift(x=input, seg_num=2, shift_ratio=0.2)
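The four steps above translate almost literally into NumPy; this sketch (illustration only, the hypothetical temporal_shift_np is not the fluid kernel) follows them one by one:
import numpy as np

def temporal_shift_np(x, seg_num, shift_ratio=0.25):
    nt, c, h, w = x.shape
    n, t = nt // seg_num, seg_num
    c1, c2 = int(c * shift_ratio), int(c * 2 * shift_ratio)
    x = x.reshape(n, t, c, h, w)                      # Step 1
    pad = np.pad(x, ((0, 0), (1, 1), (0, 0), (0, 0), (0, 0)),
                 mode='constant')                     # Step 2: [N, T+2, C, H, W]
    slice1 = pad[:, 0:t, :c1]                         # Step 3: slice the padded result
    slice2 = pad[:, 2:t + 2, c1:c2]
    slice3 = pad[:, 1:t + 1, c2:]
    out = np.concatenate([slice1, slice2, slice3], axis=2)   # Step 4
    return out.reshape(nt, c, h, w)

out = temporal_shift_np(np.random.rand(8, 4, 2, 2), seg_num=2)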
topk¶
-
paddle.fluid.layers.
topk
(input, k, name=None)[source] This operator is used to find values and indices of the k largest entries for the last dimension.
If the input is a vector (1-D Tensor), finds the k largest entries in the vector and outputs their values and indices as vectors. Thus values[j] is the j-th largest entry in input, and its index is indices[j].
If the input is a Tensor with higher rank, this operator computes the top k entries along the last dimension.
For example:
If:
    input = [[5, 4, 2, 3],
             [9, 7, 10, 25],
             [6, 2, 10, 1]]
    k = 2
Then:
    The first output:
    values = [[5, 4],
              [10, 25],
              [6, 10]]
    The second output:
    indices = [[0, 1],
               [2, 3],
               [0, 2]]
- Parameters
input (Variable) – The input variable which can be a vector or Tensor with higher rank.
k (int | Variable) – The number of top elements to look for along the last dimension of input.
name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically. Default: None
- Returns
A tuple with two elements. Each element is a Variable. The first one is k largest elements along each last dimensional slice. The second one is indices of values within the last dimension of input.
- Return type
Tuple[Variable]
- Raises
ValueError
– If k < 1 or k is not less than the last dimension of input
Examples
import paddle.fluid as fluid
import paddle.fluid.layers as layers
input = layers.data(name="input", shape=[13, 11], dtype='float32')
top5_values, top5_indices = layers.topk(input, k=5)
transpose¶
-
paddle.fluid.layers.
transpose
(x, perm, name=None)[source] Permute the dimensions of input according to perm.
The i-th dimension of the returned tensor will correspond to the perm[i]-th dimension of input.
- Parameters
x (Variable) – The input Tensor.
perm (list) – A permutation of the dimensions of input.
name (str) – The name of this layer. It is optional.
- Returns
A transposed Tensor.
- Return type
Variable
Examples
import paddle.fluid as fluid
# use append_batch_size=False to avoid prepending an extra batch size in shape
x = fluid.layers.data(name='x', shape=[5, 10, 15],
                      dtype='float32', append_batch_size=False)
x_transposed = fluid.layers.transpose(x, perm=[1, 0, 2])
tree_conv¶
-
paddle.fluid.layers.
tree_conv
(nodes_vector, edge_set, output_size, num_filters=1, max_depth=2, act='tanh', param_attr=None, bias_attr=None, name=None)[source] Tree-Based Convolution Operator
Tree-Based Convolution is a kind of convolution based on tree structure. It is part of the Tree-Based Convolution Neural Network (TBCNN), which is used to classify tree structures such as Abstract Syntax Trees. Tree-Based Convolution proposed a data structure called continuous binary tree, which regards a multiway tree as a binary tree. The paper of the Tree-Based Convolution Operator is here: https://arxiv.org/abs/1409.5718v1
- Parameters
nodes_vector (Variable) – (Tensor) The feature vector of every node on the tree. The shape of the feature vector must be [max_tree_node_size, feature_size]
edge_set (Variable) – (Tensor) The Edges of Tree. The edge must be directional. The shape of the edge set must be [max_tree_node_size, 2]
output_size (int) – output feature width
num_filters (int) – number of filters, Default 1
max_depth (int) – max depth of filters, Default 2
act (str) – activation function, Default tanh
param_attr (ParamAttr) – the parameter attribute for the filters, Default None
bias_attr (ParamAttr) – the parameter attribute for the bias of this layer, Default None
name (str) – a name of this layer(optional). If set None, the layer will be named automatically, Default None
- Returns
(Tensor) The feature vector of subtrees. The shape of the output tensor is [max_tree_node_size, output_size, num_filters]. The output tensor could be a new feature vector for next tree convolution layers
- Return type
out(Variable)
Examples
import paddle.fluid as fluid

# 10 for max_node_size of dataset, 5 for vector width
nodes_vector = fluid.layers.data(name='vectors', shape=[10, 5], dtype='float32')
# 10 for max_node_size of dataset, 2 for every edge has two nodes
# edges must be directional
edge_set = fluid.layers.data(name='edge_set', shape=[10, 2], dtype='float32')
# the shape of output will be [10, 6, 1],
# 10 for max_node_size of dataset, 6 for output size, 1 for 1 filter
out_vector = fluid.layers.tree_conv(nodes_vector, edge_set, 6, 1, 2)
# After reshape, the output tensor can be nodes_vector for the next tree convolution
out_vector = fluid.layers.reshape(out_vector, shape=[-1, 10, 6])
out_vector_2 = fluid.layers.tree_conv(out_vector, edge_set, 3, 4, 2)
# the output tensor can also be pooled (the pooling in the paper is called global pooling)
pooled = fluid.layers.reduce_max(out_vector, dim=2)  # global pooling
uniform_random_batch_size_like¶
-
paddle.fluid.layers.
uniform_random_batch_size_like
(input, shape, dtype='float32', input_dim_idx=0, output_dim_idx=0, min=-1.0, max=1.0, seed=0)[source] UniformRandomBatchSizeLike operator.
This operator initializes a tensor with the same batch_size as the Input tensor with random values sampled from a uniform distribution.
- Parameters
input (Variable) – Tensor whose input_dim_idx’th dimension specifies the batch_size
shape (tuple|list) – The shape of the output
input_dim_idx (Int) – default 0. The index of input’s batch size dimension
output_dim_idx (Int) – default 0. The index of output’s batch size dimension
min (Float) – (float, default -1.0) Minimum value of uniform random
max (Float) – (float, default 1.0) Maximum value of uniform random
seed (Int) – (int, default 0) Random seed used for generating samples. 0 means use a seed generated by the system. Note that if seed is not 0, this operator will always generate the same random numbers every time
dtype (np.dtype|core.VarDesc.VarType|str) – The data type: float32, float16, int, etc.
- Returns
A Tensor of the specified shape filled with values sampled from a uniform distribution
- Return type
out (Variable)
Examples
import paddle.fluid as fluid
import paddle.fluid.layers as layers
input = layers.data(name="input", shape=[13, 11], dtype='float32')
out = layers.uniform_random_batch_size_like(input, [-1, 11])
unsqueeze¶
-
paddle.fluid.layers.
unsqueeze
(input, axes, name=None)[source] Insert single-dimensional entries to the shape of a tensor. Takes one required argument axes, a list of dimensions that will be inserted. Dimension indices in axes are as seen in the output tensor.
For example:
Given a tensor with shape [3, 4, 5], the unsqueezed tensor with axes=[0, 4] has shape [1, 3, 4, 5, 1].
- Parameters
input (Variable) – The input variable to be unsqueezed.
axes (list) – List of integers, indicating the dimensions to be inserted.
name (str|None) – Name for this layer.
- Returns
Output unsqueezed variable.
- Return type
Variable
Examples
import paddle.fluid as fluid
x = fluid.layers.data(name='x', shape=[5, 10])
y = fluid.layers.unsqueeze(input=x, axes=[1])
unstack¶
-
paddle.fluid.layers.
unstack
(x, axis=0, num=None)[source] UnStack Layer
This layer unstacks input x into several tensors along axis.
If axis < 0, it would be replaced with axis+rank(x). If num is None, it would be inferred from x.shape[axis], and if x.shape[axis] <= 0 or is unknown, ValueError is raised.
- Parameters
x (Variable) – Input variable.
axis (int) – The axis along which the input is unstacked.
num (int|None) – The number of output variables.
- Returns
The unstacked variables.
- Return type
list(Variable)
Examples
import paddle.fluid as fluid
x = fluid.layers.data(name='x', shape=[5, 10], dtype='float32')
y = fluid.layers.unstack(x, axis=1)
warpctc¶
-
paddle.fluid.layers.
warpctc
(input, label, blank=0, norm_by_times=False, use_cudnn=False)[source] An operator integrating the open source Warp-CTC library (https://github.com/baidu-research/warp-ctc) to compute Connectionist Temporal Classification (CTC) loss. It can be aliased as softmax with CTC, since a native softmax activation is integrated into the Warp-CTC library to normalize the values for each row of the input tensor.
- Parameters
input (Variable) – The unscaled probabilities of variable-length sequences, which is a 2-D Tensor with LoD information. Its shape is [Lp, num_classes + 1], where Lp is the sum of all input sequences' lengths and num_classes is the true number of classes (not including the blank label).
label (Variable) – The ground truth of variable-length sequences, which is a 2-D Tensor with LoD information. It is of the shape [Lg, 1], where Lg is the sum of all labels' lengths.
blank (int, default 0) – The blank label index of Connectionist Temporal Classification (CTC) loss, which is in the half-open interval [0, num_classes + 1).
norm_by_times (bool, default false) – Whether to normalize the gradients by the number of time steps, which is also the sequence's length. There is no need to normalize the gradients if the warpctc layer is followed by a mean_op.
use_cudnn (bool, default false) – Whether to use cudnn.
- Returns
The Connectionist Temporal Classification (CTC) loss, which is a 2-D Tensor of the shape [batch_size, 1].
- Return type
Variable
Examples
import paddle.fluid as fluid
label = fluid.layers.data(name='label', shape=[11, 8],
                          dtype='float32', lod_level=1)
predict = fluid.layers.data(name='predict', shape=[11, 1],
                            dtype='float32')
cost = fluid.layers.warpctc(input=predict, label=label)
where¶
-
paddle.fluid.layers.
where
(condition)[source] Return an int64 tensor with rank 2, specifying the coordinates of the true elements in condition.
The output's first dimension is the number of true elements, and the second dimension is the rank (number of dimensions) of condition. If there are zero true elements, an empty tensor will be generated.
- Parameters
condition (Variable) – A bool tensor with rank at least 1.
- Returns
The tensor variable storing a 2-D tensor.
- Return type
Variable
Examples
import paddle.fluid as fluid
import paddle.fluid.layers as layers
import numpy as np

# condition is a tensor [True, False, True]
out = fluid.layers.where(condition)  # [[0], [2]]

# condition is a tensor [[True, False], [False, True]]
out = fluid.layers.where(condition)  # [[0, 0], [1, 1]]

# condition is a tensor [False, False, False]
out = fluid.layers.where(condition)  # [[]]
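The behaviour mirrors NumPy's np.argwhere on a boolean array, which is a convenient way to reason about the expected output (analogy only, not the fluid op itself):
import numpy as np

np.argwhere(np.array([True, False, True]))             # [[0], [2]]
np.argwhere(np.array([[True, False], [False, True]]))  # [[0, 0], [1, 1]]
np.argwhere(np.array([False, False, False]))           # empty result with shape (0, 1)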