control_flow¶
array_length¶
paddle.fluid.layers.array_length(array)[source]
Get the length of the input LoDTensorArray.
This function computes the length of the input LoDTensorArray.
Related APIs: array_read, array_write, While.
- Parameters
array (LoDTensorArray) – The input array whose length will be computed.
- Returns
The length of the input LoDTensorArray.
- Return type
Variable
Examples
import paddle.fluid as fluid

tmp = fluid.layers.zeros(shape=[10], dtype='int32')
i = fluid.layers.fill_constant(shape=[1], dtype='int64', value=10)
arr = fluid.layers.array_write(tmp, i=i)
arr_len = fluid.layers.array_length(arr)
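To observe the computed length, the program can be run with an executor. A minimal run sketch, assuming the standard Executor feed/fetch workflow (writing at index 10 grows the array to 11 entries):

import paddle.fluid as fluid

tmp = fluid.layers.zeros(shape=[10], dtype='int32')
i = fluid.layers.fill_constant(shape=[1], dtype='int64', value=10)
arr = fluid.layers.array_write(tmp, i=i)
arr_len = fluid.layers.array_length(arr)

exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())
# Fetch the length tensor computed by array_length.
length, = exe.run(fetch_list=[arr_len])
print(length)  # expected: [11]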
array_read¶
paddle.fluid.layers.array_read(array, i)[source]
This function reads the data at index i from the input LoDTensorArray.
Given:
    array = [0.6, 0.1, 0.3, 0.1]
And:
    i = 2
Then:
    output = 0.3
- Parameters
array (Variable|list) – The input LoDTensorArray that stores the data to be read.
i (Variable|list) – The index of the data to be read from the input array.
- Returns
The tensor variable holding the data read from the array.
- Return type
Variable
Examples
import paddle.fluid as fluid

array = fluid.layers.create_array(dtype='float32')
i = fluid.layers.fill_constant(shape=[1], dtype='int64', value=10)
item = fluid.layers.array_read(array, i)
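A round-trip sketch, assuming the standard Executor workflow (the variable names are illustrative), that writes a tensor at index 0 and reads it back:

import paddle.fluid as fluid

arr = fluid.layers.create_array(dtype='float32')
zero = fluid.layers.fill_constant(shape=[1], dtype='int64', value=0)
x = fluid.layers.fill_constant(shape=[3], dtype='float32', value=1.0)
# Write x at index 0, then read the same slot back.
arr = fluid.layers.array_write(x, i=zero, array=arr)
item = fluid.layers.array_read(arr, i=zero)

exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())
result, = exe.run(fetch_list=[item])
print(result)  # expected: [1. 1. 1.]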
array_write¶
paddle.fluid.layers.array_write(x, i, array=None)[source]
This function writes the given input variable to the position, indicated by the array index i, of an output LoDTensorArray. If the output LoDTensorArray is not given (None), a new one will be created and returned.
- Parameters
x (Variable|list) – The input tensor whose data will be written into the array.
i (Variable|list) – The index of the output LoDTensorArray, pointing to the position at which the input tensor will be written.
array (Variable|list) – The output LoDTensorArray to which the input tensor will be written. If this parameter is None, a new LoDTensorArray will be created and returned.
- Returns
The output LoDTensorArray into which the input tensor is written.
- Return type
Variable
Examples
import paddle.fluid as fluid

tmp = fluid.layers.zeros(shape=[10], dtype='int32')
i = fluid.layers.fill_constant(shape=[1], dtype='int64', value=10)
arr = fluid.layers.array_write(tmp, i=i)
create_array¶
paddle.fluid.layers.create_array(dtype)[source]
Create a LoDTensorArray.
This function creates a LoDTensorArray. It is mainly used to implement RNNs together with array_write, array_read and While.
- Parameters
dtype (str) – The data type of the elements in the LoDTensorArray, e.g. 'float32'.
- Returns
The LoDTensorArray variable storing elements of the given data type.
- Return type
Variable
Examples
import paddle.fluid as fluid

data = fluid.layers.create_array(dtype='float32')
DynamicRNN¶
class paddle.fluid.layers.DynamicRNN(name=None)[source]
The dynamic RNN can process a batch of sequence data, where the length of each sample sequence can be different. This API automatically processes them in a batch.
The LoD of the input must be set. Please refer to lod_tensor.
The dynamic RNN will unfold the sequence into time steps. Users need to define how to process each time step inside the with block.
The memory is used to stage data across time steps. The initial value of memory can be zero or another variable.
The dynamic RNN can mark multiple variables as its outputs. Use drnn() to get the output sequence.
Notes
Currently, setting is_sparse to True for any layer within DynamicRNN is not supported.
Examples
import paddle.fluid as fluid

sentence = fluid.layers.data(name='sentence', shape=[1], dtype='int64', lod_level=1)
embedding = fluid.layers.embedding(input=sentence, size=[65536, 32], is_sparse=True)

drnn = fluid.layers.DynamicRNN()
with drnn.block():
    word = drnn.step_input(embedding)
    prev = drnn.memory(shape=[200])
    hidden = fluid.layers.fc(input=[word, prev], size=200, act='relu')
    drnn.update_memory(prev, hidden)  # set prev to hidden
    drnn.output(hidden)

# Get the last time step of the RNN. It is the encoding result.
rnn_output = drnn()
last = fluid.layers.sequence_last_step(rnn_output)
step_input(x, level=0)
Mark a sequence as a dynamic RNN input.
- Parameters
x (Variable) – The input sequence which should have lod information.
level (int) – The level of lod used to split steps. Default: 0.
- Returns
The current timestep in the input sequence.
static_input(x)
Mark a variable as an RNN input. The input will not be scattered into time steps. It is optional.
- Parameters
x (Variable) – The input variable.
- Returns
The input variable that can be accessed within the RNN.
Examples
import paddle.fluid as fluid

sentence = fluid.layers.data(name='sentence', dtype='float32', shape=[32], lod_level=1)
encoder_proj = fluid.layers.data(name='encoder_proj', dtype='float32', shape=[32], lod_level=1)
decoder_boot = fluid.layers.data(name='boot', dtype='float32', shape=[10], lod_level=1)

drnn = fluid.layers.DynamicRNN()
with drnn.block():
    current_word = drnn.step_input(sentence)
    encoder_word = drnn.static_input(encoder_proj)
    hidden_mem = drnn.memory(init=decoder_boot, need_reorder=True)
    fc_1 = fluid.layers.fc(input=encoder_word, size=30, bias_attr=False)
    fc_2 = fluid.layers.fc(input=current_word, size=30, bias_attr=False)
    decoder_inputs = fc_1 + fc_2
    h, _, _ = fluid.layers.gru_unit(input=decoder_inputs, hidden=hidden_mem, size=30)
    drnn.update_memory(hidden_mem, h)
    out = fluid.layers.fc(input=h, size=10, bias_attr=True, act='softmax')
    drnn.output(out)

rnn_output = drnn()
block()
The block in which users define the operators of the RNN.
memory(init=None, shape=None, value=0.0, need_reorder=False, dtype='float32')
Create a memory variable for the dynamic RNN.
If init is not None, the memory will be initialized by this variable. The need_reorder flag is used to reorder the memory as the input variable; it should be set to True when the initialized memory depends on the input sample.
Examples
import paddle.fluid as fluid

sentence = fluid.layers.data(name='sentence', shape=[32], dtype='float32', lod_level=1)
boot_memory = fluid.layers.data(name='boot', shape=[10], dtype='float32', lod_level=1)

drnn = fluid.layers.DynamicRNN()
with drnn.block():
    word = drnn.step_input(sentence)
    memory = drnn.memory(init=boot_memory, need_reorder=True)
    hidden = fluid.layers.fc(input=[word, memory], size=10, act='tanh')
    drnn.update_memory(ex_mem=memory, new_mem=hidden)
    drnn.output(hidden)

rnn_output = drnn()
Otherwise, if shape, value and dtype are set, the memory will be initialized with this value.
Examples
import paddle.fluid as fluid

sentence = fluid.layers.data(name='sentence', dtype='float32', shape=[32], lod_level=1)

drnn = fluid.layers.DynamicRNN()
with drnn.block():
    word = drnn.step_input(sentence)
    memory = drnn.memory(shape=[10], dtype='float32', value=0)
    hidden = fluid.layers.fc(input=[word, memory], size=10, act='tanh')
    drnn.update_memory(ex_mem=memory, new_mem=hidden)
    drnn.output(hidden)

rnn_output = drnn()
- Parameters
init (Variable|None) – The initialized variable.
shape (list|tuple) – The memory shape. The shape does not contain batch_size.
value (float) – The initial value.
need_reorder (bool) – True if the initialized memory depends on the input sample.
dtype (str|numpy.dtype) – The data type of the initialized memory.
- Returns
The memory variable.
update_memory(ex_mem, new_mem)
Update the memory from ex_mem to new_mem. Note that the shape and data type of ex_mem and new_mem must be the same.
- Parameters
ex_mem (Variable) – The memory variable.
new_mem (Variable) – The plain variable generated in the RNN block.
- Returns
None
output(*outputs)
Mark the RNN output variables.
- Parameters
outputs – The output variables.
- Returns
None
equal¶
paddle.fluid.layers.equal(x, y, cond=None)[source]
This layer returns the truth value of \(x == y\) elementwise.
- Parameters
x (Variable) – First operand of equal
y (Variable) – Second operand of equal
cond (Variable|None) – Optional output variable to store the result of equal
- Returns
The tensor variable storing the output of equal.
- Return type
Variable
Examples
import paddle.fluid as fluid

label = fluid.layers.data(name="label", shape=[3, 10, 32, 32], dtype="float32")
limit = fluid.layers.data(name="limit", shape=[3, 10, 32, 32], dtype="float32")
out = fluid.layers.equal(x=label, y=limit)
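A minimal run sketch, assuming the standard Executor workflow, with constant operands (the values are illustrative) to show the elementwise bool output:

import paddle.fluid as fluid

a = fluid.layers.fill_constant(shape=[2], dtype='float32', value=1.0)
b = fluid.layers.fill_constant(shape=[2], dtype='float32', value=1.0)
eq = fluid.layers.equal(x=a, y=b)

exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())
result, = exe.run(fetch_list=[eq])
print(result)  # expected: [ True  True]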
greater_equal¶
paddle.fluid.layers.greater_equal(x, y, cond=None)[source]
This layer returns the truth value of \(x >= y\) elementwise, which is equivalent to the overloaded operator >=.
- Parameters
x (Variable) – First operand of greater_equal
y (Variable) – Second operand of greater_equal
cond (Variable|None) – Optional output variable to store the result of greater_equal
- Returns
The tensor variable storing the output of greater_equal.
- Return type
Variable
Examples
# Assumes label and limit are defined as in the equal example above.
out = fluid.layers.greater_equal(x=label, y=limit)
greater_than¶
paddle.fluid.layers.greater_than(x, y, cond=None)[source]
This layer returns the truth value of \(x > y\) elementwise, which is equivalent to the overloaded operator >.
- Parameters
x (Variable) – First operand of greater_than
y (Variable) – Second operand of greater_than
cond (Variable|None) – Optional output variable to store the result of greater_than
- Returns
The tensor variable storing the output of greater_than.
- Return type
Variable
Examples
# Assumes label and limit are defined as in the equal example above.
out = fluid.layers.greater_than(x=label, y=limit)
IfElse¶
class paddle.fluid.layers.IfElse(cond, name=None)[source]
If-else control flow.
- Parameters
cond (Variable) – The boolean condition used to select the branch.
name (str, default None) – The name of this layer.
Examples
import paddle.fluid as fluid

image = fluid.layers.data(name="X", shape=[2, 5, 5], dtype='float32')
label = fluid.layers.data(name='label', shape=[1], dtype='int64')
limit = fluid.layers.fill_constant_batch_size_like(
    input=label, dtype='int64', shape=[1], value=5.0)
cond = fluid.layers.less_than(x=label, y=limit)

ie = fluid.layers.IfElse(cond)
with ie.true_block():
    true_image = ie.input(image)
    hidden = fluid.layers.fc(input=true_image, size=100, act='tanh')
    prob = fluid.layers.fc(input=hidden, size=10, act='softmax')
    ie.output(prob)

with ie.false_block():
    false_image = ie.input(image)
    hidden = fluid.layers.fc(input=false_image, size=200, act='tanh')
    prob = fluid.layers.fc(input=hidden, size=10, act='softmax')
    ie.output(prob)

prob = ie()
increment¶
paddle.fluid.layers.increment(x, value=1.0, in_place=True)[source]
This function increments the value of the input \(x\) by the given amount \(value\). By default, the operation is performed in place. Notice that the number of elements in \(x\) must be equal to 1.
- Parameters
x (Variable|list) – The tensor that has the input values.
value (float) – The amount by which the values should be incremented.
in_place (bool) – If the increment should be performed in-place.
- Returns
The elementwise-incremented object.
- Return type
Variable
Examples
import paddle.fluid as fluid

data = fluid.layers.data(name='data', shape=[1], dtype='float32', append_batch_size=False)
data = fluid.layers.increment(x=data, value=3.0, in_place=True)
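Continuing the example above, a minimal run sketch, assuming the standard Executor feed/fetch workflow, to observe the incremented value:

import numpy as np

exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())
# Feed 1.0 into 'data'; the increment op adds 3.0 in place.
result, = exe.run(feed={'data': np.array([1.0], dtype='float32')},
                  fetch_list=[data])
print(result)  # expected: [4.]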
is_empty¶
paddle.fluid.layers.is_empty(x, cond=None)[source]
Test whether a Variable is empty.
- Parameters
x (Variable) – The Variable to be tested.
cond (Variable|None) – Output parameter that stores the test result of x. Default: None.
- Returns
A bool scalar. True if ‘x’ is an empty Variable.
- Return type
Variable
- Raises
TypeError – If cond is not a Variable, or cond's dtype is not bool.
Examples
import paddle.fluid as fluid

input = fluid.layers.data(name="input", shape=[4, 32, 32], dtype="float32")
res = fluid.layers.is_empty(x=input)
# or:
# fluid.layers.is_empty(x=input, cond=res)
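Continuing the example above, a run sketch assuming the standard Executor feed/fetch workflow; a fed, non-empty tensor should test as False:

import numpy as np

exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())
feed_data = np.random.rand(1, 4, 32, 32).astype('float32')
result, = exe.run(feed={'input': feed_data}, fetch_list=[res])
print(result)  # expected: [False]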
less_equal¶
paddle.fluid.layers.less_equal(x, y, cond=None)[source]
This layer returns the truth value of \(x <= y\) elementwise, which is equivalent to the overloaded operator <=.
- Parameters
x (Variable) – First operand of less_equal
y (Variable) – Second operand of less_equal
cond (Variable|None) – Optional output variable to store the result of less_equal
- Returns
The tensor variable storing the output of less_equal.
- Return type
Variable
Examples
# Assumes label and limit are defined as in the equal example above.
out = fluid.layers.less_equal(x=label, y=limit)
less_than¶
paddle.fluid.layers.less_than(x, y, force_cpu=None, cond=None)[source]
It operates element-wise on X and Y, and returns Out. Each of them is an N-dim tensor. X and Y can be of any numeric type. Each element of Out is calculated as \(Out = X < Y\).
- Parameters
x (Variable) – the left hand operand of less_than operator.
y (Variable) – the right hand operand of less_than operator.
force_cpu (bool) – Force the output variable to be placed in CPU memory. Otherwise, the output variable is placed on the running device. Default: True.
cond (Variable|None) – Optional output variable to store the result of less_than
- Returns
An N-dim bool tensor. Each element is \(Out = X < Y\).
Examples
import paddle.fluid as fluid

label = fluid.layers.data(name='y', shape=[1], dtype='int64')
limit = fluid.layers.fill_constant(shape=[1], dtype='int64', value=5)
cond = fluid.layers.less_than(x=label, y=limit)
not_equal¶
paddle.fluid.layers.not_equal(x, y, cond=None)[source]
This layer returns the truth value of \(x != y\) elementwise, which is equivalent to the overloaded operator !=.
- Parameters
x (Variable) – First operand of not_equal
y (Variable) – Second operand of not_equal
cond (Variable|None) – Optional output variable to store the result of not_equal
- Returns
The tensor variable storing the output of not_equal.
- Return type
Variable
Examples
# Assumes label and limit are defined as in the equal example above.
out = fluid.layers.not_equal(x=label, y=limit)
Print¶
paddle.fluid.layers.Print(input, first_n=-1, message=None, summarize=-1, print_tensor_name=True, print_tensor_type=True, print_tensor_shape=True, print_tensor_lod=True, print_phase='both')[source]
Print operator.
This creates a print op that will print when a tensor is accessed.
It wraps the tensor passed in so that whenever the tensor is accessed, the given message is printed, along with the current value of the tensor.
- Parameters
input (Variable) – A Tensor to print.
summarize (int) – Print this number of elements in the tensor; prints all elements if negative.
message (str) – A string message to print as a prefix.
first_n (int) – Only log first_n number of times.
print_tensor_name (bool) – Print the tensor name.
print_tensor_type (bool) – Print the tensor type.
print_tensor_shape (bool) – Print the tensor shape.
print_tensor_lod (bool) – Print the tensor lod.
print_phase (str) – Which phase to print in, one of 'forward', 'backward' and 'both'. If set to 'backward' or 'both', the gradients of the input tensor will be printed.
- Returns
Output tensor.
- Return type
Variable
Notes
The input and output are two different variables. In subsequent code, you should use the output variable rather than the input; otherwise, the print layer will not participate in the backward pass.
Examples
import paddle.fluid as fluid

input = fluid.layers.data(name="input", shape=[4, 32, 32], dtype="float32")
input = fluid.layers.Print(input, message="The content of input layer:")

# value = some_layer(...)
# Print(value, summarize=10,
#       message="The content of some_layer: ")
reorder_lod_tensor_by_rank¶
paddle.fluid.layers.reorder_lod_tensor_by_rank(x, rank_table)[source]
ReorderLoDTensorByRankTable operator.
Input(X) is a batch of sequences. Input(RankTable) stores new orders of the input sequence batch. The reorder_lod_tensor_by_rank operator reorders the Input(X) according to the information provided by Input(RankTable).
For example:
If the indices stored in Input(RankTable) are [3, 0, 2, 1], Input(X) will be reordered so that the fourth sequence in Input(X) becomes the first one, followed by the original first, third, and second ones.
That is: X = [Seq0, Seq1, Seq2, Seq3]; the indices in RankTable are [3, 0, 2, 1]; Out = [Seq3, Seq0, Seq2, Seq1] with new LoD information.
If the LoD information of Input(X) is empty, Input(X) is not sequence data. This is identical to a batch of sequences where each sequence has a fixed length of 1. In this case, the reorder_lod_tensor_by_rank operator reorders each slice of Input(X) along the first axis according to Input(RankTable).
That is: X = [Slice0, Slice1, Slice2, Slice3] and its LoD information is empty; the indices in RankTable are [3, 0, 2, 1]; Out = [Slice3, Slice0, Slice2, Slice1] with no LoD information appended.
NOTE: This operator sorts Input(X) according to a given LoDRankTable which does not need to be calculated according to Input(X). It can be calculated according to another different sequence, and then this operator sorts Input(X) according to the given LoDRankTable.
- Parameters
x (Variable) – (LoDTensor), the input lod tensor to be reordered according to Input(RankTable)
rank_table (Variable) – The LoDRankTable according to which Input(X) is reordered.
- Returns
(LoDTensor), the reordered lod tensor
- Return type
out(Variable)
Examples
import paddle.fluid as fluid

data_desc = (['input', [9], 0], ['ref', [5], 1])
data = fluid.layers.data(name=data_desc[0][0], shape=data_desc[0][1])
rank_data = fluid.layers.data(name=data_desc[1][0], shape=data_desc[1][1])
table = fluid.layers.control_flow.lod_rank_table(rank_data)
new_data = fluid.layers.reorder_lod_tensor_by_rank(
    x=data, rank_table=table)
StaticRNN¶
class paddle.fluid.layers.StaticRNN(name=None)[source]
StaticRNN class.
The StaticRNN can process a batch of sequence data. The length of each sample sequence must be equal. The StaticRNN has its own parameters, such as inputs, outputs and memories. Note that the first dimension of the inputs represents the sequence length, and all the sequence lengths of the inputs must be the same. The meaning of each axis of input and output is the same.
Examples
import paddle.fluid as fluid
import paddle.fluid.layers as layers

vocab_size, hidden_size = 10000, 200
x = layers.data(name="x", shape=[-1, 1, 1], dtype='int64')
x_emb = layers.embedding(
    input=x,
    size=[vocab_size, hidden_size],
    dtype='float32',
    is_sparse=False)
x_emb = layers.transpose(x_emb, perm=[1, 0, 2])

rnn = fluid.layers.StaticRNN()
with rnn.step():
    word = rnn.step_input(x_emb)
    prev = rnn.memory(shape=[-1, hidden_size], batch_ref=word)
    hidden = fluid.layers.fc(input=[word, prev], size=hidden_size, act='relu')
    rnn.update_memory(prev, hidden)  # set prev to hidden
    rnn.step_output(hidden)

result = rnn()
The StaticRNN will unfold the sequence into time steps. Users need to define how to process each time step inside the with step block.
The memory is used to stage data across time steps. The initial value of memory can be a variable filled with a constant value or a specified variable.
The StaticRNN can mark multiple variables as its outputs. Use rnn() to get the output sequence.
step()
The block in which users define the operators of the RNN.
memory(init=None, shape=None, batch_ref=None, init_value=0.0, init_batch_dim_idx=0, ref_batch_dim_idx=1)
Create a memory variable for the static RNN.
If init is not None, the memory will be initialized by this Variable. If init is None, shape and batch_ref must be set, and this function will create an initial Variable.
- Parameters
init (Variable|None) – The initial variable. If it is not set, shape and batch_ref must be provided. Default: None.
shape (list|tuple) – The shape of the boot memory. NOTE the shape does not contain batch_size. Default: None.
batch_ref (Variable|None) – The batch size reference Variable. Default: None.
init_value (float) – The initial value of the boot memory. Default: 0.0.
init_batch_dim_idx (int) – The batch_size axis of the init Variable. Default: 0.
ref_batch_dim_idx (int) – The batch_size axis of the batch_ref Variable. Default: 1.
- Returns
The memory variable.
Examples
import paddle.fluid as fluid
import paddle.fluid.layers as layers

vocab_size, hidden_size = 10000, 200
x = layers.data(name="x", shape=[-1, 1, 1], dtype='int64')
x_emb = layers.embedding(
    input=x,
    size=[vocab_size, hidden_size],
    dtype='float32',
    is_sparse=False)
x_emb = layers.transpose(x_emb, perm=[1, 0, 2])

rnn = fluid.layers.StaticRNN()
with rnn.step():
    word = rnn.step_input(x_emb)
    prev = rnn.memory(shape=[-1, hidden_size], batch_ref=word)
    hidden = fluid.layers.fc(input=[word, prev], size=hidden_size, act='relu')
    rnn.update_memory(prev, hidden)
step_input(x)
Mark a sequence as a StaticRNN input.
- Parameters
x (Variable) – The input sequence, the shape of x should be [seq_len, …].
- Returns
The current time step in the input sequence.
step_output(o)
Mark a sequence as a StaticRNN output.
- Parameters
o (Variable) – The output sequence.
- Returns
None.
output(*outputs)
Mark the StaticRNN output variables.
- Parameters
outputs – The output Variables.
- Returns
None
update_memory(mem, var)
Update the memory from mem to var. Note that the shape and data type of mem and var must be the same.
- Parameters
mem (Variable) – The memory variable.
var (Variable) – The plain variable generated in the RNN block.
- Returns
None
Switch¶
class paddle.fluid.layers.Switch(name=None)[source]
The Switch class works just like an if-elif-else chain. It can be used in a learning rate scheduler to modify the learning rate.
The Semantics:
A switch control-flow checks cases one-by-one.
The condition of each case is a boolean value, which is a scalar Variable.
It runs the first matched case, or the default case if no case matches and a default is provided.
Once a case matches, it runs the corresponding branch and only that branch.
Examples
import paddle.fluid as fluid

lr = fluid.layers.create_global_var(
    shape=[1], value=0.0, dtype='float32', persistable=True,
    name="learning_rate")
zero_var = fluid.layers.fill_constant(shape=[1], dtype='float32', value=0.0)
one_var = fluid.layers.fill_constant(shape=[1], dtype='float32', value=1.0)
two_var = fluid.layers.fill_constant(shape=[1], dtype='float32', value=2.0)
global_step = fluid.layers.autoincreased_step_counter(
    counter_name='@LR_DECAY_COUNTER@', begin=0, step=1)

with fluid.layers.control_flow.Switch() as switch:
    with switch.case(global_step == zero_var):
        fluid.layers.assign(input=one_var, output=lr)
    with switch.default():
        fluid.layers.assign(input=two_var, output=lr)
While¶
class paddle.fluid.layers.While(cond, is_test=False, name=None)[source]
While loop control flow.
- Parameters
cond (Variable) – A boolean scalar Variable serving as the loop condition.
is_test (bool) – A flag indicating whether execution is in test phase.
name (str) – The name of this layer.
Examples
import paddle.fluid as fluid

i = fluid.layers.fill_constant(shape=[1], dtype='int64', value=0)
d0 = fluid.layers.data("d0", shape=[10], dtype='float32')
data_array = fluid.layers.array_write(x=d0, i=i)
array_len = fluid.layers.fill_constant(shape=[1], dtype='int64', value=3)

cond = fluid.layers.less_than(x=i, y=array_len)
while_op = fluid.layers.While(cond=cond)
with while_op.block():
    d = fluid.layers.array_read(array=data_array, i=i)
    i = fluid.layers.increment(x=i, value=1, in_place=True)
    fluid.layers.less_than(x=i, y=array_len, cond=cond)
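Continuing the example above, a minimal run sketch, assuming the standard Executor feed/fetch workflow (the feed shape follows from d0 having an implicit batch dimension, and fetching the in-place counter after the loop is an assumption here):

import numpy as np

exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())
# Feed a batch of 2 samples into 'd0' and fetch the loop counter.
feed_d0 = np.random.rand(2, 10).astype('float32')
counter, = exe.run(feed={'d0': feed_d0}, fetch_list=[i])
print(counter)  # expected: [3], the loop body ran until i reached array_len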