Single-node training¶
Preparation¶
To perform single-node training in PaddlePaddle Fluid, you need to read Prepare Data and Set up Simple Model. After finishing Set up Simple Model, you will have two fluid.Program objects: a startup_program and a main_program. By default, you can use fluid.default_startup_program() and fluid.default_main_program() to get the global fluid.Program objects.
For example:
import paddle.fluid as fluid
image = fluid.layers.data(name="image", shape=[784])
label = fluid.layers.data(name="label", shape=[1])
hidden = fluid.layers.fc(input=image, size=100, act='relu')
prediction = fluid.layers.fc(input=hidden, size=10, act='softmax')
loss = fluid.layers.cross_entropy(input=prediction, label=label)
loss = fluid.layers.mean(loss)
sgd = fluid.optimizer.SGD(learning_rate=0.001)
sgd.minimize(loss)
# At this point, fluid.default_startup_program() and
# fluid.default_main_program() have been constructed.
Once the model is configured, fluid.default_startup_program() and fluid.default_main_program() are fully set up.
Initialize Parameters¶
Random Initialization of Parameters¶
After the model is configured, the parameter initialization operators are written into fluid.default_startup_program(). By running this program in fluid.Executor(), the random initialization of parameters is performed in the global scope, i.e. fluid.global_scope(). For example:
exe = fluid.Executor(fluid.CUDAPlace(0))
exe.run(program=fluid.default_startup_program())
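To verify that initialization succeeded, you can look up a parameter in the global scope. The sketch below assumes the model configured earlier; "fc_0.w_0" is the default name Fluid assigns to the first fc layer's weight, so substitute your actual parameter names if they differ:
import numpy
# "fc_0.w_0" is an assumed default parameter name; adjust as needed.
param = fluid.global_scope().find_var("fc_0.w_0").get_tensor()
print(numpy.array(param).shape)  # e.g. (784, 100) for the fc layer above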
Load Predefined Parameters¶
In neural network training, pre-trained parameters are often loaded so that training can continue from them. For how to load predefined parameters, please refer to Save, Load Models or Variables & Incremental Learning.
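As a minimal sketch, persistable variables previously saved with fluid.io.save_persistables can be reloaded before training resumes. Here "./saved_model" is a hypothetical directory, and exe is the Executor created above:
# "./saved_model" is a hypothetical path; point it at your checkpoint directory.
fluid.io.load_persistables(executor=exe,
                           dirname="./saved_model",
                           main_program=fluid.default_main_program())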
Single-card Training¶
Single-card training is performed by calling run() of fluid.Executor() to execute the training fluid.Program. At runtime, feed data with run(feed=...) and fetch persistable variables with run(fetch_list=...). For example:
import paddle.fluid as fluid
import numpy

train_program = fluid.Program()
startup_program = fluid.Program()
with fluid.program_guard(train_program, startup_program):
    data = fluid.layers.data(name='X', shape=[1], dtype='float32')
    hidden = fluid.layers.fc(input=data, size=10)
    loss = fluid.layers.mean(hidden)
    sgd = fluid.optimizer.SGD(learning_rate=0.001)
    sgd.minimize(loss)

use_cuda = True
place = fluid.CUDAPlace(0) if use_cuda else fluid.CPUPlace()
exe = fluid.Executor(place)

# Run the startup program once and only once.
# There is no need to optimize/compile the startup program.
startup_program.random_seed = 1
exe.run(startup_program)

# Run the main program directly without compiling it.
x = numpy.random.random(size=(10, 1)).astype('float32')
loss_data, = exe.run(train_program,
                     feed={"X": x},
                     fetch_list=[loss.name])

# Or compile it first:
# compiled_prog = fluid.compiler.CompiledProgram(train_program)
# loss_data, = exe.run(compiled_prog,
#                      feed={"X": x},
#                      fetch_list=[loss.name])
Notes:
- For the data types supported by feed, please refer to the article Transfer Train Data to Executor.
- The return value of Executor.run is the value of the variables in fetch_list=[...]. The fetched variables must be persistable. fetch_list accepts either a list of Variables or a list of variable names. Executor.run returns the list of fetch results.
- If the fetched data contain sequence information, you can set exe.run(return_numpy=False, ...) to get fluid.LoDTensor directly. You can access the information in the fluid.LoDTensor directly.
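As a minimal sketch (reusing exe, train_program, loss, and x from the example above), fetching the raw LoDTensor looks like this:
loss_tensor, = exe.run(train_program,
                       feed={"X": x},
                       fetch_list=[loss.name],
                       return_numpy=False)
print(loss_tensor.lod())         # sequence (LoD) information, if any
print(numpy.array(loss_tensor))  # convert to a numpy array when needed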
Multi-card Training¶
In multi-card training, you can use fluid.compiler.CompiledProgram to compile the fluid.Program, and then call with_data_parallel. For example:
import os

# NOTE: If you run the program on CPU, you need to specify
# CPU_NUM; otherwise fluid will use the number of logical
# cores as CPU_NUM. In that case, the batch size of the
# input must be greater than CPU_NUM, or the process will
# fail with an exception.
if not use_cuda:
    os.environ['CPU_NUM'] = str(2)

compiled_prog = fluid.compiler.CompiledProgram(
    train_program).with_data_parallel(
        loss_name=loss.name)
loss_data, = exe.run(compiled_prog,
                     feed={"X": x},
                     fetch_list=[loss.name])
Notes:
- CompiledProgram converts the input Program into a computational graph, and compiled_prog is a completely different object from the incoming train_program. At present, compiled_prog cannot be saved.
- Multi-card training can also be done with fluid.ParallelExecutor, but CompiledProgram is now recommended.
- If exe is initialized with CUDAPlace, the model runs on GPU. In GPU training mode, all visible graphics cards will be occupied; users can set CUDA_VISIBLE_DEVICES to change which cards are used.
- If exe is initialized with CPUPlace, the model runs on CPU. In this case, multiple threads are used to run the model, and the number of threads equals the number of logical cores by default; users can set CPU_NUM to change the number of threads used.
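For example, a small sketch of controlling device usage through these environment variables (set them before the Executor runs):
import os
# Use only GPUs 0 and 1 when exe is created with CUDAPlace:
os.environ['CUDA_VISIBLE_DEVICES'] = '0,1'
# Or, when running on CPUPlace, use 4 threads:
os.environ['CPU_NUM'] = '4'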