What is the use of allow_soft_placement and log_device_placement in ConfigProto in Tensorflow?

allow_soft_placement

This option enables flexible ("soft") device placement: if the device you request for an operation does not exist, or the operation has no kernel for that device, TensorFlow is allowed to place it on another supported device (typically falling back from GPU to CPU) instead of failing. On a GPU-enabled build, operations with GPU kernels are placed on the GPU by default, so this setting only matters when a requested placement cannot be satisfied. If you set it to False and pin an operation to a GPU that is not found on your machine, an error is thrown.
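
For example, a minimal sketch (TensorFlow 1.x Session API; the GPU index 2 is an arbitrary choice for illustration): pin an operation to a GPU that may not exist and let allow_soft_placement decide whether that is an error or a fallback to the CPU.

# Pin an op to a GPU index most machines do not have.
import tensorflow as tf

with tf.device('/device:GPU:2'):
    a = tf.constant([1.0, 2.0, 3.0], name='a')
    b = tf.constant([4.0, 5.0, 6.0], name='b')
    c = a + b

# With allow_soft_placement=True, TensorFlow falls back to an available device
# (e.g. the CPU) instead of raising an error for the missing GPU.
sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True))
print(sess.run(c))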

log_device_placement

This option logs which device each operation is assigned to when the graph is built and run. TensorFlow itself picks what it considers the best available device for each op (the GPU when one is present); log_device_placement only reports those placements, it does not change them.

SUMMARY:

allow_soft_placement lets TensorFlow fall back to another available device when the requested placement cannot be satisfied,

log_device_placement prints out device information
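
A rough sketch of how the two options are typically combined in one ConfigProto (TensorFlow 1.x Session API); a full log_device_placement example follows later in this answer.

# Enable both options on the same session config.
import tensorflow as tf

config = tf.ConfigProto(
    allow_soft_placement=True,   # fall back to an available device if needed
    log_device_placement=True)   # log where each op actually runs
sess = tf.Session(config=config)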

 

Using GPUs

Supported devices

On a typical system, there are multiple computing devices. In TensorFlow, the supported device types are CPU and GPU. They are represented as strings. For example:

  • "/cpu:0": The CPU of your machine.
  • "/device:GPU:0": The GPU of your machine, if you have one.
  • "/device:GPU:1": The second GPU of your machine, etc.

If a TensorFlow operation has both CPU and GPU implementations, the GPU devices will be given priority when the operation is assigned to a device. For example, matmul has both CPU and GPU kernels. On a system with devices cpu:0 and gpu:0, gpu:0 will be selected to run matmul.
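
If you want to override that default priority, you can pin operations to a device explicitly with tf.device. A small sketch (same matmul example as below, with the constants forced onto the CPU):

# Place the constants on the CPU explicitly; MatMul is created outside the
# tf.device block, so it is still free to run on the GPU if one exists.
import tensorflow as tf

with tf.device('/cpu:0'):
    a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
    b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
c = tf.matmul(a, b)

sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
print(sess.run(c))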

Logging Device placement

To find out which devices your operations and tensors are assigned to, create the session with the log_device_placement configuration option set to True.

import tensorflow as tf

# Creates a graph.
a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
c = tf.matmul(a, b)
# Creates a session with log_device_placement set to True.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
# Runs the op.
print(sess.run(c))

You should see the following output:

Device mapping:
/job:localhost/replica:0/task:0/device:GPU:0 -> device: 0, name: Tesla K40c, pci bus
id: 0000:05:00.0
b: /job:localhost/replica:0/task:0/device:GPU:0
a: /job:localhost/replica:0/task:0/device:GPU:0
MatMul: /job:localhost/replica:0/task:0/device:GPU:0
[[ 22.  28.]
 [ 49.  64.]]