An Introduction to TensorFlow Programming in Python


Deep learning is now widely used in the development of intelligent systems and has become a powerful tool for Big Data analysis. TensorFlow is the leading open source software library for deep learning and is used for natural language processing (NLP), computer vision, speech recognition, fault diagnosis, predictive maintenance, mineral exploration and much more.

This article will acquaint readers with the basic environment of TensorFlow, its computational library and its dataflow graphs, which will help them with advanced applications. Since TensorFlow's computation model is graph based, it is essential to understand this model at the outset. Here we shall learn how graph based computing in TensorFlow is performed within a Python Anaconda environment. TensorFlow's building blocks are constants, placeholders and variables, which together form the graph based machine learning environment in which computing operations interact with one another.

The higher level TensorFlow APIs assist in building prototype models quickly, but knowledge of the lower level TensorFlow core is valuable for experimentation and for debugging code. It gives an inner view of how the code operates, which in turn helps us understand code written with the higher level APIs.

Graphs and sessions
TensorFlow uses a dataflow graph to represent all computations in terms of the dependencies between individual operations. At the outset, programming requires a dataflow graph that defines all operations, after which a TensorFlow session is created to run parts of the graph across a set of local and remote devices. High level APIs such as tf.estimator.Estimator and Keras hide the details of graphs and sessions from the end user, but low level programming is useful for understanding how the graph model works within a session.
In a dataflow graph, the nodes represent units of computation and the edges represent the data consumed or produced by a computation. For example, in a matrix multiplication operation, tf.matmul is a node in a TensorFlow graph, and the multiplicand, the multiplier and the result of the multiplication are its three edges. This dataflow environment is helpful for distributed and parallel computing, and it also speeds up compilation of the graph to code. Since the dataflow graph is a language-independent representation of the entire process environment, it is also portable: a graph saved in one programming environment can be transported to and restored in another.
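As a minimal sketch of this idea (the matrix values below are arbitrary), the node behind a tf.matmul result and its incoming and outgoing edges can be inspected directly:

import tensorflow as tf

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])   # edge: multiplicand
b = tf.constant([[5.0, 6.0], [7.0, 8.0]])   # edge: multiplier
c = tf.matmul(a, b)                          # node: the MatMul operation

print(c.op.name)            # the node, e.g. 'MatMul'
print(list(c.op.inputs))    # the two incoming edges (a and b)
print(list(c.op.outputs))   # the outgoing edge (the result c)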
A tf.Graph object contains two kinds of information: the graph structure and graph collections. Using nodes and edges, the graph structure represents the composition of the individual operations, but it does not prescribe how they should be used. Graph collections, on the other hand, provide a general mechanism for storing metadata within the graph: objects can be added under a key and retrieved from it later.
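As an illustration (the collection key 'my_ops' below is an arbitrary name chosen for this sketch), tf.add_to_collection stores an object under a key and tf.get_collection retrieves everything stored under it:

import tensorflow as tf

c = tf.constant(42.0)
tf.add_to_collection("my_ops", c)    # store the tensor under an arbitrary key

print(tf.get_collection("my_ops"))   # [<tf.Tensor 'Const:0' shape=() dtype=float32>]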

Creating a graph
In general, a TensorFlow program starts with a graph building phase. During this phase, the API adds new nodes and edges to a default graph instance. For example, tf.constant(x) creates a single operation that produces x, adds it to the default graph and returns a tensor representing the constant value. In the case of a variable, tf.Variable(0) adds an operation that stores a writeable tensor value, and this value survives between session runs (tf.Session.run). A matrix multiplication operation tf.matmul(a, b) creates and adds to the default graph an operation to multiply the matrices a and b, and returns a tf.Tensor representing the multiplication.
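Putting these pieces together, a rough sketch of the graph building phase might look like this (the shapes and values are illustrative only):

import tensorflow as tf

x = tf.constant([[1.0, 2.0]])      # adds a Const node producing x
v = tf.Variable([[3.0], [4.0]])    # adds a node holding a writeable tensor
y = tf.matmul(x, v)                # adds a MatMul node; returns a tf.Tensor

# Nothing has been computed yet; only the graph has been built
print(tf.get_default_graph().get_operations())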
In most cases the default graph is sufficient to run a program. Where multiple graphs are needed, the tf.estimator API manages them on the user's behalf, using different graphs for training and evaluation.
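Outside of tf.estimator, separate graphs can also be managed by hand. A minimal sketch:

import tensorflow as tf

g1 = tf.Graph()
g2 = tf.Graph()

with g1.as_default():
    a = tf.constant(1)   # this node lives in g1

with g2.as_default():
    b = tf.constant(2)   # this node lives in g2

print(a.graph is g1, b.graph is g2)   # True True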

Session
The TensorFlow class tf.InteractiveSession is used in interactive contexts, such as a shell. It is convenient in interactive shells and IPython notebooks because it is not necessary to pass an explicit session object to run an operation. The following example shows how a constant tensor c can be evaluated without an explicit call to session.run. The close() method terminates an open interactive session.

import tensorflow as tf

sess = tf.InteractiveSession()
a = tf.constant(5.0)
b = tf.constant(6.0)
c = a * b
# We can just use 'c.eval()' without passing 'sess'
print(c.eval())
sess.close()

In a non-interactive program, a regular session is activated with a with statement and the tensor object is evaluated as follows:

a = tf.constant(5.0)
b = tf.constant(6.0)
c = a * b
with tf.Session():
    # We can also use 'c.eval()' here.
    print(c.eval())

The session method as_default returns a context manager that makes the session object the default session. When it is used with the with keyword, tf.Operation.run and tf.Tensor.eval calls are executed in this session.

c = tf.constant(599)
sess = tf.Session()
with sess.as_default():
    # assert tf.get_default_session() is sess
    print(c.eval())

Since the as_default context manager does not close the session upon exit, the session must be closed explicitly after leaving the context. This can be avoided by invoking tf.Session() directly in the with statement, as that class closes the session upon exiting the context.
The default session is a property of the current thread. To use a session as the default in a new thread, with sess.as_default() must be entered explicitly in that thread. In the case of multiple graphs, if sess.graph differs from the default graph (tf.get_default_graph()), it is also necessary to enter sess.graph.as_default() for that graph.
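A minimal sketch of this thread behaviour (the worker function below is a hypothetical example):

import threading
import tensorflow as tf

c = tf.constant(42)
sess = tf.Session()

def worker():
    # The default session does not carry over to a new thread;
    # it must be entered explicitly here.
    with sess.as_default():
        print(c.eval())

t = threading.Thread(target=worker)
t.start()
t.join()
sess.close()   # as_default() does not close the session for us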

list_devices
Session method list_devices() lists available devices in an activated session.

# 'sess' is an open tf.Session, as created above
devices = sess.list_devices()
for d in devices:
    print(d.name)

This will display the full name of each device; the device object also exposes the device type and the maximum amount of memory available. For example, a typical line of output from the above script may be:
/job:localhost/replica:0/task:0/device:CPU:0

Slice
This operation is performed with tf.slice(), which extracts a slice from a tensor object starting at a specified location. The slice size is represented as a tensor shape, where size[i] is the number of elements of the ith dimension of the input to be sliced. The starting location begin is an offset in each dimension of the input: it is zero-based, whereas the size is one-based. A value of -1 for size[i] indicates that all remaining elements in dimension i are included in the slice, and can be written as:

size[i] = input.dim_size(i) - begin[i]

For example:

import tensorflow as tf

t = tf.constant([[[1, 1, 1], [2, 2, 2]],
                 [[3, 3, 3], [4, 4, 4]],
                 [[5, 5, 5], [6, 6, 6]]])

with tf.Session() as sess:
    print("Slice 1:", sess.run(tf.slice(t, [1, 0, 0], [1, 1, 3])), "\n")
    print("Slice 2:", sess.run(tf.slice(t, [1, 0, 0], [1, 2, 3])), "\n")
    print("Slice 3:", sess.run(tf.slice(t, [1, 0, 0], [2, 1, 3])), "\n")
Output:
Slice 1: [[[3 3 3]]]
Slice 2: [[[3 3 3]
  [4 4 4]]]
Slice 3: [[[3 3 3]]
 [[5 5 5]]]
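The size value -1 described earlier can be demonstrated on the same tensor t. The following sketch takes everything that remains in every dimension from the starting offset:

# size -1 means 'all remaining elements in this dimension'
with tf.Session() as sess:
    print(sess.run(tf.slice(t, [1, 0, 0], [-1, -1, -1])))

This prints the last two 2x3 blocks of t, that is, [[[3 3 3] [4 4 4]] [[5 5 5] [6 6 6]]].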

Placeholder
A placeholder is simply a variable to which data will be assigned at a later stage. It allows us to create operations and build the computation graph without needing the data in advance. In TensorFlow terminology, data is then fed into the graph through these placeholders.

import tensorflow as tf

x = tf.placeholder("float", None)
y = x + 2.5
with tf.Session() as session:
    result = session.run(y, feed_dict={x: [1, 2, 3]})
    print(result)

Output:

[3.5 4.5 5.5]

The first line creates a placeholder called x. This reserves a place to store values at a later stage; it only defines the structure, and no initial value is assigned to the location. The next line defines a tensor operation that adds 2.5 to x. At a later stage this storage space and the defined operation can be used to perform a vector addition: a session is started and the operation is executed within it for a set of values of x.
Running y requires values for x, which are supplied through the feed_dict argument of session.run. Here the values of x are [1, 2, 3], and running y in the session produces [3.5 4.5 5.5].
A larger graph consisting of numerous computations can be divided into small segments of atomic operations, and each of these operations can be performed individually, as the sketch below shows. Such partial execution of a larger graph is an exclusive feature of TensorFlow and is not available in many other libraries that do similar jobs. During operation, placeholders do not require any static shape and can hold values of any length.
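A small sketch of such partial execution: the graph below contains two independent results, and each session.run call evaluates only the sub-graph that its requested tensor depends on.

import tensorflow as tf

x = tf.placeholder("float", None)
doubled = x * 2.0
shifted = x + 2.5

with tf.Session() as session:
    # Only the nodes needed for 'doubled' are executed here
    print(session.run(doubled, feed_dict={x: [1, 2, 3]}))  # [2. 4. 6.]
    # And only those needed for 'shifted' are executed here
    print(session.run(shifted, feed_dict={x: [1, 2, 3]}))  # [3.5 4.5 5.5]

Returning to the shape flexibility point, the earlier addition example can be rerun with a vector of any length: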

import tensorflow as tf

x = tf.placeholder("float", None)
y = x + 2.5
with tf.Session() as session:
    result = session.run(y, feed_dict={x: [1, 2, 3, 4]})
    print(result)

This script will run for four values of vector x and will produce: [3.5 4.5 5.5 6.5].
To perform the same kind of operation, this time adding 5 to an array of three columns with no upper bound on the number of rows, one can modify the above script as follows:

x = tf.placeholder("float", [None, 3])
y = x + 5

with tf.Session() as session:
    x_data = [[2, 3, 1],
              [4, 7, 6]]
    result = session.run(y, feed_dict={x: x_data})
    print(result)

Output:

[[ 7.  8.  6.]
 [ 9. 12. 11.]]

The placeholder can be extended to an arbitrary number of None dimensions. For example, to load an RGB image, a placeholder needs a last dimension of size three to store the three colour planes, while the first two dimensions (height and width) can each be None. TensorFlow's slice method can then be used to take a sub-segment out of the image.

import os

import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import tensorflow as tf

# First, load the image
dir_path = os.path.dirname(os.path.realpath(__file__))
filename = os.path.join(dir_path, "MarshOrchid.jpg")

print(dir_path)

raw_image_data = mpimg.imread(filename)
print(raw_image_data.shape)
plt.imshow(raw_image_data)
plt.show()

image = tf.placeholder("int32", [None, None, 3])
# Take the top 300 rows, all columns, and the first colour channel
image_slice = tf.slice(image, [0, 0, 0], [300, -1, 1])

with tf.Session() as session:
    result = session.run(image_slice, feed_dict={image: raw_image_data})
    print(result.shape)

# Drop the trailing single-channel axis so imshow accepts the array
plt.imshow(result[:, :, 0], cmap="gray")
plt.show()

Variables
In TensorFlow, variables are used to manage data. A TensorFlow variable is the best way to represent shared, persistent tensor state manipulated by a program. Specific operations are required to read and modify the value of this tensor. Since a tf.Variable exists outside the context of a single session.run call, modified values are visible across multiple tf.Session instances, so multiple workers can see the same values. These variables are accessed via tf.Variable class objects.
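A minimal sketch of this persistence: the counter below keeps its value between session.run calls and is modified only through an explicit assign operation.

import tensorflow as tf

counter = tf.Variable(0)
increment = tf.assign_add(counter, 1)   # an operation that modifies the state

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(increment))   # 1
    print(sess.run(increment))   # 2 -- the value survived the previous run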
Many parts of the TensorFlow library use this facility. For example, when a variable is created, it is added by default to the collections representing global variables and trainable variables. When a tf.train.Saver or tf.train.Optimizer is later created, the variables in these collections are used as its default arguments.
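For example (a sketch; the variable name is arbitrary), the default collections can be inspected directly, and a tf.train.Saver created with no arguments picks up the saved variables from these collections:

import tensorflow as tf

w = tf.Variable([1.0], name="w")   # added to both default collections

print(tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES))
print(tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES))

# With no var_list argument, the saver defaults to the collected variables
saver = tf.train.Saver()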

Variable initialisers
Global tensor variables can be initialised using the following TensorFlow methods, the second being an alias of the first:

tf.global_variables_initializer()
tf.initializers.global_variables()

There is an alternative shortcut, tf.variables_initializer(var_list, name='init'), to initialise an explicit list of variables. Note that if no variables have been defined before tf.global_variables_initializer is called, the variable list it covers is essentially empty.
The code below illustrates this:

import tensorflow as tf

with tf.Graph().as_default():
    # Nothing is printed: no variables have been defined yet
    for v in tf.global_variables():
        print("List Global Variable 1", v, "\n")

    # Created before any variables exist, this op covers an empty list
    init_op = tf.global_variables_initializer()

    a = tf.Variable(0)
    b = tf.Variable(0)
    c = tf.Variable(0)

    # Recreated after the variables exist, so it now covers all three
    init_op = tf.global_variables_initializer()

    # 3 variables are printed here
    for v in tf.global_variables():
        print("List Global Variable 2", v, "\n")

    with tf.Session() as sess:
        sess.run(init_op)
        print("List session", sess.run(a), "\n")

Execution of this script prints nothing from the first loop, whereas the second for-loop iterates three times over the global variable list and displays the three variables, one per line.

Note that the order of these statements matters. If the initialiser is created first:

init_op = tf.global_variables_initializer()

then variables defined after it, for example:

a = tf.Variable(0)
b = tf.Variable(0)
c = tf.Variable(0)
d = tf.Variable(0)

are not covered by init_op. The initialiser op must be created after all the variables have been defined so that it initialises every global variable for further processing of the graph.
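The tf.variables_initializer shortcut mentioned earlier initialises only a chosen subset of variables. A minimal sketch, assuming the variables a and b defined as above:

# Initialise only 'a'; 'b' is left uninitialised by this op
init_a = tf.variables_initializer([a], name="init")

with tf.Session() as sess:
    sess.run(init_a)
    print(sess.run(a))   # 0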

A simple mathematical operation
Here is an example of a simple TensorFlow mathematical operation using variables a and b:

import tensorflow as tf

a = tf.Variable([4])
b = tf.Variable([7])
c = a + b

with tf.Session() as session:
    session.run(tf.global_variables_initializer())
    print(session.run(a))   # [4]
    print(session.run(b))   # [7]
    print(session.run(c))   # [11]

Conclusion
TensorFlow is a programming environment built around graph based computing, and this article has been an introductory tutorial to it. The building blocks of this computing environment, graphs, sessions, constants, placeholders and variables, have been discussed with simple examples. The reader will hopefully have gained a working idea of TensorFlow programming from this brief introduction.
