Breaking the ice with TensorFlow

Vitality Learning
May 18, 2019

Have you decided to use TensorFlow for Machine Learning applications, but you can’t make heads or tails of it? Do you need the basic principles and a few elementary examples to start tinkering? This post is for you!

This post discusses simple examples to break the ice with TensorFlow. But first of all, some basic principles.

What is TensorFlow?

TensorFlow is an open-source software library for Machine Learning developed by Google. Its source code was released in 2015 under the Apache 2.0 open-source license. Today, TensorFlow is used in many scientific and industrial settings, mainly for Deep Learning applications.

TensorFlow can be installed on the main 64-bit PC operating systems (Windows, Linux, macOS) and on Android. TensorFlow APIs can be called from Python, C, Java, Go and Rust, but Python is recommended because it is the simplest to use.

The name of the library combines tensor and flow: in linear algebra, a tensor can be represented by a multidimensional array, while flow stands for the flow of operations. TensorFlow is, in other words, a flow of operations on tensors.

TensorFlow can be executed on different hardware platforms, from traditional CPUs to GPUs. Moreover, in 2016, Google designed an application-specific integrated circuit (ASIC) tailored to Artificial Intelligence (AI) applications using TensorFlow. This ASIC is called the Tensor Processing Unit (TPU).

A virtual machine is also available to every Google account through the Google Colaboratory platform. On that virtual machine, TensorFlow examples can be run on CPUs, GPUs or TPUs.

To understand how TensorFlow works, a short premise on lazy and eager evaluation is necessary.

Lazy evaluation

Lazy evaluation is a programming technique that consists in postponing a computation until its result is actually needed. It can improve performance because it avoids useless intermediate calculations.

Eager evaluation

The opposite of lazy evaluation is eager evaluation. It computes values as early as possible, even if they are not needed at that moment. This is the typical behavior of most programming languages. An advantage of eager evaluation is that it lets the programmer control the execution order, which simplifies debugging. However, it may perform calculations that are never needed and forces the programmer to organize the code carefully to avoid them.
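The difference can be illustrated in plain Python (a generic sketch, unrelated to TensorFlow’s API): list comprehensions are evaluated eagerly, while generator expressions are evaluated lazily.

```python
def square(x):
    # Print a trace so we can see when the computation actually happens
    print("computing", x)
    return x * x

# Eager: all the squares are computed immediately
eager_squares = [square(x) for x in range(3)]   # prints "computing 0", "computing 1", "computing 2"

# Lazy: nothing is computed yet
lazy_squares = (square(x) for x in range(3))    # prints nothing

# The computation happens only when a value is actually requested
first = next(lazy_squares)                      # prints "computing 0" only
```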

How does TensorFlow work?

TensorFlow uses lazy evaluation by default and exploits graphs to represent the flow of operations. Like any graph, a TensorFlow graph is composed of nodes and directed edges. Each node represents an operation. A directed edge represents the input or output data of a node.

TensorFlow manages data as n-dimensional arrays representing tensors. A tensor has three properties: rank, shape and type.

  • rank: the number of dimensions of a tensor (for example, a tensor indexed by three indices has rank 3).
  • shape: the size of each dimension of a tensor (for example, a tensor can have a 3x2x5 shape).
  • type: the data type of the elements of a tensor.
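As a quick sketch of these three properties (assuming TensorFlow 1.x, the version current at the time of writing; the name t is just an illustrative choice):

```python
import tensorflow as tf

# A tensor of zeros with shape 3x2x5, hence rank 3
t = tf.zeros([3, 2, 5], dtype=tf.float32)

print(t.shape.as_list())  # [3, 2, 5]          -> shape
print(len(t.shape))       # 3                  -> rank
print(t.dtype)            # <dtype: 'float32'> -> type
```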

Once the programmer has defined the operations, TensorFlow builds the corresponding operations graph. Nevertheless, the graph is an abstract entity: once defined, no computation actually takes place. The graph must be executed for actual computations to occur. This is done by using TensorFlow sessions. Recapitulating, TensorFlow works in three steps:

  1. Internally construct the graph corresponding to the operations flow defined by the programmer;
  2. Create the session;
  3. Execute the operations in the graph.

Let us clarify how TensorFlow works with a simple, practical example.

Let us consider one of the simplest examples in TensorFlow, namely, the sum of two constants:
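A minimal sketch of this example, assuming TensorFlow 1.x (the names a, b and s are illustrative; the node name printed for s depends on how many add nodes the graph already contains):

```python
import tensorflow as tf

# Two scalar constants
a = tf.constant(2.0, dtype=tf.float32)
b = tf.constant(3.0, dtype=tf.float32)

# This only adds an "add" node to the graph; nothing is computed yet
s = tf.add(a, b)

print(s)  # a symbolic tensor, not the value 5.0
```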

Since, as mentioned, TensorFlow separates the definition and execution phases by the lazy evaluation technique, the only things we can know about the tensor s are its type (float32) and its shape (scalar). Indeed, all that we obtain from print(s) is

Tensor("add_1:0", shape=(), dtype=float32)

To obtain the actual result of the calculations, we need to execute the operations from within a session:

Eager evaluation in TensorFlow

TensorFlow also enables eager evaluation on demand. Eager evaluation allows easier debugging during execution, since it is possible to use the Python debugger to inspect objects such as variables. Eager execution is enabled by calling enable_eager_execution() at the start of the program. A short example follows:

Having clarified the working principles of TensorFlow, let us present a few further notes.

Using placeholders

In the first simple example shown above, we have seen how to define and use TensorFlow constants. Constants are conceived to keep the same value at each graph execution. If we want to define the input values at runtime, we need to use placeholders.

When defining a placeholder, we do not set its value, since the latter is defined during the computations and not when the graph is built. In contrast, the type of the placeholder must be specified (int16 in our case, see below).

Once the placeholder is defined, the session is created and the computations are run by specifying the values of the placeholders through a dictionary. The following example illustrates the use of placeholders.

Interoperability of TensorFlow and NumPy

Many Python users are accustomed to working with NumPy and have much legacy code exploiting that library. The question then arises of whether TensorFlow can be integrated into code that already uses NumPy. The answer is that TensorFlow and NumPy are fully interoperable, as shown in the following code. As can be seen, it is possible to perform mixed operations between NumPy and TensorFlow objects.

After this post, we know the fundamental TensorFlow bricks from which more complicated Machine Learning applications can be built.


Vitality Learning

We have been teaching, researching and consulting on parallel programming on Graphics Processing Units (GPUs) since the release of CUDA. We also play with Matlab and Python.