Visual explanation of tensor operations

Daniel Angelov
4 min read · Apr 5, 2021

When you start your machine learning journey, you are hit early on by new terminology. So what are tensors, and what can we do with them?

If we begin our search on Wikipedia, we’ll encounter something like:

In mathematics, a tensor is an algebraic object that describes a relationship between sets of algebraic objects related to a vector space.

In other places we’ll read that they are synonymous with matrices or tables. That is closer, but a matrix is just a special case: a 2-dimensional tensor. Similarly, the vectors you remember from your maths and physics classes are 1D tensors.

So, from a computer science perspective, you can think of a tensor as a container for N-dimensional data. Tensors can be dense or sparse, depending on whether all of their cells are populated, and you can perform standard mathematical operations between them.
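As a quick illustration of the dense/sparse distinction, here is a minimal sketch (the sparse half assumes SciPy is available; it is not used elsewhere in this article):

import numpy as np
from scipy.sparse import csr_matrix

# Dense: every cell is stored explicitly, even the zeros
dense = np.zeros((1000, 1000))
dense[0, 0] = 1.0

# Sparse: only the non-zero cells are stored
sparse = csr_matrix(dense)
print(sparse.nnz)  # 1 -- one stored value instead of a million cells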

Defining a tensor

Let’s start by defining a tensor holding the numbers 0 to 15. We can do so in NumPy using just a few lines of code:

import numpy as np
a = np.arange(16)
print(a)

We get the terminal output [ 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15]. If we want to convert it into a matrix or a 3D tensor, we can do so by:

a_2d = a.reshape(4, 4)
a_3d = a.reshape(2, 2, 4)

The terminal outputs we receive by printing the values are:

a_2d = [[ 0  1  2  3]
        [ 4  5  6  7]
        [ 8  9 10 11]
        [12 13 14 15]]
a_3d = [[[ 0  1  2  3]
         [ 4  5  6  7]]

        [[ 8  9 10 11]
         [12 13 14 15]]]

Suddenly this becomes a bit harder to picture, so let’s visualize and inspect the following operations using Efemarai. It works with both NumPy and PyTorch tensors and lets you see all the values and rotate around the resulting object.

import efemarai as ef

ef.inspect(a_2d)
ef.inspect(a_3d)
Inspecting a_2d and a_3d using Efemarai. Photo by author.

We can see straight away that these are 2D and 3D tensors, and we can confirm it either by looking at their shape in the top right, or by printing the a_2d.shape property.
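The same check in code:

print(a_2d.shape)  # (4, 4)
print(a_3d.shape)  # (2, 2, 4)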

So let’s create another random 2D tensor and perform some operations between them.

b = np.random.randn(4, 4)
The random 2D tensor, whose elements are sampled from a standard normal distribution. You can see the distribution of values on the histogram on the top right. Photo by author.

Addition

The addition operation between tensors is defined as element-wise addition, so it requires the two tensors to be of the same shape.

addition = a_2d + b
ef.inspect(addition)
Addition of two tensors. Photo by author.
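If the shapes are incompatible, NumPy refuses the operation. A quick sketch, using a hypothetical mismatched tensor c:

c = np.random.randn(3, 3)
try:
    a_2d + c  # shapes (4, 4) and (3, 3) are incompatible
except ValueError as err:
    print(err)  # operands could not be broadcast together ...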

Multiplication

There are multiple ways to multiply tensors. They are better described here, but broadly they can be summarized below. Note that the order of the tensors can matter: the element-wise product is commutative, but the dot product a @ b is generally different from b @ a.
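We can verify this directly with the tensors defined above:

print(np.allclose(a_2d * b, b * a_2d))  # True -- element-wise product commutes
print(np.allclose(a_2d @ b, b @ a_2d))  # False, in general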

Hadamard product

This is an element-wise product between tensors. In NumPy it is expressed with the * operator.

melementwise = a_2d * b
ef.inspect(melementwise, name="Elementwise")
Hadamard product. Photo by author.

Dot

To perform the dot product, we can use the np.dot function or the a @ b shorthand.

mdot = np.dot(a_2d, b)
ef.inspect(mdot)
Dot product. Photo by author.
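For 2D tensors, np.dot is ordinary matrix multiplication, so the inner dimensions must match: an (m, n) tensor multiplied by an (n, p) tensor yields an (m, p) result. A quick check that the shorthand agrees:

print(np.allclose(mdot, a_2d @ b))  # True
print(mdot.shape)  # (4, 4)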

Cross

The cross product of the vectors X and Y is a vector perpendicular to both. Let’s say X and Y define a plane. To find a vector perpendicular to that plane, we can calculate:

X = np.array([1, 4, 3])
Y = np.array([2, 3, 2])

mcross = np.cross(X, Y)
# output [-1 4 -5]
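We can confirm the result is perpendicular to both inputs by checking that its dot product with each of them is zero:

print(np.dot(mcross, X))  # 0
print(np.dot(mcross, Y))  # 0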

ReLU operation

In machine learning, it is common to use the ReLU (Rectified Linear Unit) as a non-linearity when processing input data. Neural networks can be abstracted as a sequence of tensor operations interwoven with non-linearities.

The ReLU operator has the mathematical form y = max(0, x), meaning it suppresses all negative values in the tensor.
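On a small 1D tensor the effect is easy to see; np.maximum applies the maximum element-wise against the scalar 0:

v = np.array([-2.0, -0.5, 0.0, 1.5])
print(np.maximum(0, v))  # [0.  0.  0.  1.5]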

Let’s create a bigger 25x25x25 tensor and apply ReLU to it.

X = np.random.randn(25, 25, 25)
ef.inspect(X, name="Large tensor")
ReLU_X = np.maximum(0, X)
ef.inspect(ReLU_X, name="ReLU")
Before and after the application of the ReLU operation on the tensor. Photo by Author.

As you can see from the photo above, applying the operation removed all negative values and substituted them with 0s. This can also be seen if we look at the distribution of values within the tensor. On the left it follows a standard normal with values ranging between [-3.89, 4.05], whereas on the right there is a large peak at 0 with values in [0, 4.05].
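We can confirm the ranges numerically; the exact numbers will vary from run to run, since X is random:

print(X.min(), X.max())            # e.g. -3.89 4.05
print(ReLU_X.min(), ReLU_X.max())  # 0.0 and the same maximum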

Conclusion

From the examples above we saw that tensors are a useful structure for storing information, with a variety of operations that can be applied to them. Whenever you perform an operation, inspect the data before and after to confirm your expectations, whether by printing, by plotting the moments of the tensor, or by using a tool like Efemarai. Pay special attention to the order of the operations and the dimensions of the tensors involved.


Daniel Angelov

PhD Machine Learning and Robotics @ University of Edinburgh