Free tools to visualize your computational graph

Daniel Angelov
3 min read · Mar 19, 2021

Creating neural networks often involves debugging the structure, the sequence of operations, and the loss functions used in different parts of the model. Which tool gives the best insight? We'll look at popular solutions with 1D, 2D and 3D options in mind!

Print statements give you a 1D representation, Tensorboard a 2D one, and with Efemarai you can explore your computational graph in 3D. Photos by author.

Building networks often involves looking through the resulting computational graph and confirming that the fundamental research idea, sketched on the whiteboard, has been transformed into the correct structure. For this experiment let's choose a relatively complex structure, DCGAN alongside its loss function, which should show how useful the different platforms are for debugging large models. Below is how the generator is represented in the paper itself. Can we do better with access to the code? Here we use the PyTorch DCGAN example code.

Structure of the Generator. Image taken from the DCGAN paper.
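For reference, here is a condensed sketch of that structure in code. It is abridged from the PyTorch DCGAN example; the sizes nz=100, ngf=64 and nc=3 follow that example's defaults and are folded into constructor arguments here for brevity.

import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, nz=100, ngf=64, nc=3):
        super().__init__()
        self.main = nn.Sequential(
            # latent vector z -> (ngf*8) x 4 x 4
            nn.ConvTranspose2d(nz, ngf * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(ngf * 8),
            nn.ReLU(True),
            # -> (ngf*4) x 8 x 8
            nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf * 4),
            nn.ReLU(True),
            # -> (ngf*2) x 16 x 16
            nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf * 2),
            nn.ReLU(True),
            # -> ngf x 32 x 32
            nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf),
            nn.ReLU(True),
            # -> nc x 64 x 64 output image
            nn.ConvTranspose2d(ngf, nc, 4, 2, 1, bias=False),
            nn.Tanh(),
        )

    def forward(self, z):
        return self.main(z)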

Model printing — 1D

One of the quickest ways to obtain information about a model, regardless of the language in which it is written, is by printing the model variable.

print(model)

With the DCGAN, we can print the Discriminator and the Generator independently and obtain results like the following:

Printing the individual Discriminator and Generator models. Photo by author.
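Concretely, assuming the Generator sketched earlier and an analogous Discriminator class as defined in the DCGAN example:

netG = Generator()
netD = Discriminator()  # sketch: assumes the analogous class from the DCGAN example
print(netG)             # nested listing of ConvTranspose2d / BatchNorm2d / ReLU layers
print(netD)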

Pros: Quick, intuitive, gives the whole graph instantaneously.

Cons: No information about inputs, outputs, intermediate shapes and tensors, or representation of the loss function.

Tensorboard — 2D

A more advanced method is to use Tensorboard. It's a platform that allows you to track different values during the training process of the model, as well as to store the model graph. It can be accessed through SummaryWriter.

from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter()               # logs to ./runs by default
writer.add_graph(netGenerator, noise)  # netGenerator and noise come from the DCGAN training script
writer.close()
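TensorBoard itself is then launched from the command line, e.g. with tensorboard --logdir runs (the writer's default log directory). The same writer also handles the value tracking mentioned above; a hedged sketch of logging the two GAN losses inside the training loop (before the writer is closed), assuming errD and errG are the discriminator and generator loss tensors from the example and iters is its iteration counter:

writer.add_scalar("Loss/Discriminator", errD.item(), iters)
writer.add_scalar("Loss/Generator", errG.item(), iters)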

The resulting UI can be accessed through the browser and gives a planar 2D visualization of the model. Each of the operation boxes can be expanded to reveal more underlying tensors or computation.

Tensorboard Generator network. Gif by author.

Pros: Can inspect the whole graph. Information about the computational elements and shapes. Useful mini-map of the model when exploring.

Cons: No information about representation of the loss function or intermediate tensors. UI is clunky.

Efemarai — 3D

The most expressive way to explore the computational graph out of the three options is Efemarai. It's an interactive debugger that lets users dig into the different output and computational elements. It's the only option of the three that shows intermediate tensors such as kernels and feature maps.

import efemarai as ef

for data, target in dataset:
    # Pause execution & visualize the computational graph within
    with ef.scan():
        fake = netGenerator(noise)
        lossD = lossReal + lossFake
        lossG = ...

Efemarai output of the computational graph of the DCGAN, including the loss function. Gif by author.

Pros: Shows full 3D view of all of the computation, including loss function. All feature maps have shapes and can be inspected further for the information they contain.

Cons: Needs a free registration to pip install the package.

Conclusion

In day-to-day debugging, many ML practitioners face situations that require altering the neural network structure. Any such change has a downstream impact on the rest of the network and its output. For relatively simple manipulations, most can get by with printing the model structure. However, as networks grow, involve multiple modalities, or require debugging of the loss function, Efemarai provides an easy way to alleviate this pain.


Daniel Angelov

PhD Machine Learning and Robotics @ University of Edinburgh