Let’s take a sneak peek inside the black box of deep learning and build some intuition along the way.

Deep Learning Series

In this series, with each blog post we dive deeper into the world of deep learning and build intuition and understanding along the way. The series is inspired by, and references, several sources, and is written with the hope that my passion for Data Science is contagious. The design and flow of this series is explained here.

Inside the Black Box

We covered the origins of neural networks and a little bit about their structure in the previous articles. However, before we dive further into the math behind how neural networks work, we need to polish our understanding of what is going on inside the black box.

Deep learning algorithms are mostly a black box. We do not know which patterns trigger a given activation. We can make a guess: for example, when a network classifies a “Dog” in the Cats vs Dogs dataset, it probably saw the ears or the shape of the dog’s face. But this uncertainty is unacceptable when these algorithms are used in self-driving cars. In such use cases we need to know why the algorithm works the way it does.

Feature Visualization

In a neural network, a given neuron will not fire for every image. That is, a neuron is activated only by select features present in the input images.

Matthew D. Zeiler and Rob Fergus shed some light on feature visualization in a research paper that is available here.

They put together a novel method to decode these features. First, they trained a normal CNN (convolutional neural network, a type of neural network) to classify images. Alongside it, they also trained a backward-looking network, called a deconvnet.


To examine a given convnet activation, we set all other activations in the layer to zero and pass the feature maps as input to the attached deconvnet layer. Then we successively (i) unpool, (ii) rectify and (iii) filter to reconstruct the activity in the layer beneath that gave rise to the chosen activation. This is then repeated until input pixel space is reached.

Let’s break it down.

Suppose we pass image A into our CNN, it flows through layer K, and only neuron M ends up being activated. The backward-looking network (the deconvnet) is then used to reconstruct the activity in the previous step: starting from the output of layer K, we set all activations to zero except M, and revert whatever activity happened in the layer beneath, repeating until we reach input pixel space.
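As a rough sketch of what one such step might look like in code, assuming we have saved the feature maps (`fmaps`), the max-pooling indices (`pool_idx`), and layer K’s own filters (`weights`) from the forward pass (all hypothetical names, with a pooling kernel size of 2 assumed):

```python
import torch
import torch.nn.functional as F

def deconv_step(fmaps, pool_idx, weights, neuron_channel):
    """One unpool -> rectify -> filter step of the deconvnet."""
    # Keep only the activation we want to trace; zero out the rest.
    isolated = torch.zeros_like(fmaps)
    isolated[:, neuron_channel] = fmaps[:, neuron_channel]

    # (i) Unpool: place values back at the locations recorded by
    # the forward max-pool (the "switch" variables).
    unpooled = F.max_unpool2d(isolated, pool_idx, kernel_size=2, stride=2)

    # (ii) Rectify: keep the reconstruction non-negative.
    rectified = F.relu(unpooled)

    # (iii) Filter: apply the transposed version of layer K's own
    # filters to map the activity down to the layer beneath.
    return F.conv_transpose2d(rectified, weights)
```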

Thus, our goal here is to understand which feature activates a neuron. Let’s say we are training our network on the Cats vs Dogs dataset. We now focus on a single neuron, ignoring all the others, with the purpose of understanding what activated it. Maybe the dog’s ears are the defining feature for this neuron, and that is what it looks for in every image. Using this methodology we can explore how things work layer by layer. In the images below you will notice the actual images and what the neural network is observing:

[Figure: Layer 1 visualizations, from Visualizing and Understanding Convolutional Networks (Zeiler and Fergus, ECCV 2014)]
[Figure: Layer 2 visualizations, from Visualizing and Understanding Convolutional Networks (Zeiler and Fergus, ECCV 2014)]
[Figure: Layer 3 visualizations, from Visualizing and Understanding Convolutional Networks (Zeiler and Fergus, ECCV 2014)]

Explanation:

In layer 1 the CNN identifies color gradients, and as we look at deeper layers, more complex patterns appear: the features progress from gradients to edges and shapes to complex structures like eyes.
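In practice, one way to find what excites a given neuron is to scan a dataset and record the neuron’s response to every image. A minimal sketch, assuming a trained `model`, a layer of interest `layer_k`, a `channel` index, and a data `loader` (all hypothetical names):

```python
import torch

activation = {}

def hook(module, inputs, output):
    # One scalar per image: the mean response of the chosen channel.
    activation["value"] = output[:, channel].mean(dim=(1, 2))

handle = layer_k.register_forward_hook(hook)

scores, images = [], []
with torch.no_grad():
    for batch, _ in loader:
        model(batch)
        scores.append(activation["value"])
        images.append(batch)
handle.remove()

# Rank all images by how strongly they excited the chosen neuron.
scores = torch.cat(scores)
top9 = scores.topk(9).indices  # indices of the top-activating images
```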

Supplying several images to the network to see what activates the selected neuron becomes computationally intensive, however. There is another way: what if we supply an image made of random pixels and try to find out what would excite one particular neuron? We take an image similar to the one shared below and run it through the network while focusing on that single neuron, then adjust the color of each pixel so as to increase that neuron’s activation. More information is available here in the paper by Jason Yosinski.

[Figure: a starting image of random pixels]
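This procedure is often called activation maximization: gradient ascent on the input pixels. A minimal sketch in the same spirit, reusing the assumed `model`, `layer_k`, and `channel` from above:

```python
import torch

# Start from random pixels and let gradients flow into the image.
img = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([img], lr=0.05)

captured = {}
handle = layer_k.register_forward_hook(
    lambda module, inputs, output: captured.update(value=output)
)

for step in range(200):
    optimizer.zero_grad()
    model(img)
    # Maximize the chosen channel's mean activation by
    # minimizing its negative.
    loss = -captured["value"][:, channel].mean()
    loss.backward()
    optimizer.step()  # only the pixels are updated
handle.remove()
```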

So what do these optimized images look like?

[Figure: images produced by maximizing the activations of individual neurons]

Hope this gives you a sneak peek into how neural networks work, especially with image data. If you wish to explore this further, have a look at this amazing blog post at Distill.

What about Structured Data?

Let’s say you are working on predicting the future sales of a retail store. A neuron might get activated by, or give higher weights to, certain inputs: for example, the variables item category and season of sale. For simplicity, think of these as the weights in a linear regression. Why would these two variables cause the needle to shift?

Well, maybe there are some seasonal products in the dataset which activate that particular neuron. Similarly, we can develop some intuition about which variables are influencing our neurons, for example by inspecting the first-layer weights directly, as sketched below.
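A minimal sketch of that idea, with a made-up feature list and a small network purely for illustration:

```python
import torch

# Hypothetical tabular features for the retail-sales example.
features = ["item_category", "season_of_sale", "price", "store_size"]

net = torch.nn.Sequential(
    torch.nn.Linear(len(features), 8),
    torch.nn.ReLU(),
    torch.nn.Linear(8, 1),
)
# ... train `net` on the sales data here ...

# Inspect which inputs the first hidden neuron weights most heavily.
neuron = 0
weights = net[0].weight[neuron].detach()
for name, w in sorted(zip(features, weights.tolist()),
                      key=lambda pair: -abs(pair[1])):
    print(f"{name:16s} {w:+.3f}")
```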

What’s Next?

Mechanics of Deep Learning:

Let’s dive into the core of neural networks and understand the concepts of Gradient Descent and Back-propagation to get some idea of how neural networks learn. Warning: some math involved! Don’t worry, we will first explain it in an intuitive manner and then explore the math behind it.
