Every superhero has an origin story! We will take a look at the overall motivation and the inspiration behind (Deep Learning) neural networks to understand what they are.
Deep Learning Series
In this series, each blog post dives deeper into the world of deep learning, building intuition and understanding along the way. The series has been inspired by and references several sources. It is written with the hope that my passion for Data Science is contagious. The design and flow of this series is explained here.
The Origins
Have you ever wondered how our brain works? How is it able to detect and recognize objects so easily?
We will start with this analogy and slowly connect the dots to deep learning.
It all began in 1943, when Walter Pitts and Warren McCulloch created a computational model based on the neural networks of the human brain. This first model marked the birth of neural network algorithms.
Over time, there have been many developments, but the purpose of this blog is not a history lesson. What matters is their intention: they were trying to develop a model based on how the neural networks of the human brain work. That is great, but how does our brain work?
Neurons are the driving force behind every memory and action.
The dendrites are the receivers of the signal and the axon is the transmitter. Imagine several neurons chained together: the signal received by the dendrites of one neuron travels down its axon and on to the dendrites of the next neuron. This is how these signals/electrical impulses are transferred from one neuron to another.
If we put on the data science lens, consider the following:
For example, the inputs are the senses such as smell, taste, and so on, and the brain makes sense of all these inputs. Similarly, neural networks have an input layer, and after some transformations we get the output signal/prediction. The neuron is where all the action happens: that is where a mathematical transformation takes place. We will discuss these transformations in more detail in later posts.
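To make this concrete, here is a minimal sketch of a single artificial neuron in Python. It is not from the original post: the weighted-sum transformation is one common choice, and all the numbers are illustrative assumptions.

```python
# A minimal sketch of a single artificial neuron: it receives
# inputs (the "senses"), transforms them, and emits an output signal.

def neuron(inputs, weights, bias):
    # One common transformation: a weighted sum plus a bias.
    # Later posts in the series discuss transformations in more detail.
    return sum(x * w for x, w in zip(inputs, weights)) + bias

# Illustrative values only.
signal = neuron(inputs=[0.5, 0.8, 0.2], weights=[0.4, 0.3, 0.9], bias=0.1)
print(signal)  # ~0.72
```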
Connecting Neurons
Since there are so many neurons in our brain, does each and every one fire for every input? We know that different parts of our brain respond to different types of sensory input. That is where the activation function in neural networks comes in: it is a way to decide whether a neuron fires or not. For explanation purposes, in the image above we have an input layer followed by a single neuron. In reality, there are many neurons and many layers involved. A neuron receives either the input data or the resulting values of other neurons (perceptrons).
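As a hedged illustration of "deciding whether a neuron fires," here are two standard activation functions; the step threshold and the sigmoid are common textbook choices, not specifics of this series.

```python
import math

def step(z, threshold=0.0):
    # Fires (returns 1) only if the incoming signal crosses the threshold.
    return 1 if z >= threshold else 0

def sigmoid(z):
    # A smooth alternative: squashes any signal into the range (0, 1).
    return 1 / (1 + math.exp(-z))

print(step(0.72))     # 1  -> the neuron "fires"
print(sigmoid(0.72))  # ~0.67 -> a graded firing strength
```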
Linear regression bears some resemblance to the initial state of neural networks. Frank Rosenblatt’s Perceptron was the first idea conceived specifically as a method to make machines learn.
It took binary inputs and multiplied them by continuous-valued weights. The sum of the weighted values (plus a bias) is then mapped to 0 or 1 depending on whether it crosses a threshold. This function of weighted sum plus bias is very similar to linear regression. In the image shared below, the xi's are the input values and the wi's are the respective weights.
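Here is a minimal sketch of that rule in Python, assuming binary inputs, continuous-valued weights, and a 0/1 threshold on the weighted sum plus bias. The particular weights and bias are illustrative assumptions, chosen so the unit computes a logical AND.

```python
def perceptron(inputs, weights, bias):
    # Weighted sum of the binary inputs, plus the bias ...
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # ... thresholded to 0 or 1.
    return 1 if z >= 0 else 0

# Illustrative weights/bias that make this unit behave like logical AND.
weights, bias = [1.0, 1.0], -1.5
for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", perceptron([a, b], weights, bias))
# Only the (1, 1) input crosses the threshold and outputs 1.
```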
We will discuss this in more detail in the next part of this series, the architecture of neural networks.
The key takeaways here:
- The inspiration behind Neural Networks
- How neurons work
- A first idea of the structure of Neural Networks