A Neural Network, or more precisely an Artificial Neural Network (ANN), is a computing system that tries to simulate the human brain. It learns from experience: the system progressively improves its performance on a task by considering examples. In image recognition, for instance, it might learn to identify images that contain cats by analyzing example images that have been manually labeled as “cat” or “no cat” and using the results to identify cats in other images. Neural networks have found the most use in applications that are difficult to express as a traditional, rule-based computer algorithm.
Neural networks and deep learning are big topics in computer science and in the technology industry; they currently provide the best solutions to many problems in image recognition, speech recognition, and natural language processing. Recently, many papers have been published featuring AI that can learn to paint, build 3D models, create user interfaces (pix2code), or generate images from a sentence, and many more incredible things are being done every day using neural networks.
A definition of a neural network, more properly referred to as an ‘artificial’ neural network (ANN), is provided by Dr. Robert Hecht-Nielsen, the inventor of one of the first neurocomputers. He defines a neural network as:
“…a computing system made up of a number of simple, highly interconnected processing elements, which process information by their dynamic state response to external inputs.”
You can also think of an artificial neural network as a computational model inspired by the way biological neural networks in the human brain process information.
Biological motivation and connections
The basic computational unit of the brain is the neuron. The human nervous system contains approximately 86 billion neurons, connected by approximately 10¹⁴ to 10¹⁵ synapses. The diagram below shows a cartoon drawing of a biological neuron (left) and a common mathematical model (right).
The basic unit of a neural network is the neuron, often called a node. Each node takes one or more inputs, from other nodes or from an external source. Each input has a weight associated with it; this weight determines the importance of the connection between the two neurons, or how much information flows across it. The node then applies a function, called the activation function, to the weighted sum of its inputs.

The weight connecting two nodes acts as a gate between them, determining the strength of the connection. The idea is really simple: these weights can be learned, and they determine the influence of one node on another.

The final output is determined by the weighted sum of the inputs. The activation function (e.g. sigmoid) has a threshold value, and this threshold value helps in determining the output.
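As a minimal sketch, the computation a single node performs can be written out directly. The input values, weights, and bias below are arbitrary numbers chosen for illustration:

```python
import math

def sigmoid(z):
    # Squashes any real number into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs, then the activation function.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return sigmoid(z)

output = neuron([0.5, 0.3], [0.4, 0.7], bias=-0.1)
print(output)  # ≈ 0.577, i.e. sigmoid(0.5*0.4 + 0.3*0.7 - 0.1)
```

Every node in the hidden and output layers repeats this same weighted-sum-then-activation step on the outputs of the layer before it.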
Neural Network Components
Input Nodes (input layer): This layer receives input from other nodes or from an external source; the values are simply passed on to the next layer. No operation or computation is done here.
Hidden Nodes (hidden layer): Hidden layers are where intermediate processing or computation is done; they perform computations and then pass the weighted signals (information) on from the input layer to the following layer (another hidden layer or the output layer). It is possible to have a neural network without a hidden layer.
Output Nodes (output layer): In this layer the activation function is applied to produce the final output.
Weights: In a neural network, nodes are connected to each other through weights. For example, if the output of node i is provided as an input to node j, a weight Wij is assigned to the connection between nodes i and j.
Activation Function: The activation function determines the final output of a neural network for some given set of inputs.
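To make this concrete, here is a sketch contrasting two common choices: the smooth sigmoid function and a hard threshold (step) function. The threshold value and the sample input are arbitrary choices for illustration:

```python
import math

def sigmoid(z):
    # Smooth, graded output in the range (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def step(z, threshold=0.0):
    # Hard threshold: fires 1 only when z crosses the threshold.
    return 1 if z >= threshold else 0

print(sigmoid(2.0))  # ≈ 0.88
print(step(2.0))     # 1
print(step(-1.0))    # 0
```

The sigmoid tells you *how strongly* a neuron fires, while the step function only tells you *whether* it fires.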
Learning rule: The learning rule is a rule or algorithm that modifies the parameters of the neural network so that a given input to the network produces a favored output. This learning process typically amounts to modifying the weights and thresholds.
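One of the simplest learning rules is the classic perceptron rule, which nudges each weight in proportion to the prediction error. The sketch below, learning the logical AND function on a toy dataset, is just one illustration of the idea, not the only such rule:

```python
# Toy dataset: the logical AND function.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]  # weights, one per input
bias = 0.0
lr = 0.1        # learning rate

for _ in range(20):  # a few passes over the data
    for (x1, x2), target in examples:
        predicted = 1 if w[0] * x1 + w[1] * x2 + bias >= 0 else 0
        error = target - predicted
        # Nudge each weight in the direction that reduces the error.
        w[0] += lr * error * x1
        w[1] += lr * error * x2
        bias += lr * error

outputs = [1 if w[0] * x1 + w[1] * x2 + bias >= 0 else 0
           for (x1, x2), _ in examples]
print(outputs)  # → [0, 0, 0, 1]
```

After a few passes, the weights settle on values for which the network reproduces AND exactly; more complex networks use the same idea with gradient-based rules such as backpropagation.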