In machine learning and cognitive science, artificial neural networks (ANNs) are a family of models inspired by biological neural networks (the central nervous systems of animals, in particular the brain) and are used to estimate or approximate functions that can depend on a large number of inputs and are generally unknown. Artificial neural networks are typically presented as systems of interconnected “neurons” that exchange messages with one another. The connections have numeric weights that can be tuned based on experience, making neural nets adaptive to inputs and capable of learning.

Nerve cells within the brain are known as neurons. There are an estimated 10^10 to 10^13 neurons in the human brain, which is a fairly large number. Each neuron can make contact with several thousand other neurons. Neurons are the units the brain uses to process information.

A neuron consists of a cell body, with various extensions coming from it. Most of these are branches known as dendrites. There is one much longer process called the axon. The dotted line shows the axon hillock, where transmission of signals starts.

The boundary of the neuron is its cell membrane, which is semipermeable. There is a voltage difference (the membrane potential) between the inside and the outside of the membrane.

If the input is large enough, an action potential is generated. The action potential (neuronal spike) then travels down the axon, away from the cell body.

The connections between one neuron and another are known as synapses. Information always leaves a neuron via its axon, and is then transmitted across a synapse to the receiving neuron.

Neurons only fire when the input is larger than some threshold. It should, however, be noted that the firing does not get stronger as the stimulus increases; it is an all or nothing arrangement (see the figure BrainNeuronFiring).

Synapses can be either excitatory or inhibitory.

Spikes (signals) arriving at an excitatory synapse tend to cause the receiving neuron to fire. Spikes (signals) arriving at an inhibitory synapse tend to prevent the receiving neuron from firing.

The cell body and synapses essentially compute (by a complicated chemical/electrical/hormonal process within the brain) the difference between the incoming excitatory and inhibitory inputs (spatial and temporal summation).

When this difference is large enough (compared to the neuron’s threshold), the neuron will fire.

So how about artificial neural networks?

Suppose that we have a firing rate at each neuron. Also suppose that a neuron connects with m other neurons and so receives m inputs x1, ..., xm. We could imagine this configuration looking something like:

What is an Artificial Neuron configuration

This configuration is actually known as a Perceptron. The perceptron (an invention of Rosenblatt [1962]) was one of the earliest neural network models. A perceptron models a neuron by taking a weighted sum of its inputs and sending the output 1 if the sum is greater than some adjustable threshold value (otherwise it sends 0; this is the all or nothing spiking described in the biology, see the neuron firing section above). The function that does this is also known as an activation function.

The inputs (x1, x2, x3 .. xm) and connection weights (w1, w2, w3 .. wm) in Figure 4 are typically real values, both positive (+) and negative (-). If the feature of some xi tends to cause the perceptron to fire, the weight wi will be positive; if the feature xi inhibits the perceptron, the weight wi will be negative.

The perceptron itself consists of the weights, the summation processor, and an activation function, together with an adjustable threshold processor (called the bias).

For convenience, the usual practice is to treat the bias as simply another input. The following diagram illustrates the revised configuration.

The bias can be thought of as the propensity (a tendency towards a particular way of behaving) of the perceptron to fire irrespective of its inputs. The perceptron configuration network shown in Figure 5 fires if the weighted sum > 0, or if you are into math-type explanations:

output = 1 if (x1*w1 + x2*w2 + ... + xm*wm) + b > 0, otherwise 0
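As a very small sketch of this behaviour in Python (names such as perceptron_output are purely illustrative, not part of the original article):

def perceptron_output(inputs, weights, bias):
    # weighted sum of the inputs plus the bias, passed through a step activation
    weighted_sum = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if weighted_sum > 0 else 0

print(perceptron_output([1, 0], [0.5, 0.5], -0.3))   # 0.5 - 0.3 = 0.2 > 0, so it fires: 1
print(perceptron_output([0, 0], [0.5, 0.5], -0.3))   # -0.3, so it does not fire: 0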

Activation Function

The activation function is usually one of the following.

Sigmoid Function

The stronger the input, the faster the neuron fires (the higher the firing rate). The sigmoid is also very useful in multi-layer networks, as the sigmoid curve is differentiable (which is needed in back propagation training of multi-layer networks).

or if you are into math-type explanations:

f(x) = 1 / (1 + e^-x)
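As a rough illustration in Python (the name sigmoid is just for this sketch):

import math

def sigmoid(x):
    # squashes any real-valued input into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

print(sigmoid(-4), sigmoid(0), sigmoid(4))   # roughly 0.018, 0.5, 0.982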

Step Function

A basic on/off type function: if x < 0 then 0, else if x >= 0 then 1.

or if you are into math-type explanations:

f(x) = 1 if x >= 0, otherwise 0
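Or, as a couple of lines of Python (again only a sketch, the name step is illustrative):

def step(x):
    # basic on/off activation: 0 for negative input, 1 otherwise
    return 1 if x >= 0 else 0

print(step(-2.5), step(0), step(2.5))   # 0 1 1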

Learning

A foreword on learning

Before we move on to talk about perceptron learning, let’s consider a real world example:

How do you teach a child to recognise a car? You show him examples, telling him, “This is a car. That’s not a car,” until the child learns the concept of what a car is. In this stage, the child can look at the examples we have shown him and answer correctly when asked, “Is this object a car?”

Furthermore, if we show the child new objects that he hasn’t seen before, we would expect him to recognise correctly whether or not the new object is a car, provided that we have given him enough positive and negative examples.

This is exactly the concept behind the perceptron.

Learning in Perceptrons

Learning is the process of modifying the weights and the bias. A perceptron computes a binary function of its input. Whatever a perceptron can compute, it can learn to compute.

“The perceptron is a program that learns concepts in a binary fashion, i.e. it can learn to respond with True (1) or False (0) for inputs we present to it, by repeatedly “studying” examples presented to it.

The Perceptron is a single layer neural network whose weights and biases can be trained to produce a correct target vector when presented with the corresponding input vector. The training technique used is called the perceptron learning rule. The perceptron generated great interest because of its ability to generalise from its training vectors and to work with randomly distributed connections. Perceptrons are especially suited to simple problems in pattern classification.”

The Learning Rule

The perceptron is trained to respond to each input vector with a corresponding target output of either 0 or 1. The learning rule has been proven to converge on a solution in finite time if a solution exists.

The learning rule can be summarised in the following two equations:

b = b + [ T – A ]

For all inputs i:

W(i) = W(i) + [ T – A ] * P(i)
Where W is the vector of weights, P is the input vector presented to the network, T is the correct result that the neuron should have shown, A is the actual output of the neuron, and b is the bias.
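A small Python sketch of a single application of this rule (the function name update and the use of a step activation are assumptions made for illustration):

def update(weights, bias, inputs, target):
    # one application of the perceptron learning rule shown above:
    #   b    = b    + (T - A)
    #   W(i) = W(i) + (T - A) * P(i)
    actual = 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0   # A
    error = target - actual                                                        # T - A
    new_bias = bias + error
    new_weights = [w + error * x for w, x in zip(weights, inputs)]
    return new_weights, new_bias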

Training

Vectors from the training set are presented to the network one after another.

If the network’s output is correct, no modification is made.

Otherwise, the weights and biases are updated using the perceptron learning rule (as shown above). When an entire pass through all of the input training vectors (called an epoch) is completed without error, training is complete.

At this point any input training vector can be presented to the network and it will respond with the correct output vector. If a vector P that is not in the training set is presented to the network, the network will tend to exhibit generalisation by responding with an output similar to the target vectors for input vectors close to the previously unseen input vector P.
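Putting the rule into a complete training loop gives something like the following rough Python sketch (illustrative names; a step activation and 0/1 targets are assumed):

def train(samples, num_inputs, max_epochs=100):
    # samples is a list of (input_vector, target) pairs, with targets of 0 or 1
    weights = [0.0] * num_inputs
    bias = 0.0
    for _ in range(max_epochs):
        errors = 0
        for inputs, target in samples:
            actual = 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0
            error = target - actual
            if error != 0:
                errors += 1
                bias += error
                weights = [w + error * x for w, x in zip(weights, inputs)]
        if errors == 0:   # an entire epoch without error: training is complete
            break
    return weights, bias

# example: learning the OR function from its truth table
or_samples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
weights, bias = train(or_samples, num_inputs=2)
print(weights, bias)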

What can be achieved with neural networks

Well, if we are going to stick with using a single layer neural network, the tasks that can be achieved are different from those that can be achieved by multi-layer neural networks. As this article is mainly geared towards dealing with single layer networks, let’s discuss those further.

Single layer neural networks

Single-layer neural networks (perceptron networks) are networks in which the output units are independent of one another; each weight affects only one output. Using perceptron networks it is possible to achieve linear separation functions like the ones in the diagrams shown below (assuming we have a network with 2 inputs and 1 output).

It can be seen that this is equivalent to the AND / OR logic gates, shown below.
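For instance, with hand-picked weights and bias values (illustrative choices; many others work), a single perceptron of the kind sketched earlier reproduces the AND and OR gates:

def fire(inputs, weights, bias):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

# AND fires only when both inputs are 1; OR fires when at least one input is 1
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, fire([x1, x2], [1, 1], -1.5), fire([x1, x2], [1, 1], -0.5))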

Classification tasks

So that is a simple example of what we can do with one perceptron (essentially a single neuron), but what if we were to chain several perceptrons together? We could build some quite complex functionality. Essentially we would be constructing the equivalent of an electronic circuit.

Perceptron networks do, however, have limitations. If the vectors are not linearly separable, learning will never reach a point where all vectors are classified correctly. The most famous example of the perceptron’s inability to solve problems with linearly non-separable vectors is the Boolean XOR problem.
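As a rough illustration, running the same update rule on the XOR truth table never produces an error-free epoch, however many epochs we allow (a sketch only, reusing the step activation assumed earlier):

xor_samples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

weights, bias = [0.0, 0.0], 0.0
for _ in range(1000):
    errors = 0
    for inputs, target in xor_samples:
        actual = 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0
        error = target - actual
        if error != 0:
            errors += 1
            bias += error
            weights = [w + error * x for w, x in zip(weights, inputs)]

print(errors)   # never reaches 0: no single perceptron can separate XOR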

 

Multi layer neural networks

With multi-layer neural networks we can solve non-linearly separable problems such as the XOR problem mentioned above, which is not possible using single layer (perceptron) networks. The next part of this article series will show how to do this using multi-layer neural networks, with the back propagation training method.