Perceptron
A perceptron is a single-layer neural network; a multi-layer perceptron is what we commonly call a neural network. The perceptron is a binary classifier used in supervised learning. It is a simple model of a biological neuron within an artificial neural network.
A binary classifier is a function that decides whether or not an input, represented by a vector of numbers, belongs to a specific class. The perceptron is a linear classifier: a classification algorithm that makes its predictions using a linear predictor function that combines a set of weights with the feature vector.
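As a quick illustration, here is a minimal PyTorch sketch of such a linear predictor function; the weight, bias, and feature values are made up purely for the example. The input is scored by combining the weights with the feature vector and adding the bias, and the sign of the score picks one of the two classes:

```python
import torch

# Minimal sketch of a linear predictor function (illustrative values only).
w = torch.tensor([0.5, 0.3])   # weights
b = torch.tensor(0.1)          # bias
x = torch.tensor([2.0, -1.0])  # feature vector

score = torch.dot(w, x) + b     # linear combination of weights and features
label = int(score.item() >= 0)  # threshold the score into one of two classes
print(score.item(), label)      # prints roughly 0.8 and 1
```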
The perceptron algorithm was designed to categorize subjects into one of two types, classify visual input, and separate groups with a line. Classification is a key part of image processing and machine learning. The perceptron algorithm classifies patterns and groups by finding the linear separation between different objects and patterns received through numeric or visual input.
(Figure: a typical neural network.)
A perceptron consists of the following parts, which you need to understand in order to implement the perceptron model in PyTorch (a PyTorch sketch follows this list):
- Input values or one input layer
The input layer of a perceptron is made of artificial input neurons and brings the initial data into the system for further processing.
- Weights and bias
Weight represents the strength of the connection between units: if the weight from node 1 to node 2 is larger, then neuron 1 has greater influence over neuron 2. How much influence an input has on the output is determined by its weight.
Bias is similar to the intercept added in a linear equation. It is an additional parameter whose task is to shift the output of the neuron alongside the weighted sum of its inputs.
- Activation Function
An activation function determines whether a neuron should be activated or not. It is applied to the weighted sum of the inputs plus the bias to produce the result.
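The following is a minimal PyTorch sketch of a single perceptron showing how those parts map onto code; the layer size and the threshold-style activation are assumptions made for illustration:

```python
import torch
import torch.nn as nn

class Perceptron(nn.Module):
    def __init__(self, num_inputs):
        super().__init__()
        # Weights and bias: nn.Linear stores a weight vector and a bias term.
        self.linear = nn.Linear(num_inputs, 1)

    def forward(self, x):
        # Weighted sum of the input values plus the bias.
        weighted_sum = self.linear(x)
        # Activation function: a simple threshold that outputs 0 or 1.
        return (weighted_sum >= 0).float()

# Input values: a small batch of feature vectors (made-up numbers).
model = Perceptron(num_inputs=3)
x = torch.tensor([[1.0, 2.0, 3.0], [-1.0, 0.5, 0.0]])
print(model(x))  # one 0/1 prediction per row
```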
A neural network is based on the perceptron, so if we want to understand how a neural network works, we should first learn how a perceptron works.
The perceptron works in three simple steps, which are as follows (a code sketch of the steps appears after the list):
a) In the first step, all the inputs x are multiplied by their weights, denoted as K. The output of this step becomes the input for the next step.
b) The next step is to add all the multiplied values, from K1·x1 to Kn·xn. This sum is known as the weighted sum and is treated as the input for the next step.
c) In the final step, the weighted sum calculated in the previous step is passed through the chosen activation function.
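The plain-Python sketch below follows these three steps literally; the weights are named k to match the K notation above, and the unit step activation in step c is an assumption chosen for the example:

```python
def perceptron_output(x, k, bias):
    # Step a: multiply each input x_i by its weight k_i.
    products = [k_i * x_i for k_i, x_i in zip(k, x)]
    # Step b: add all the multiplied values (plus the bias) to get the weighted sum.
    weighted_sum = sum(products) + bias
    # Step c: apply the activation function (a unit step here).
    return 1 if weighted_sum >= 0 else 0

print(perceptron_output(x=[1.0, 0.0, 1.0], k=[0.4, -0.2, 0.6], bias=-0.5))  # -> 1
```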
For example, a unit step activation function, as used in the sketch above, outputs 1 when the weighted sum is at or above a threshold and 0 otherwise.
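In PyTorch, the same behaviour can be sketched with torch.heaviside; the input values below are arbitrary, and passing 1.0 as the second argument is an assumption about how to treat a weighted sum of exactly zero:

```python
import torch

# Unit step (Heaviside) activation applied to a few example weighted sums.
weighted_sums = torch.tensor([-0.4, 0.0, 0.7])
outputs = torch.heaviside(weighted_sums, torch.tensor(1.0))  # 1.0 used at exactly 0
print(outputs)  # tensor([0., 1., 1.])
```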
Note 1: The weight shows the strength of a particular node's connection.
Note 2: A bias value allows you to shift the activation function curve up or down.
Note 3: Activation functions are used to map the weighted sum into a required range, such as (0, 1) or (-1, 1).
Note 4: The perceptron is usually used to classify data into two parts. Therefore, it is also known as a Linear Binary Classifier.
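To illustrate Note 3, the short sketch below (with arbitrary input values) shows two common activation functions that map a weighted sum into those ranges: sigmoid into (0, 1) and tanh into (-1, 1):

```python
import torch

z = torch.tensor([-3.0, 0.0, 3.0])  # example weighted sums
print(torch.sigmoid(z))  # squashed into (0, 1)
print(torch.tanh(z))     # squashed into (-1, 1)
```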