A neural network is built from a collection of basic elements called artificial neurons, or perceptrons. A perceptron takes several inputs x1, x2, …, xn and produces a binary output: 1 if the weighted sum of the inputs exceeds the activation potential (a threshold), and 0 otherwise.
A schematic representation of a sample neuron is shown below −
The output is computed as the weighted sum of the inputs plus the activation potential, or bias −
$$\text{Output}=\sum_j w_j x_j + \text{Bias}$$
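To make the formula concrete, the sketch below computes this output for a single neuron; the input, weight, and bias values are arbitrary and chosen only for illustration −

```python
import torch

# Example inputs x1..x3 and weights w1..w3 (values are arbitrary)
x = torch.tensor([0.5, 1.0, -0.2])
w = torch.tensor([0.8, -0.4, 0.3])
bias = 0.1

# Output = sum_j w_j * x_j + Bias
weighted_sum = torch.dot(w, x) + bias

# The perceptron fires (outputs 1) only if the sum exceeds the
# activation potential, taken here as 0
output = 1 if weighted_sum.item() > 0 else 0
print(weighted_sum.item(), output)
```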
A typical neural network architecture is described below −
The layers between input and output are referred to as hidden layers, and the density and type of connections between layers define the network's configuration. For example, in a fully connected configuration every neuron of layer L is connected to every neuron of layer L+1. For more pronounced localization, we can connect only a local neighbourhood, say nine neurons, to the next layer. Figure 1-9 illustrates two hidden layers with dense connections.
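A minimal sketch of such a densely connected configuration, assuming an input of 16 features, two hidden layers of nine neurons each, and a single output (all sizes are illustrative) −

```python
import torch
from torch import nn

# Two densely connected hidden layers: every neuron in layer L
# feeds every neuron in layer L+1
model = nn.Sequential(
    nn.Linear(16, 9),   # input layer -> hidden layer 1
    nn.ReLU(),
    nn.Linear(9, 9),    # hidden layer 1 -> hidden layer 2
    nn.ReLU(),
    nn.Linear(9, 1),    # hidden layer 2 -> output layer
)

x = torch.randn(1, 16)   # one sample with 16 input features
print(model(x).shape)    # torch.Size([1, 1])
```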
The various types of neural networks are as follows −
Feedforward neural networks are the most basic members of the neural network family. Data in this type of network moves from the input layer to the output layer through any hidden layers present. The output of one layer serves as the input of the next, and loops of any kind are not allowed in the network architecture.
Recurrent neural networks are used when the data pattern changes over time. In an RNN, the same layer is applied repeatedly to accept input and produce output at each time step, which lets the network carry information from one step to the next.
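As a sketch, the snippet below applies PyTorch's nn.RNN to a toy sequence; the same layer weights are reused at every time step. The batch size, sequence length, and feature dimensions are arbitrary −

```python
import torch
from torch import nn

# One recurrent layer, reused at every time step of the sequence
rnn = nn.RNN(input_size=4, hidden_size=8, batch_first=True)

x = torch.randn(1, 10, 4)   # batch of 1, sequence length 10, 4 features per step
output, h_n = rnn(x)        # output: hidden state at every step; h_n: final state

print(output.shape)         # torch.Size([1, 10, 8])
print(h_n.shape)            # torch.Size([1, 1, 8])
```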
Neural networks can be constructed using the torch.nn package.
A simple feed-forward network takes the input, feeds it through several layers one after the other, and finally produces the output.
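A minimal sketch of such a network built with torch.nn, assuming a 10-feature input, two hidden layers, and two output values (all sizes and the class name Net are chosen for illustration) −

```python
import torch
from torch import nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(10, 32)  # input -> hidden layer 1
        self.fc2 = nn.Linear(32, 16)  # hidden layer 1 -> hidden layer 2
        self.fc3 = nn.Linear(16, 2)   # hidden layer 2 -> output

    def forward(self, x):
        # Data flows through the layers one after the other, with no loops
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        return self.fc3(x)

net = Net()
print(net(torch.randn(1, 10)))
```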
With PyTorch, a typical training procedure for a neural network follows these steps −

- Define the neural network with some learnable parameters (weights).
- Iterate over a dataset of inputs.
- Process each input through the network.
- Compute the loss, i.e., how far the output is from being correct.
- Propagate gradients back into the network's parameters.
- Update the weights of the network, typically using a simple update rule: weight = weight - learning_rate * gradient
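The sketch below walks through these steps for one iteration on random data, reusing the Net class from the earlier sketch; the mean-squared-error loss and the manual weight update are illustrative choices −

```python
import torch

net = Net()                       # the network sketched above (illustrative)
criterion = torch.nn.MSELoss()
learning_rate = 0.01

inputs = torch.randn(4, 10)       # toy dataset: 4 samples, 10 features each
targets = torch.randn(4, 2)       # toy targets matching the 2 output values

# Process the input through the network
outputs = net(inputs)

# Compute the loss
loss = criterion(outputs, targets)

# Propagate gradients back into the network's parameters
net.zero_grad()
loss.backward()

# Update the weights: weight = weight - learning_rate * gradient
with torch.no_grad():
    for param in net.parameters():
        param -= learning_rate * param.grad
```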