Python Deep Learning - Fundamentals



In this chapter, we will look into the fundamentals of Python Deep Learning.

Deep learning models/algorithms

Let us now learn about the different deep learning models/algorithms.

Some of the popular models within deep learning are as follows −

  • Convolutional neural networks
  • Recurrent neural networks
  • Deep belief networks
  • Generative adversarial networks
  • Auto-encoders and so on

The inputs and outputs of a neural network are represented as vectors or tensors. For example, the input to a network may be a vector holding the RGB values of every individual pixel in an image.
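As a minimal sketch of this idea, the following NumPy snippet flattens a hypothetical 28 x 28 RGB image into a single input vector (the image size and the random pixel values are illustrative assumptions, not part of any particular dataset) −

   import numpy as np

   # A hypothetical 28 x 28 RGB image: height x width x 3 colour channels
   image = np.random.randint(0, 256, size=(28, 28, 3), dtype=np.uint8)

   # Flatten the pixel values into one input vector and scale them to [0, 1]
   input_vector = image.reshape(-1).astype(np.float32) / 255.0

   print(input_vector.shape)   # (2352,) -> 28 * 28 * 3 values fed to the input layer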

The layers of neurons that lie between the input layer and the output layer are called hidden layers. This is where most of the work happens when the neural net tries to solve problems. Taking a closer look at the hidden layers can reveal a lot about the features the network has learned to extract from the data.

Different neural network architectures are formed by choosing which neurons in one layer are connected to which neurons in the next layer.
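For instance, a fully connected architecture connects every neuron in one layer to every neuron in the next. The sketch below uses the Keras Sequential API, assuming TensorFlow/Keras is installed; the layer sizes, the 2352-element input (the flattened image vector from above), and the 10-class output are arbitrary choices for illustration −

   from tensorflow.keras.models import Sequential
   from tensorflow.keras.layers import Dense

   # Two fully connected hidden layers between the input and the output layer
   model = Sequential([
       Dense(64, activation='relu', input_shape=(2352,)),   # hidden layer 1
       Dense(32, activation='relu'),                        # hidden layer 2
       Dense(10, activation='softmax')                      # output layer, e.g. 10 classes
   ])

   model.summary()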

Pseudocode for calculating output

Following is the pseudocode for calculating the output of a forward-propagating neural network −

   # node[] := array of topologically sorted nodes
   # An edge from a to b means a is to the left of b
   # If the neural network has R inputs and S outputs,
   # then the first R nodes are input nodes and the last S nodes are output nodes
   # incoming[x] := nodes connected to node x
   # weights[x] := weights of the incoming edges to x

For each neuron x, from left to right −

   if x <= R: continue                                 # it is an input node; its output is already given
   inputs[x] = [output[i] for i in incoming[x]]        # gather outputs of the connected nodes
   weighted_sum = dot_product(weights[x], inputs[x])
   output[x] = activation_function(weighted_sum)
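The pseudocode above can be turned into a short runnable Python function. The sketch below uses NumPy; the tiny 5-node network, its weights, and the tanh activation are illustrative assumptions rather than part of the original pseudocode −

   import numpy as np

   def forward_propagate(num_nodes, R, incoming, weights, input_values, activation=np.tanh):
       # output[x] holds the computed output of node x; nodes 0 .. R-1 are input nodes
       output = np.zeros(num_nodes)
       output[:R] = input_values

       # Visit the remaining nodes in topological (left-to-right) order
       for x in range(R, num_nodes):
           node_inputs = np.array([output[i] for i in incoming[x]])   # outputs of connected nodes
           weighted_sum = np.dot(weights[x], node_inputs)
           output[x] = activation(weighted_sum)

       return output

   # Tiny example network: 2 input nodes (0, 1), 2 hidden nodes (2, 3), 1 output node (4)
   incoming = {2: [0, 1], 3: [0, 1], 4: [2, 3]}
   weights = {
       2: np.array([0.5, -0.2]),
       3: np.array([0.1, 0.4]),
       4: np.array([0.3, 0.7]),
   }

   result = forward_propagate(5, 2, incoming, weights, np.array([1.0, 0.5]))
   print(result[-1])   # output of the last (output) node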