What Are Fully Connected Neural Networks (fcNNs)?

[Figure: a fully connected neural network (fcNN)]

Let's have a look at the picture above before we go into the specifics of the various deep learning designs. Remember the architecture of a standard artificial neural network (ANN) that we described earlier? The NN shown has three hidden layers, each with nine neurons. Each neuron in a layer is connected to every neuron in the next layer. This is why this architecture is referred to as a fully connected neural network, or a densely connected neural network (fcNN).
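To make the architecture concrete, here is a minimal sketch of a forward pass through such a network in NumPy. The three hidden layers of nine neurons match the figure; the input size (4) and single output neuron are arbitrary assumptions for illustration, as are the random weights.

```python
import numpy as np

def sigmoid(z):
    # squeeze any real value into the interval (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Layer sizes: 4 inputs (assumed), three hidden layers of 9 neurons
# as in the figure, and 1 output neuron (assumed).
sizes = [4, 9, 9, 9, 1]

# One weight matrix and bias vector per pair of consecutive layers.
# "Fully connected" means every neuron feeds every neuron in the next layer,
# so each weight matrix is dense.
weights = [rng.normal(size=(m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    a = x
    for W, b in zip(weights, biases):
        a = sigmoid(a @ W + b)  # weighted sum plus bias, then activation
    return a

x = rng.normal(size=4)
y = forward(x)
print(y)  # a single value between 0 and 1
```

Because every layer is dense, the number of parameters grows with the product of consecutive layer sizes (e.g. 9 × 9 weights plus 9 biases between two hidden layers).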

A fully connected neural network (fcNN) model with three hidden layers as an example.

The biology that prompted the creation of artificial neural networks

  • The way the brain functions has inspired the design of NNs.
  • You shouldn't take this point too far; it's only a muse.
  • The human brain is made up of a network of neurons.
  • The human brain has around 100 billion neurons, with each neuron connecting with 10,000 others on average.
Let's look at the neuron, which is the brain's most basic unit.

[Figure: a biological neuron]

A biological brain cell in its purest form. The dendrites of the neuron receive signals from other neurons (shown on the left). If the accumulated signal reaches a particular threshold, the axon sends an impulse to the axon terminals (on the right), which link to other neurons.

A simplified schematic of a neuron is shown above. Through its dendrites, it receives impulses from other neurons. Some inputs have an activating effect, while others have a deactivating effect. Within the cell body of the neuron, the incoming signal accumulates and is processed. The neuron activates when the signal is strong enough; that is, it generates a signal that is sent to the axon terminals. Each axon terminal links to a different neuron. Some connections are stronger than others, making signal transmission to the next neuron easier. The strength of these links may be altered by experiences and learning.

[Figure: an artificial neuron]


The artificial neuron seen above is a mathematical abstraction of a biological brain cell, created by computer scientists. The weighted sum of the p input values, x1 to xp, plus a bias term b that moves the final weighted sum up or down, yields the value z. Using an activation function, the value y is then calculated from z.

An artificial neuron gets a set of numeric input values, xi, which are multiplied by a set of numeric weights, wi. To accumulate the inputs, it computes the weighted sum of the inputs plus a bias term b (which receives 1 as input). This formula is identical to the one used in linear regression. The resulting z value can then be further transformed using a non-linear activation function, such as the sigmoid function, which converts z to a number between 0 and 1 (see below).
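The computation above can be sketched in a few lines of Python. The input values, weights, and bias below are arbitrary illustrative numbers, not values from the text.

```python
import math

def neuron(x, w, b):
    # z: weighted sum of the inputs plus the bias term
    # (the same form as linear regression)
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    # sigmoid activation converts z to a number between 0 and 1
    return 1.0 / (1.0 + math.exp(-z))

# illustrative inputs, weights, and bias (arbitrary values)
x = [0.5, -1.0, 2.0]
w = [0.8, 0.4, -0.3]
b = 0.1
y = neuron(x, w, b)
print(round(y, 3))  # → 0.378
```

Here z = 0.4 − 0.4 − 0.6 + 0.1 = −0.5, and the sigmoid squeezes that to about 0.378.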

[Figure: the sigmoid function]


This function is defined as follows:

f(z) = 1 / (1 + e^(−z))

High positive values of z result in values close to 1, while negative values with large absolute values result in values close to 0, as seen in figure 2.4. The resulting value y can be understood as the likelihood that the neuron will fire. In the context of classification, it may also be seen as a probability for a specific class.
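This squeezing behavior is easy to verify directly, as in the following short sketch:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

print(sigmoid(10))   # close to 1
print(sigmoid(-10))  # close to 0
print(sigmoid(0))    # exactly 0.5, the midpoint
```

Whatever value z takes, the output always lies strictly between 0 and 1, which is what allows it to be read as a probability.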

A single neuron may be used to form a binary classifier (with 0 and 1 as potential classes) that takes many numeric features, xi, and generates the probability for class 1. If you have a background in statistics, this may appear familiar, and a network with a single neuron is also known as logistic regression in statistics. If you've never heard of logistic regression, don't worry.
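The single-neuron binary classifier can be sketched as follows. The weights, bias, 0.5 threshold, and feature values are hypothetical choices for illustration; in practice the weights would be learned from data.

```python
import math

def predict_proba(x, w, b):
    # A single artificial neuron with a sigmoid activation is
    # logistic regression: it outputs the probability of class 1.
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

def predict(x, w, b, threshold=0.5):
    # turn the probability into a class label (0 or 1)
    return 1 if predict_proba(x, w, b) >= threshold else 0

# hypothetical learned weights and bias for two features
w, b = [1.5, -2.0], 0.25

print(predict([2.0, 0.5], w, b))   # large positive z -> class 1
print(predict([-1.0, 1.0], w, b))  # large negative z -> class 0
```

The only difference from plain logistic regression is vocabulary: statistics calls the weights "coefficients" and the bias an "intercept".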


[Figure: sigmoid function] The sigmoid function f converts an arbitrary value z to a number between 0 and 1 (squeezing).
