Artificial neuron

Welcome to connectionism

A standard biological neuron

Can we produce them en masse? I think Koskenkorva spirits are killing some of mine...

Our current artificial approximation (initially known as the perceptron, now more commonly called the artificial neuron) consists of:

  • A series of inputs (a1, a2, a3 ... an)

  • A series of weights (w1, w2, w3 ... wn)

  • A bias (b). It acts as a kind of predisposition, letting the neuron model a constant value, independent of the inputs, that shifts its output

  • An activation function that is applied to the sum of the bias and the products of each input with its corresponding weight
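The pieces above fit together in a few lines of code. This is a minimal sketch; the notes don't name a specific activation function, so a sigmoid is assumed here:

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs plus bias: z = a1*w1 + a2*w2 + ... + an*wn + b
    z = sum(a * w for a, w in zip(inputs, weights)) + bias
    # Sigmoid activation squashes z into the range (0, 1)
    return 1 / (1 + math.exp(-z))

# Two inputs, two weights, one bias -> one output between 0 and 1
print(neuron([1.0, 0.5], [0.4, -0.2], 0.1))
```

Any squashing function would do for the activation; the sigmoid is just a common, convenient choice for producing outputs between 0 and 1.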

A basic idea on how to train it

  • We have a collection of examples and their expected outputs (supervised learning, for simplicity)

  • We initialize the weights 'w' with random numbers

  • Imagine the 'a' are the individual elements of each example's input data (e.g., the pixels of an image of a cat or a dog)

  • The neuron gives us a value from 0 to 1 (for example, 0 for dog and 1 for cat, with intermediate values read as probabilities)

  • Initially, the neuron won't achieve any interesting results.

  • Whether the neuron gives a right or wrong answer, we calculate the error in the prediction and adjust the weights accordingly (we won't describe that process in detail here)

  • If we feed the neuron many examples, the weights change little by little, until the neuron can finally make good predictions (with some degree of accuracy)

  • The final weights are the truly important elements. They are the result of the training: the values that give more or less importance to each individual input
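The training steps above can be sketched end to end. This toy example learns the logical AND function; the sigmoid activation, the delta-rule update, and the learning rate are assumptions, since the notes deliberately skip the details of the weight-update process:

```python
import math
import random

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

# Toy supervised dataset: the logical AND function (linearly separable)
examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

random.seed(0)
weights = [random.uniform(-1, 1) for _ in range(2)]  # random initial 'w'
bias = 0.0
lr = 0.5  # learning rate (an assumed value, not specified in the notes)

for _ in range(5000):
    for inputs, target in examples:
        output = sigmoid(sum(a * w for a, w in zip(inputs, weights)) + bias)
        error = target - output  # error in the prediction
        # Delta-rule update: nudge each weight in proportion to its input
        for i, a in enumerate(inputs):
            weights[i] += lr * error * a
        bias += lr * error

# After many examples, the weights have drifted toward values that work
for inputs, target in examples:
    pred = sigmoid(sum(a * w for a, w in zip(inputs, weights)) + bias)
    print(inputs, round(pred, 2), "target:", target)
```

Notice that the learned weights and bias are the whole result of training: to use the neuron afterwards, they are all you need to keep.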

The problem is that a single neuron can only solve linearly separable problems, and real problems generally are not.
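The classic illustration of this limit is XOR: no single straight line separates its positive cases from its negative ones, so a lone neuron can never classify all four points correctly. The training code below (same assumed sigmoid and delta-rule as before) makes the failure visible:

```python
import math
import random

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

# XOR is NOT linearly separable: no line splits {(0,1),(1,0)} from {(0,0),(1,1)}
xor = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

random.seed(0)
weights = [random.uniform(-1, 1) for _ in range(2)]
bias = 0.0

for _ in range(5000):
    for inputs, target in xor:
        out = sigmoid(sum(a * w for a, w in zip(inputs, weights)) + bias)
        err = target - out
        for i, a in enumerate(inputs):
            weights[i] += 0.5 * err * a
        bias += 0.5 * err

# Count the points the trained neuron still gets wrong (threshold at 0.5)
wrong = sum(1 for inputs, target in xor
            if (sigmoid(sum(a * w for a, w in zip(inputs, weights)) + bias) >= 0.5)
            != bool(target))
print("misclassified:", wrong)  # at least one point is always wrong
```

No amount of extra training helps: any single linear boundary gets at most 3 of the 4 XOR points right. Solving it requires combining several neurons, which is where the next step, networks of neurons, comes in.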
