Activation functions - sagr4019/ResearchProject GitHub Wiki

Activation functions

Networks mostly receive linear transformations as input (the input multiplied by a weight). Real-world problems, however, are non-linear. To turn these linear values into non-linear ones, an activation function is applied. Activation functions decide whether a feature is present; the sigmoid function, for example, returns values between 0 (feature absent) and 1 (feature present).
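As a small sketch of this idea (the inputs, weights, and the choice of the sigmoid here are illustrative assumptions, not values from the text):

```python
import math

# Hypothetical single neuron: a linear transformation (inputs times
# weights) followed by a non-linear activation function.
inputs = [0.5, -1.0, 2.0]
weights = [0.4, 0.3, 0.1]

z = sum(x * w for x, w in zip(inputs, weights))  # linear part
a = 1.0 / (1.0 + math.exp(-z))                   # sigmoid: maps z into (0, 1)

print(z, a)  # a lies strictly between 0 and 1
```

However large or small the weighted sum `z` becomes, the activation `a` stays in the open interval (0, 1).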

The sigmoid function, also known as the logistic function, is an activation function with an S-shaped graph. It constrains a neuron's output signal to a specific range of values; its purpose is to limit the output of the neuron to a bounded range around a threshold.
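A minimal implementation of this behaviour might look as follows (the function name is my own; the formula is the standard logistic function 1 / (1 + e^(-z))):

```python
import math

def sigmoid(z):
    """Logistic function with an S-shaped graph; output lies in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

print(sigmoid(0.0))    # 0.5, the centre of the S-curve
print(sigmoid(10.0))   # approaches 1: feature treated as present
print(sigmoid(-10.0))  # approaches 0: feature treated as absent
```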

The figure below shows several activation functions. The tanh function, for example, returns values between -1 and 1.
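The difference in output ranges can be checked numerically; this is a small sketch comparing the two functions mentioned above (the sample inputs are arbitrary):

```python
import math

# tanh squashes values into (-1, 1), while the sigmoid squashes into (0, 1).
for z in (-5.0, 0.0, 5.0):
    s = 1.0 / (1.0 + math.exp(-z))  # sigmoid
    t = math.tanh(z)
    assert 0.0 < s < 1.0
    assert -1.0 < t < 1.0

print(math.tanh(0.0))  # 0.0: tanh is centred at zero, unlike the sigmoid
```

Because tanh is centred at zero, its output can explicitly signal "negative" evidence, whereas the sigmoid's midpoint lies at 0.5.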