
Multilayer perceptron

Multilayer perceptron definition

Multilayer perceptron (MLP) refers to a feedforward artificial neural network that consists of at least three layers of nodes: an input layer, one or more hidden layers, and an output layer. Each node, or neuron, in one layer connects to every neuron in the subsequent layer through a link with an associated weight.

These weighted connections transmit and transform information as it flows through the network. Each hidden neuron applies a nonlinear activation function to the weighted sum of its inputs, which is what lets an MLP model relationships that a single linear layer cannot. MLPs are trained using the backpropagation algorithm combined with optimization techniques such as gradient descent.
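To make the forward pass, backpropagation, and gradient descent steps concrete, here is a minimal sketch of an MLP with one hidden layer, assuming NumPy is available; the toy dataset, layer sizes, learning rate, and epoch count are illustrative assumptions, not part of the definition.

```python
# Minimal MLP sketch: one hidden layer, manual backpropagation,
# plain gradient descent. All hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D regression data: learn y = x^2 on [-1, 1].
X = np.linspace(-1, 1, 32).reshape(-1, 1)
y = X ** 2

# Layers: input(1) -> hidden(8, tanh) -> output(1, linear).
W1 = rng.normal(0.0, 0.5, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)

lr = 0.1
for epoch in range(2000):
    # Forward pass: each layer is an affine map; the hidden layer
    # adds a nonlinear activation (tanh here).
    h = np.tanh(X @ W1 + b1)       # hidden activations
    y_hat = h @ W2 + b2            # linear output layer
    loss = np.mean((y_hat - y) ** 2)

    # Backpropagation: apply the chain rule from the output back
    # to the input layer to get the gradient of the loss.
    grad_out = 2 * (y_hat - y) / len(X)   # dLoss / dy_hat
    grad_W2 = h.T @ grad_out
    grad_b2 = grad_out.sum(axis=0)
    grad_h = grad_out @ W2.T
    grad_pre = grad_h * (1 - h ** 2)      # tanh'(z) = 1 - tanh(z)^2
    grad_W1 = X.T @ grad_pre
    grad_b1 = grad_pre.sum(axis=0)

    # Gradient descent: step every weight and bias downhill.
    W1 -= lr * grad_W1; b1 -= lr * grad_b1
    W2 -= lr * grad_W2; b2 -= lr * grad_b2

print(f"final mean squared error: {loss:.5f}")
```

The same pattern scales to more hidden layers: each extra layer adds one affine map and activation to the forward pass, and one more application of the chain rule to the backward pass.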

See also: artificial intelligence

History of multilayer perceptron

The idea of the perceptron grew out of the neural network theories developed in the 1940s and 1950s. Frank Rosenblatt introduced the perceptron in 1958 as an algorithm for supervised learning of binary classifiers. However, it soon became clear that a single-layer perceptron could not solve some simple problems, such as the XOR problem (illustrated in the sketch below). This criticism, made prominent by Minsky and Papert's 1969 book Perceptrons, resulted in a steep decline in interest and funding for the theory.
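As a concrete illustration of the limitation: no single linear decision boundary separates XOR's outputs, but a network with one hidden layer represents it easily. The sketch below assumes NumPy, and the weights are hand-chosen for clarity rather than learned: one hidden unit computes OR, the other NAND, and the output unit ANDs them together.

```python
# XOR with a hand-wired two-layer perceptron (weights chosen by hand,
# not trained). A single-layer perceptron cannot represent this function.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
xor_labels = np.array([0, 1, 1, 0])

def step(z):
    """Threshold activation used by the classic perceptron."""
    return (z >= 0).astype(int)

# Hidden layer: column 1 computes OR, column 2 computes NAND.
h = step(X @ np.array([[1, -1], [1, -1]]) + np.array([-0.5, 1.5]))
# Output layer: AND of the two hidden units yields XOR.
out = step(h @ np.array([1, 1]) - 1.5)

print(out)                         # [0 1 1 0]
print((out == xor_labels).all())   # True
```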

In 1986, Rumelhart, Hinton, and Williams popularized the backpropagation algorithm for training multilayer perceptrons, showing that networks with hidden layers could learn useful internal representations. While this helped the multilayer perceptron regain popularity, it had to compete with other machine learning techniques, such as support vector machines.

Today, multilayer perceptrons are typically used for data that can be flattened into a one-dimensional vector, such as the tabular data behind tasks like predicting house prices or customer churn.
