# Neural Networks History
Adapted from *A 'Brief' History of Neural Nets and Deep Learning* by Andrey Kurenkov
- In 1943, Warren S. McCulloch, a neuroscientist, and Walter Pitts, a logician, developed the first conceptual model of an artificial neural network. In their paper, "A logical calculus of the ideas immanent in nervous activity," they describe the concept of a neuron, a single cell living in a network of cells that receives inputs, processes those inputs, and generates an output.
- In 1949, Donald Hebb proposed what became known as Hebb's Rule in The Organization of Behavior: A Neuropsychological Theory: "When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased." (A code sketch of this rule appears after this list.)
- Invented in 1957 by Frank Rosenblatt at the Cornell Aeronautical Laboratory (original paper), a perceptron is the simplest neural network possible: a computational model of a single neuron. A perceptron consists of one or more inputs, a processor, and a single output. (See the perceptron sketch after this list.)
- In 1969, in their book Perceptrons, Marvin Minsky and Seymour Papert demonstrate that a single perceptron can only solve "linearly separable" problems. AI Winter #1! (See the XOR sketch after this list.)
- Paul Werbos's 1974 thesis Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences proposes "backpropagation" as a method for adjusting the weights in the hidden layers of a neural network. The technique was popularized in the 1986 paper Learning representations by back-propagating errors by David Rumelhart, Geoffrey Hinton, and Ronald Williams. (A sketch of the idea appears below.)
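Below are a few minimal TypeScript sketches of the ideas above, for illustration only; all names, values, and hyperparameters are choices made for these sketches, not anything taken from the original papers.

Hebb's rule is often formalized as "neurons that fire together wire together": the weight between an input and an output grows in proportion to how often they are active at the same time, i.e. Δwᵢ = η·xᵢ·y. A minimal sketch of that common formalization (the learning rate `eta` is an assumed parameter):

```ts
// Hebbian update: if input x_i and the output y are active together,
// strengthen the connection between them (delta w_i = eta * x_i * y).
function hebbianUpdate(weights: number[], inputs: number[], output: number, eta = 0.1): number[] {
  return weights.map((w, i) => w + eta * inputs[i] * output);
}

// Example: input 0 and the output fire together, so only w[0] grows.
let w = [0, 0];
w = hebbianUpdate(w, [1, 0], 1); // -> [0.1, 0]
```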
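A sketch of Rosenblatt's perceptron: inputs are multiplied by weights and summed (the "processor"), and a step function turns that sum into a single 0/1 output. The class name, the 0/1 step activation, and the training data are choices for this sketch:

```ts
// A perceptron: weighted sum of inputs plus a bias, passed through
// a step function to produce a single 0/1 output.
class Perceptron {
  constructor(public weights: number[], public bias: number) {}

  // The "processor": sum of w_i * x_i plus the bias, thresholded at 0.
  predict(inputs: number[]): number {
    const sum = inputs.reduce((acc, x, i) => acc + x * this.weights[i], this.bias);
    return sum > 0 ? 1 : 0;
  }

  // Perceptron learning rule: nudge each weight by the prediction error.
  train(inputs: number[], target: number, eta = 1): void {
    const error = target - this.predict(inputs);
    this.weights = this.weights.map((w, i) => w + eta * error * inputs[i]);
    this.bias += eta * error;
  }
}

// Example: a perceptron can learn logical AND, a linearly separable problem.
const p = new Perceptron([0, 0], 0);
const data: [number[], number][] = [[[0, 0], 0], [[0, 1], 0], [[1, 0], 0], [[1, 1], 1]];
for (let epoch = 0; epoch < 20; epoch++) {
  for (const [x, y] of data) p.train(x, y);
}
console.log(data.map(([x]) => p.predict(x))); // -> [0, 0, 0, 1]
```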
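To see what "linearly separable" means, XOR is the classic counterexample: no single line through the input space (i.e. no single perceptron) can split its 1s from its 0s. Composing threshold units into a hidden layer fixes this, which is exactly what a lone perceptron lacks. A sketch with hand-set weights (all values chosen for this example):

```ts
// A single threshold unit: step(w . x + b).
const unit = (w: [number, number], b: number) =>
  (x1: number, x2: number): number => (w[0] * x1 + w[1] * x2 + b > 0 ? 1 : 0);

// Each of these gates IS linearly separable, so one unit suffices.
const or   = unit([1, 1], -0.5);
const nand = unit([-1, -1], 1.5);
const and  = unit([1, 1], -1.5);

// XOR = AND(OR(x), NAND(x)): it needs a hidden layer of units.
const xor = (x1: number, x2: number) => and(or(x1, x2), nand(x1, x2));

console.log(xor(0, 0), xor(0, 1), xor(1, 0), xor(1, 1)); // -> 0 1 1 0
```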
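Finally, a minimal sketch of the backpropagation idea: run a forward pass, measure the error at the output, then use the chain rule to push that error backwards so the hidden-layer weights get adjusted too. The 2-2-1 architecture, sigmoid activations, initial weights, and learning rate here are illustrative assumptions, not anything from the papers cited above:

```ts
const sigmoid = (z: number) => 1 / (1 + Math.exp(-z));

// Parameters of a tiny 2-2-1 network: two hidden units, one output unit.
let wH = [[0.5, -0.4], [0.9, 0.6]]; // hidden weights, one row per hidden unit
let bH = [0.1, -0.2];               // hidden biases
let wO = [0.3, -0.8];               // output weights
let bO = 0.05;                      // output bias
const eta = 0.5;                    // learning rate (illustrative)

function forward(x: number[]): { h: number[]; y: number } {
  const h = wH.map((w, j) => sigmoid(w[0] * x[0] + w[1] * x[1] + bH[j]));
  const y = sigmoid(wO[0] * h[0] + wO[1] * h[1] + bO);
  return { h, y };
}

function trainStep(x: number[], target: number): void {
  const { h, y } = forward(x);

  // Backward pass: the chain rule carries the output error back to every weight.
  const deltaO = (y - target) * y * (1 - y);                       // error signal at the output
  const deltaH = h.map((hj, j) => deltaO * wO[j] * hj * (1 - hj)); // error at each hidden unit

  // Gradient-descent updates: hidden weights get credit assigned via deltaH.
  wO = wO.map((w, j) => w - eta * deltaO * h[j]);
  bO -= eta * deltaO;
  wH = wH.map((w, j) => w.map((wji, i) => wji - eta * deltaH[j] * x[i]));
  bH = bH.map((b, j) => b - eta * deltaH[j]);
}

const xorData: [number[], number][] = [[[0, 0], 0], [[0, 1], 1], [[1, 0], 1], [[1, 1], 0]];
for (let epoch = 0; epoch < 10000; epoch++) {
  for (const [x, t] of xorData) trainStep(x, t);
}
// Outputs should approach 0, 1, 1, 0 (convergence depends on the starting weights).
for (const [x] of xorData) console.log(x, forward(x).y.toFixed(2));
```

Note that this trains on XOR, the very problem a single perceptron cannot solve: backpropagation is what makes the hidden layer trainable rather than hand-set.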