Schmidhuber 2015
ZotWeb: article-journal
Src Url: Schmidhuber (2015)
In recent years, deep artificial neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarizes relevant work, much of it from the previous millennium. Shallow and Deep Learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects. I review deep supervised learning (also recapitulating the history of backpropagation), unsupervised learning, reinforcement learning & evolutionary computation, and indirect search for short programs encoding deep and large networks.
Citer: (Schmidhuber, 2015)
FTag: Schmidhuber-2015
APA7: Schmidhuber, J. (2015). Deep learning in neural networks: An overview. Neural Networks, 61, 85–117. https://doi.org/10.1016/j.neunet.2014.09.003
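As an illustrative aside (not part of the entry itself): the survey's notion of a Credit Assignment Path can be made concrete in a few lines of NumPy. The sketch below, with all names and values hypothetical, trains a two-layer feedforward net by backpropagation; its CAPs have depth 2, since credit for the output error is assigned back through exactly two learnable weight matrices.

```python
import numpy as np

# Minimal sketch: a 2-layer feedforward net (FNN) trained by
# backpropagation (BP). Its credit assignment paths (CAPs) have
# depth 2: output error is propagated back through two learnable
# weight matrices, W2 and W1. All names and data are illustrative.

rng = np.random.default_rng(0)
X = rng.standard_normal((64, 3))               # 64 toy samples, 3 features
y = (X.sum(axis=1, keepdims=True) > 0) * 1.0   # toy binary targets

W1 = rng.standard_normal((3, 8)) * 0.1         # layer 1 weights
W2 = rng.standard_normal((8, 1)) * 0.1         # layer 2 weights
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(500):
    # Forward pass: each layer is one causal link in the CAP.
    h = np.tanh(X @ W1)                        # hidden activations
    p = sigmoid(h @ W2)                        # predicted probabilities

    # Backward pass: credit flows back along the CAP.
    d_out = (p - y) / len(X)                   # dLoss/dlogits (cross-entropy)
    dW2 = h.T @ d_out                          # credit assigned to W2
    d_h = (d_out @ W2.T) * (1.0 - h**2)        # propagate through tanh
    dW1 = X.T @ d_h                            # credit assigned to W1

    W2 -= lr * dW2
    W1 -= lr * dW1

print("final accuracy:", ((p > 0.5) == y).mean())
```

Adding layers lengthens the CAPs, which is precisely the survey's criterion for distinguishing Deep from Shallow Learners.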
Abbreviations:
AE: Autoencoder
AI: Artificial Intelligence
ANN: Artificial Neural Network
BFGS: Broyden–Fletcher–Goldfarb–Shanno
BNN: Biological Neural Network
BM: Boltzmann Machine
BP: Backpropagation
BRNN: Bi-directional Recurrent Neural Network
CAP: Credit Assignment Path
CEC: Constant Error Carousel
CFL: Context Free Language
CMA-ES: Covariance Matrix Adaptation ES
CNN: Convolutional Neural Network
CoSyNE: Cooperative Synapse NeuroEvolution
CSL: Context Sensitive Language
CTC: Connectionist Temporal Classification
DBN: Deep Belief Network
DCT: Discrete Cosine Transform
DL: Deep Learning
DP: Dynamic Programming
DS: Direct Policy Search
EA: Evolutionary Algorithm
EM: Expectation Maximization
ES: Evolution Strategy
FMS: Flat Minimum Search
FNN: Feedforward Neural Network
FSA: Finite State Automaton
GMDH: Group Method of Data Handling
GOFAI: Good Old-Fashioned AI
GP: Genetic Programming
GPU: Graphics Processing Unit
GPU-MPCNN: GPU-Based MPCNN
HMM: Hidden Markov Model
HRL: Hierarchical Reinforcement Learning
HTM: Hierarchical Temporal Memory
HMAX: Hierarchical Model "and X"
LSTM: Long Short-Term Memory (RNN)
MDL: Minimum Description Length
MDP: Markov Decision Process
MNIST: Modified National Institute of Standards and Technology Database
MP: Max-Pooling
MPCNN: Max-Pooling CNN
NE: NeuroEvolution
NEAT: NE of Augmenting Topologies
NES: Natural Evolution Strategies
NFQ: Neural Fitted Q-Learning
NN: Neural Network
OCR: Optical Character Recognition
PCC: Potential Causal Connection
PDCC: Potential Direct Causal Connection
PM: Predictability Minimization
POMDP: Partially Observable MDP
RAAM: Recursive Auto-Associative Memory
RBM: Restricted Boltzmann Machine
ReLU: Rectified Linear Unit
RL: Reinforcement Learning
RNN: Recurrent Neural Network
R-prop: Resilient Backpropagation
SL: Supervised Learning
SLIM NN: Self-Delimiting Neural Network
SOTA: Self-Organizing Tree Algorithm
SVM: Support Vector Machine
TDNN: Time-Delay Neural Network
TIMIT: TI/SRI/MIT Acoustic-Phonetic Continuous Speech Corpus
UL: Unsupervised Learning
WTA: Winner-Take-All