00 Abstract - PAI-yoonsung/lstm-paper GitHub Wiki

Abstract

Long Short-Term Memory Recurrent Neural Networks (LSTM-RNN) are among the most powerful dynamic classifiers publicly known. The network architecture and the related learning algorithms are reasonably well documented, making it possible to get a good idea of how they work.

This paper will shed more light on how LSTM-RNNs evolved and why they work impressively well, focusing on the early, ground-breaking publications.

We significantly improved the documentation and fixed a number of errors and inconsistencies that had accumulated in previous publications. To support understanding, we also revised and unified the notation used.