Akten Grierson 2017.pol - guillaumedescoteauxisabelle/ma-biblio GitHub Wiki
page: 1 type: text-highlight created: 2020-10-06T01:57:02.516Z color: yellow Real-time interactive sequence generation and control with Recurrent Neural Network ensembles
page: 1 type: text-highlight created: 2020-10-06T01:58:01.974Z color: yellow Akten
page: 1 type: text-highlight created: 2020-10-06T01:58:19.867Z color: yellow Recurrent Neural Networks (RNN)
page: 1 type: text-highlight created: 2020-10-06T01:59:05.217Z color: yellow Long Short Term Memory (LSTM) RNNs are a popular and very successful method for learning and generating sequences
page: 1 type: text-highlight created: 2020-10-06T01:59:19.622Z color: #FF6900 current generative RNN techniques do not allow real-time interactive control of the sequence generation process, thus aren’t well suited for live creative expression.
page: 1 type: text-highlight created: 2020-10-06T02:00:37.717Z color: #9900EF We propose a method of real-time continuous control and ‘steering’ of sequence generation using an ensemble of RNNs and dynamically altering the mixture weights of the models.
page: 1 type: text-highlight created: 2020-10-06T02:00:46.971Z color: red We demonstrate the method using character based LSTM networks and a gestural interface allowing users to ‘conduct’ the generation of text.
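The ensemble-mixing idea highlighted above can be sketched in a few lines: each model emits a probability distribution over the next character, and a user-controlled weight vector blends them before sampling. This is a hypothetical illustration, not the authors' implementation; the stub "models" are fixed distributions, and `blend`, `sample`, and `VOCAB` are invented names.

```python
import numpy as np

VOCAB = list("ab")

def blend(distributions, mixture_weights):
    """Weighted average of per-model next-character distributions."""
    w = np.asarray(mixture_weights, dtype=float)
    w = w / w.sum()                      # normalise the user-supplied weights
    p = np.average(distributions, axis=0, weights=w)
    return p / p.sum()                   # guard against rounding drift

def sample(p, rng):
    """Draw one character from the blended distribution."""
    return VOCAB[rng.choice(len(VOCAB), p=p)]

# Two stub "models": one prefers 'a', the other prefers 'b'.
model_a = np.array([0.9, 0.1])
model_b = np.array([0.1, 0.9])

rng = np.random.default_rng(0)
# Steering: as the (e.g. gestural) control moves weight from model_a toward
# model_b, the generated stream drifts from 'a'-heavy to 'b'-heavy text.
for w_b in (0.0, 0.5, 1.0):
    p = blend([model_a, model_b], [1.0 - w_b, w_b])
    text = "".join(sample(p, rng) for _ in range(10))
    print(w_b, text)
```

Because the mixture weights enter only at sampling time, they can be changed continuously while generation runs, which is what makes this kind of ensemble suitable for real-time steering.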
page: 1 type: text-highlight created: 2020-10-06T02:01:24.645Z color: yellow Recurrent Neural Networks (RNN) are artificial neural networks with recurrent connections, allowing them to learn temporal regularities and model sequences.
page: 1 type: text-highlight created: 2020-10-06T02:01:44.209Z color: yellow Long Short Term Memory (LSTM) [ 16] is a recurrent architecture that overcomes the problem of gradients exponentially vanishing [ 15, 1], and allows RNNs to be trained many time-steps into the past, to learn more complex programs [ 21].
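A quick numeric illustration of the vanishing-gradient problem the highlight refers to (my own sketch, not from the paper): backpropagating through T time-steps in a plain RNN multiplies T recurrent Jacobians, so when their spectral radius is below 1 the gradient contribution shrinks exponentially with distance into the past. The radius value 0.9 here is an arbitrary assumption.

```python
# Assumed spectral radius of the recurrent Jacobian (illustrative value).
rho = 0.9

# Gradient contribution from T steps back scales roughly like rho**T.
for T in (1, 10, 100):
    print(T, rho ** T)
```

At T=100 the contribution is on the order of 1e-5 of its one-step size, which is why plain RNNs struggle to learn long-range dependencies and why LSTM's gated cell state, which keeps an additive path through time, helps.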
page: 5 type: text-highlight created: 2020-10-06T02:09:40.610Z color: red References
page: 1 type: text-highlight created: 2020-10-06T02:11:38.318Z color: #9900EF with increased compute power and large training sets, LSTMs and related architectures are proving successful not only in sequence classification [11, 14, 20, 12], but also in sequence generation in many domains such as music [6, 2, 19, 22], text [24, 23], handwriting [10], images [13], machine translation [25], speech synthesis [28] and even choreography [4]
page: 1 type: text-highlight created: 2020-10-06T02:11:50.700Z color: #FF6900 most current applications of sequence generation with RNNs are not a real-time, interactive process.
page: 1 type: text-highlight created: 2020-10-06T02:14:25.537Z color: #FF6900 still does not provide real-time continuous control in the manner required for the creation of expressive interfaces