Author (http://research.google.com/pubs/author39086.html) of the paper here. I'm amused this is on Hacker News. The goal was to learn very long-timescale limit cycle behavior in a recurrent neural network. The chord changes are separated by many intervening melodic events (notes). As it turns out, even LSTM is pretty fragile when it comes to this. One problem is stability: if the network gets too perturbed, it can move into a region of state space from which it never recovers. I'm not all that proud of the specific improvisations from that network, but I did enjoy learning what's possible and impossible in the space. I think now, with new ways to train larger networks on more data, it's time to revisit this challenge.
Edit: Formatting. I clearly don't post much on HN.
Hi Douglas,
I just finished college and am quite interested in RNNs, fascinated by their capability and potential. Should I go to graduate school to study them, or can I play with them as a hobby? Do you have any suggestions?
I think you could play around as a hobby. You might try Theano as a place to start (for LSTM: http://deeplearning.net/tutorial/lstm.html). If you become passionate about neural networks, you might find yourself in grad school simply because that's a great place for diving in more deeply. It's really, really helpful to know machine learning. Andrew Ng's Coursera course is a great place to start: https://www.coursera.org/course/ml
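For a sense of what the tutorial's LSTM actually computes, here is a minimal sketch of a single LSTM forward step in plain NumPy. The gate ordering and weight shapes are my own illustrative conventions, not the Theano tutorial's:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step.

    x: input vector (D,); h_prev, c_prev: previous hidden/cell state (H,).
    W: (4H, D), U: (4H, H), b: (4H,) -- the four gates stacked as
    [input, forget, candidate, output] (an arbitrary but common layout).
    """
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[0*H:1*H])        # input gate: how much new info to write
    f = sigmoid(z[1*H:2*H])        # forget gate: how much old memory to keep
    g = np.tanh(z[2*H:3*H])        # candidate cell update
    o = sigmoid(z[3*H:4*H])        # output gate: how much state to expose
    c = f * c_prev + i * g         # new cell state (the long-term memory)
    h = o * np.tanh(c)             # new hidden state
    return h, c
```

The forget gate is what lets the cell carry information across the many melodic events between chord changes; when it saturates near 1, the cell state persists almost unchanged over long spans.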
Really fascinating. I wonder if distinguishing between a motif and a random set of notes would help provide structure here. So, the model would decide "I'm going to build a motif and save it for variation later" for 4 bars, then could decide to play randomly in the turnaround. On the next pass through, it applies variations to the pre-established motif?
I think there's something right about that idea. It seems, to me at least, that this idea of storing and re-using motifs with variation is at the heart of improvisation. (Author of paper.)
It is indeed - I'd suggest the title be modified to add the year, as it provides context for how long LSTMs had been established before their recent popularity boom.
> The goal of these experiments was to see if LSTM could learn a fixed chord structure while in parallel learning elements of a varying melody structure. It was easier to stick with a basic melody. Note that every 12-bar segment is unique; however, because only one or two bars are changed at a time, you may have to listen for a while to hear differences. We are currently working on a much more interesting set of training melodies and chords.
"We are currently working on a much more interesting set of training melodies and chords." More like "my postdoc ended in Switzerland and I started a faculty job at University of Montreal (LISA lab) and never had time to get back to LSTM and music composition." Sigh.