Deep Music Generation

Roland Nguyen - Senior, Mathematics and Computer Science

Chanho Lim - Senior, Mathematics and Computer Science


Abstract

Deep Music Generation is a deep learning project with two goals.

First, to generate polyphonic music using deep learning models that replicate, with variety, the style of composers such as Bach and Handel.

Second, to create metrics that evaluate the quality of the generated samples and the performance of the model.


This is the homepage of our Capstone project.

You can find our project poster here.

Three samples of our generated music can be found at this YouTube link.


A page displaying the preprocessing of our MIDI data can be found here.
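The sketch below illustrates one common way to preprocess MIDI into fixed-length piano-roll windows suitable for a sequence model. It assumes the `pretty_midi` library and illustrative window/hop sizes; the actual pipeline we used is documented on the linked preprocessing page.

```python
# Illustrative MIDI preprocessing sketch (piano-roll windows via pretty_midi).
# Window/hop sizes and file paths are assumptions, not our exact settings.
import numpy as np
import pretty_midi

def midi_to_piano_roll(path, fs=16):
    """Load a MIDI file and return a binary (128, T) piano roll sampled at `fs` frames/sec."""
    midi = pretty_midi.PrettyMIDI(path)
    roll = midi.get_piano_roll(fs=fs)      # (128 pitches, T frames) with velocities
    return (roll > 0).astype(np.float32)   # binarize to note on/off

def split_into_windows(roll, window=64, hop=32):
    """Cut a piano roll into overlapping fixed-length windows of shape (window, 128)."""
    windows = []
    for start in range(0, roll.shape[1] - window + 1, hop):
        windows.append(roll[:, start:start + window].T)
    return np.stack(windows) if windows else np.empty((0, window, 128), dtype=np.float32)

# Hypothetical usage:
# rolls = [midi_to_piano_roll(p) for p in ["bach_invention_1.mid"]]
# X = np.concatenate([split_into_windows(r) for r in rolls])
```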


A page displaying the training of the LSTM-RNN classifier model can be found here. The classifier was trained on preprocessed MIDI data from the following 8 composers: Bach, Beethoven, Chopin, Handel, Haydn, Hays, Thomas, and Webster, reaching a peak classification accuracy of 0.803.
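As a rough sketch, an LSTM-RNN composer classifier over piano-roll windows can be set up in Keras as shown below. The layer sizes, dropout rate, and optimizer here are illustrative assumptions rather than the exact configuration we trained; the training details are on the linked page.

```python
# Hedged sketch of an LSTM composer classifier in Keras.
# Hyperparameters are illustrative, not our exact training configuration.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_COMPOSERS = 8        # Bach, Beethoven, Chopin, Handel, Haydn, Hays, Thomas, Webster
WINDOW, PITCHES = 64, 128  # assumed piano-roll window shape

def build_classifier():
    model = models.Sequential([
        layers.Input(shape=(WINDOW, PITCHES)),   # one piano-roll window per sample
        layers.LSTM(256, return_sequences=True),
        layers.LSTM(128),
        layers.Dropout(0.3),
        layers.Dense(NUM_COMPOSERS, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Hypothetical usage with integer composer labels y_train in [0, 7]:
# model = build_classifier()
# model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=30)
```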


A page displaying the Inception score analysis of our classifier model can be found here. We achieved an Inception score of approximately 5.8, out of a maximum possible score of 8 (the number of composer classes).
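The Inception score is computed as exp(E_x[KL(p(y|x) || p(y))]), where p(y|x) is the classifier's softmax output on a generated sample and p(y) is the marginal over all samples; its maximum equals the number of classes, which is why 8 is the ceiling here. Below is a minimal sketch of that computation, assuming `probs` holds our classifier's softmax outputs for the generated windows.

```python
# Minimal Inception-score sketch using the composer classifier's softmax outputs.
import numpy as np

def inception_score(probs, eps=1e-12):
    """probs: (N, C) softmax outputs for N generated samples over C classes.
    Returns exp(mean_x KL(p(y|x) || p(y)))."""
    probs = np.clip(probs, eps, 1.0)
    marginal = probs.mean(axis=0, keepdims=True)                       # p(y)
    kl = np.sum(probs * (np.log(probs) - np.log(marginal)), axis=1)    # KL per sample
    return float(np.exp(kl.mean()))

# Hypothetical usage:
# probs = model.predict(generated_windows)
# print(inception_score(probs))
```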


You can view the repository here.