Music Generation using Time Distributed Dense Stateful Char-RNNs
Published in | 2022 IEEE 7th International Conference for Convergence in Technology (I2CT), pp. 1-5 |
---|---|
Main Authors | , , , |
Format | Conference Proceeding |
Language | English |
Published | IEEE, 07.04.2022 |
Summary | Sequence generation is a prominent research topic: given a sequence of inputs, the aim is to generate a similar sequence of outputs in a given context. Applications range from sentence autocompletion in email bodies to text suggestions for automatic replies. The same idea can be used to generate a sequence of musical notes with an LSTM/GRU-based architecture, where the model is trained on given sequences of musical notes. Music of a certain genre is essentially time-series data in the frequency domain, and these frequencies have a standard notation, which can serve as text for training any time-series model. Hence, in this paper we propose a Char-RNN based model that learns the patterns in each composition or raga and generates a new piece of music from them. The model must not simply copy the training sequence or emit a random note at each instant; it must grasp the patterns on which the given piece of music is based and create a similar but new piece of music. |
DOI | 10.1109/I2CT54291.2022.9824167 |
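The abstract treats musical notation as text for a char-level RNN: each note character becomes a training symbol, and the model predicts the next note at every timestep (the per-timestep prediction is what a time-distributed dense output layer produces). The data preparation this implies can be sketched as below; the toy note alphabet, corpus, and window length are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of char-RNN training-data preparation for music notation.
# The corpus of note characters (here sargam-like syllables S R G M P D N)
# and SEQ_LEN are hypothetical stand-ins for the paper's actual data.

corpus = "SRGMPDN" * 8                      # toy sequence of note characters
chars = sorted(set(corpus))                 # note vocabulary
char_to_idx = {c: i for i, c in enumerate(chars)}

SEQ_LEN = 4
inputs, targets = [], []
for i in range(len(corpus) - SEQ_LEN):
    # Input: a window of SEQ_LEN notes. Target: the same window shifted
    # by one step, so the model learns to predict the next note at every
    # timestep of the sequence.
    inputs.append([char_to_idx[c] for c in corpus[i:i + SEQ_LEN]])
    targets.append([char_to_idx[c] for c in corpus[i + 1:i + 1 + SEQ_LEN]])
```

A stateful LSTM with a `TimeDistributed(Dense(vocab_size, activation="softmax"))` head (in Keras terms) would then consume these windows batch by batch, carrying its hidden state across consecutive windows so long-range patterns in a raga are not cut off at window boundaries.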