Tradformer: A Transformer Model of Traditional Music Transcriptions
Luca Casini, Bob L. T. Sturm
Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
AI and Arts. Pages 4915-4920.
https://doi.org/10.24963/ijcai.2022/681
We explore the transformer neural network architecture for modeling music,
specifically Irish and Swedish traditional dance music.
Given the repetitive structures of these kinds of music, the transformer should match the hitherto most successful model, a vanilla long short-term memory network, with fewer parameters and less complexity.
We find that achieving good performance with the transformer is not straightforward:
careful consideration is needed of the sampling strategy,
of evaluating intermediate outputs in relation to engineering choices,
and finally of analyzing what the model learns.
We discuss these points with several illustrations,
providing reusable insights for engineering other music generation systems.
We also report the high performance of our final transformer model
in a competition of music generation systems
focused on a type of Swedish dance music.
Keywords:
Methods and resources: Machine learning, deep learning, neural models, reinforcement learning
Application domains: Music
Theory and philosophy of arts and creativity in AI systems: Evaluation of artistic or creative outputs, or of systems