GETMusic: Generating Any Music Tracks with a Unified Representation and Diffusion Framework


paper     code

Authors

^ Corresponding author.

Abstract

Symbolic music generation aims to create musical notes, which can help users compose music, such as generating target instrument tracks based on provided source tracks. In practical scenarios where there is a predefined ensemble of tracks and varied composition needs, an efficient and effective generative model that can generate any target tracks based on the other tracks becomes crucial. However, previous efforts have fallen short of addressing this need due to limitations in their music representations and models. In this paper, we introduce a framework known as GETMusic, with “GET” standing for “GEnerate music Tracks.” This framework encompasses a novel music representation, “GETScore,” and a diffusion model, “GETDiff.” GETScore represents musical notes as tokens and organizes them in a 2D structure, with tracks stacked vertically and progressing horizontally over time. At each training step, every track of a music piece is randomly designated as either a target or a source. Training involves two processes: in the forward process, target tracks are corrupted by masking their tokens, while source tracks remain the ground truth; in the denoising process, GETDiff is trained to predict the masked target tokens conditioned on the source tracks. Our proposed representation, coupled with the non-autoregressive generative model, empowers GETMusic to generate music with arbitrary source-target track combinations. Our experiments demonstrate that the versatile GETMusic outperforms prior works designed for specific composition tasks.
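To make the representation and training procedure above concrete, here is a minimal, illustrative sketch of a GETScore-style token grid and the per-track masking used in the forward and denoising processes. All names (`make_getscore`, `corrupt_targets`, `MASK_ID`, `PAD_ID`) and the token-layout details are assumptions made for illustration only, not the official GETMusic implementation.

```python
# Toy sketch of the GETScore layout and the masking-based training step
# described in the abstract. Token values and helper names are assumptions.
import numpy as np

PAD_ID = 0        # hypothetical padding token
MASK_ID = 1       # hypothetical [MASK] token used to corrupt target tracks
VOCAB_OFFSET = 2  # note tokens start above the special tokens

def make_getscore(num_tracks: int, num_steps: int, vocab_size: int,
                  rng: np.random.Generator) -> np.ndarray:
    """Build a toy GETScore: a 2D grid with one row per track (stacked
    vertically) and one column per time step (progressing horizontally),
    holding note tokens."""
    return rng.integers(VOCAB_OFFSET, vocab_size, size=(num_tracks, num_steps))

def corrupt_targets(score: np.ndarray, rng: np.random.Generator):
    """Forward process: randomly designate each track as source or target,
    mask every token of the target tracks, and keep source tracks intact."""
    num_tracks = score.shape[0]
    is_target = rng.random(num_tracks) < 0.5   # per-track source/target split
    corrupted = score.copy()
    corrupted[is_target, :] = MASK_ID          # corruption = masking
    return corrupted, is_target

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    score = make_getscore(num_tracks=6, num_steps=8, vocab_size=32, rng=rng)
    corrupted, is_target = corrupt_targets(score, rng)
    # Denoising objective: the model sees `corrupted` and is trained to
    # predict the original tokens at the masked (target-track) positions,
    # conditioned on the untouched source tracks.
    masked_positions = np.argwhere(corrupted == MASK_ID)
    print("target tracks:", np.flatnonzero(is_target))
    print("masked positions to predict:", len(masked_positions))
```

In the same spirit, inference can be viewed as the denoising process alone: the user-provided source tracks stay fixed in the grid while the masked target tracks are filled in by the model, which is what enables arbitrary source-target track combinations.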



1. Source: Lead Melody ➡️ Target: Bass, Drum, Guitar, Piano, and String

Thanks to the incorporation of RoPE (Rotary Position Embeddings), GETMusic extrapolates effectively at inference time without significant quality degradation, even in this first demo, which spans 72 bars.
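For readers unfamiliar with RoPE, the following is a minimal sketch (not the GETDiff code) of how rotary embeddings work: each pair of query/key channels is rotated by an angle proportional to the token position, so relative offsets are encoded implicitly. Because positions enter only through these rotations, the same computation applies unchanged to positions beyond the training length, which is what the extrapolation claim above relies on. The shapes and the `rope` helper are illustrative assumptions.

```python
import numpy as np

def rope(x: np.ndarray, positions: np.ndarray, base: float = 10000.0) -> np.ndarray:
    """Apply Rotary Position Embeddings to x of shape (seq_len, dim):
    rotate each consecutive channel pair by position-dependent angles."""
    seq_len, dim = x.shape
    half = dim // 2
    freqs = base ** (-np.arange(half) / half)      # per-pair rotation frequencies
    angles = positions[:, None] * freqs[None, :]   # (seq_len, half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, 0::2], x[:, 1::2]
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin             # standard 2D rotation
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

# Positions longer than those seen in training are handled by the exact
# same rotation formula, e.g. 96 positions with 64-dim attention heads:
q = np.random.default_rng(0).normal(size=(96, 64))
q_rot = rope(q, positions=np.arange(96))
print(q_rot.shape)
```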



2. Piano ➡️ Drum, Guitar, and String



3. Guitar ➡️ String



4. Lead Melody ➡️ Piano, String



5. Drum and Piano ➡️ Lead Melody and String



6. Generation from Scratch 1



7. Generation from Scratch 2



8. Zero-Shot Example 1



9. Zero-Shot Example 2



10. More than Pop Music: Fine-tuning with Other Genre Data

We fine-tune GETDiff on the JSB Chorales dataset for only 2,000 steps, after which it is capable of generating music in the style of Bach.



Here are more randomly picked samples:


Lead Melody ➡️ Accompaniments

Input Output


Lead Melody ➡️ Drum, Guitar, Piano

Input Output


Piano ➡️ Lead Melody

Input Output


Guitar, Piano, String ➡️ Lead Melody

Input Output


Bass, Drum ➡️ Lead Melody

Input Output


Bass ➡️ String

Input Output


String ➡️ Bass, Drum, Guitar, Piano

Input Output


Thanks for listening!! 🥁🎸🎹🎻🎤