ReLyMe: Improving Lyric-to-Melody Generation by Incorporating Lyric-Melody Relationships


Authors

* Equal contribution. ^ Corresponding author.

Abstract

Lyric-to-melody generation is one of the most important automatic music composition tasks. With the rapid development of deep learning, previous works have addressed this task with end-to-end neural network models. However, deep learning models struggle to capture the strict but subtle relationships between lyrics and melodies, which harms the harmony between lyrics and the generated melodies. In this paper, we propose ReLyMe (Relationships between Lyrics and Melodies), a method that leverages lyric-melody relationships from music theory to alleviate the dissonance between lyrics and melodies. Specifically, we first introduce several principles that lyrics and melodies should follow in terms of tone, rhythm, and structure relationships, according to musicians and composers. These principles are then integrated into neural-network-based lyric-to-melody models by adding corresponding constraints during the decoding process to improve the harmony between lyrics and melodies. We further design a series of objective and subjective metrics to evaluate the generated melodies. Experiments on both English and Chinese song datasets show the effectiveness of ReLyMe, demonstrating the superiority of leveraging principles (lyric-melody relationships) from the music domain for neural-network-based lyric-to-melody generation.
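The abstract describes integrating music-theory principles as constraints during decoding. A minimal sketch of how such constrained decoding could look is shown below: at each step, a rule-based penalty (here, a toy tone-contour rule for tonal lyrics) is subtracted from the model's log-probabilities before selecting the next pitch. All function names, rules, and weights are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch of rule-constrained decoding (NOT the official ReLyMe code).
# A hypothetical tone rule penalizes pitch movements that contradict the
# lyric syllable's tone direction; the penalty is combined with model scores.

import math

def tone_penalty(prev_pitch, pitch, lyric_tone):
    # Toy rule: a rising-tone syllable should not map to a falling pitch,
    # and vice versa. Returns 1.0 when the contour violates the tone.
    if lyric_tone == "rising" and pitch < prev_pitch:
        return 1.0
    if lyric_tone == "falling" and pitch > prev_pitch:
        return 1.0
    return 0.0

def constrained_step(log_probs, candidate_pitches, prev_pitch, lyric_tone, weight=0.5):
    """Pick the pitch maximizing (model log-prob) - weight * (rule penalty)."""
    best_pitch, best_score = None, -math.inf
    for pitch, lp in zip(candidate_pitches, log_probs):
        score = lp - weight * tone_penalty(prev_pitch, pitch, lyric_tone)
        if score > best_score:
            best_pitch, best_score = pitch, score
    return best_pitch

# Usage: the model slightly prefers MIDI pitch 60, but for a rising-tone
# syllable after pitch 61 the constraint steers decoding to the higher pitch 62.
print(constrained_step([-0.60, -0.65], [60, 62], prev_pitch=61, lyric_tone="rising"))
```

In a full system the same score-adjustment pattern would sit inside beam search, with separate penalty terms for the tone, rhythm, and structure relationships the paper introduces.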

Demo Video

Cases

Chinese Songs

Apply to SongMASS

Song 1

SongMASS
✨ ReLyMe

Song 2

SongMASS
✨ ReLyMe

Apply to TeleMelody

Song 1

TeleMelody
✨ ReLyMe

Song 2

TeleMelody
✨ ReLyMe

Ablation

ReLyMe
(w/o tone relationship)
ReLyMe
(w/o rhythm relationship)
ReLyMe
(w/o structure relationship)
Hard Constraints
Conditioned Training
Re-ranking

English Songs

Apply to SongMASS

Song 1

SongMASS
✨ ReLyMe

Song 2

SongMASS
✨ ReLyMe

Apply to TeleMelody

Song 1

TeleMelody
✨ ReLyMe

Song 2

TeleMelody
✨ ReLyMe