Automated soundtrack generation for fiction books backed by Lövheim's cube emotional model
pp. 161-168
Abstract
One of the main tasks of any work of art is to transfer the emotion conceived by the author to its recipient. When several modalities are involved, a synergistic effect occurs, making the target emotional state more likely to be reached. Reading engages mostly visual perception; nevertheless, it can be supplemented with an audio modality via a soundtrack of specially selected music that matches the emotional state of a text fragment. As the base model for representing emotional state, we selected the physiologically motivated Lövheim cube model, which covers 8 emotional states instead of the 2 (positive and negative) usually used in sentiment analysis. This article describes the concept of selecting music to match the "mood" of a text extract by mapping textual emotion labels to tags in the Last.fm API, fetching music data to play, and experimentally validating this approach.
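The mapping step described in the abstract can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the choice of Last.fm tags for each of the eight Lövheim-cube emotions is an assumption, and the snippet only builds the `tag.gettoptracks` request URL rather than performing a network call.

```python
# Hypothetical sketch: map the eight Lövheim-cube emotions to Last.fm tags
# and build a tag.gettoptracks request URL. The tag choices are illustrative
# assumptions, not the mapping used in the paper.
from urllib.parse import urlencode

LOVHEIM_TO_LASTFM_TAG = {
    "enjoyment/joy": "happy",
    "interest/excitement": "upbeat",
    "surprise": "quirky",
    "fear/terror": "dark ambient",
    "anger/rage": "aggressive",
    "distress/anguish": "melancholic",
    "contempt/disgust": "industrial",
    "shame/humiliation": "sad",
}

API_ROOT = "https://ws.audioscrobbler.com/2.0/"

def build_track_request(emotion: str, api_key: str) -> str:
    """Return a Last.fm tag.gettoptracks URL for a fragment's emotion label."""
    tag = LOVHEIM_TO_LASTFM_TAG[emotion]
    params = {
        "method": "tag.gettoptracks",
        "tag": tag,
        "api_key": api_key,
        "format": "json",
        "limit": 5,
    }
    return API_ROOT + "?" + urlencode(params)

print(build_track_request("fear/terror", "YOUR_API_KEY"))
```

In a full pipeline, the JSON response would be parsed for playable track URLs and queued while the corresponding text fragment is displayed.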
Publication details
Published in:
Eismont Polina, Mitrenina Olga, Pereltsvaig Asya (eds.) (2019) Language, music and computing: second international workshop, LMAC 2017, St. Petersburg, Russia, April 17–19, 2017, revised selected papers. Dordrecht, Springer.
Pages: 161–168
DOI: 10.1007/978-3-030-05594-3_13
Reference:
Kalinin Alexander, Kolmogorova Anastasia (2019) "Automated soundtrack generation for fiction books backed by Lövheim's cube emotional model", In: P. Eismont, O. Mitrenina & A. Pereltsvaig (eds.), Language, music and computing, Dordrecht, Springer, 161–168.