Miguel Civit Masot
This thesis presents a comprehensive exploration of deep learning-based systems for music generation. The work focuses on improving the integration of music theory and incorporating emotion awareness into automatic music generation (AMG) through improved user-centered validation methodologies. It examines the state of the art in AMG systems, addressing key challenges related to style, dataset selection, and architecture. The study proposes a meta-methodology for evaluating AMG systems that combines objective and subjective user-based assessments, emphasizing the role of human emotion in music composition and generation. This methodology is extended through the creation and analysis of an AMG dataset designed to strengthen the theory-awareness of generation systems. The evaluation approach is further tested in different real-world contexts, with a particular focus on user interaction and the usability of AI-generated music systems. Furthermore, the thesis explores the potential of applying the proposed methodologies to AI-driven music-related devices, describing future directions for integrating AI into music composition, education, and performance environments. The findings highlight the growing relevance of emotion-aware music systems in creative processes, propose solutions to current problems in AI-based generation, and present new practical tools for evaluating and developing AI-based musical solutions.