Musical documents may contain heterogeneous information such as music symbols, text, staff lines, ornaments, annotations, and editorial data. Before any attempt at automatically recognizing the information on scores, it is usually necessary to detect each constituent layer of information and classify it into a distinct category. The greatest obstacle to this classification process is the high heterogeneity among music collections, which makes it difficult to propose methods that generalize to a broad range of sources. In this paper we propose a novel machine learning framework that extracts the different layers within musical documents by categorizing the image at the pixel level. The main advantage of our approach is that it can be applied regardless of the type of document provided, as long as training data is available. We illustrate some of the capabilities of the framework with examples of tasks that are frequently performed on images of musical documents, such as binarization, staff-line removal, symbol isolation, and complete layout analysis, all of which are tasks for which our approach has shown promising performance. We believe our framework will enable the development of generalizable and scalable automatic music recognition systems, thus facilitating the creation of large-scale browsable and searchable repositories of music documents.
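To make the idea of pixel-level categorization concrete, the sketch below shows one plausible realization: a small convolutional network that classifies the center pixel of each image patch into one of several layers. This is a minimal illustration only, not the authors' implementation; the patch size, network architecture, and layer labels (background, staff line, symbol, text) are assumptions chosen for the example.

```python
# A minimal sketch (not the paper's released code) of patch-based
# pixel classification in PyTorch. Layer categories, patch size,
# and architecture are illustrative assumptions.
import torch
import torch.nn as nn

LAYERS = ["background", "staff_line", "symbol", "text"]  # assumed categories

class PixelClassifier(nn.Module):
    """Classifies the center pixel of a grayscale patch into one layer."""
    def __init__(self, n_classes: int = len(LAYERS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # pool to 1x1 regardless of patch size
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, patches: torch.Tensor) -> torch.Tensor:
        # patches: (batch, 1, patch_size, patch_size)
        x = self.features(patches).flatten(1)
        return self.classifier(x)  # per-layer logits for each center pixel

# Usage: slide a window over the score image and classify every pixel.
model = PixelClassifier()
patches = torch.rand(8, 1, 25, 25)     # 8 example patches from a score image
pred = model(patches).argmax(dim=1)    # predicted layer index per center pixel
print([LAYERS[i] for i in pred.tolist()])
```

Under this formulation, tasks such as binarization or staff-line removal reduce to keeping or discarding pixels assigned to a given layer, which is why a single trained classifier can serve all of them.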