References of "Osmalsky, Julien"
Peer Reviewed
Enhancing Cover Song Identification with Hierarchical Rank Aggregation
Osmalsky, Julien ULg; Van Droogenbroeck, Marc ULg; Embrechts, Jean-Jacques ULg

in Proceedings of the 17th International Society for Music Information Retrieval Conference (2016, August)

Abstract: Cover song identification involves calculating pairwise similarities between a query audio track and a database of reference tracks. While most authors rely exclusively on chroma features, recent work tends to demonstrate that combining similarity estimators based on multiple audio features increases performance. We improve this approach by using a hierarchical rank aggregation method for combining estimators based on different features. More precisely, we first aggregate estimators based on global features such as the tempo, the duration, the loudness, the beats, and the average chroma vectors. Then, we aggregate the resulting composite estimator with four popular state-of-the-art methods based on chromas as well as timbre sequences. We further introduce a refinement step for the rank aggregation called “local Kemenization” and quantify its benefit for cover song identification. The performance of our method is evaluated on the Second Hand Song dataset. Our experiments show a significant improvement in performance, up to an increase of more than 200% in the number of queries identified in the Top-1, compared to previous results.
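The “local Kemenization” refinement can be sketched in a few lines — a minimal illustration, not the authors' implementation: the aggregated ranking is repeatedly scanned, and two adjacent items are swapped whenever a majority of the input rankings prefer the swapped order (function names are my own):

```python
def prefers(ranking, a, b):
    """True if item `a` ranks above item `b` in `ranking` (a list of ids, best first)."""
    return ranking.index(a) < ranking.index(b)

def local_kemenization(aggregate, rankings):
    """Refine an aggregated ranking: swap adjacent items whenever a strict
    majority of the input rankings disagree with their current order, and
    repeat until no such swap remains (a locally Kemeny-optimal list)."""
    result = list(aggregate)
    changed = True
    while changed:
        changed = False
        for i in range(len(result) - 1):
            a, b = result[i], result[i + 1]
            votes_for_swap = sum(prefers(r, b, a) for r in rankings)
            if votes_for_swap > len(rankings) / 2:
                result[i], result[i + 1] = b, a
                changed = True
    return result
```

For example, with input rankings `[1,2,3]`, `[2,1,3]`, `[2,1,3]` and aggregate `[1,2,3]`, two of three voters prefer 2 above 1, so the refinement swaps them.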

Effects of acoustic degradations on cover song identification systems
Osmalsky, Julien ULg; Embrechts, Jean-Jacques ULg

in International Congress on Acoustics: ICA 2016 (2016)

Cover song identification systems deal with the problem of identifying different versions of an audio query in a reference database. Such systems involve the computation of pairwise similarity scores between a query and all the tracks of a database. The usual way of evaluating such systems is to use a set of audio queries, extract features from them, and compare them to other tracks in the database to report diverse statistics. Databases in such research are usually designed in a controlled environment, with relatively clean audio signals. However, in real-life conditions, audio signals can be seriously modified by acoustic degradations. For example, depending on the context, audio can be modified by room reverberation, or by added hand-clapping noise in a live concert, etc. In this paper, we study how environmental audio degradations affect the performance of several state-of-the-art cover song recognition systems. In particular, we study how reverberation, ambient noise and distortion affect the performance of the systems. We further investigate the effect of recording or playing music through a smartphone on recognition performance. To achieve this, we use an audio degradation toolbox to degrade the set of queries to be evaluated. We propose a comparison of the performance achieved with cover song identification systems based on several harmonic and timbre features under ideal and noisy conditions. We demonstrate that the performance depends strongly on the degradation method applied to the source, and quantify the performance using multiple statistics.
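Two of the degradation families studied (ambient noise, distortion) can be approximated in a few lines of NumPy — a hedged sketch, not the actual audio degradation toolbox used in the paper; the function names are illustrative:

```python
import numpy as np

def add_noise(signal, snr_db):
    """Mix white Gaussian noise into `signal` at the requested
    signal-to-noise ratio in dB (fixed seed for reproducibility)."""
    rng = np.random.default_rng(0)
    signal_power = np.mean(signal ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=signal.shape)
    return signal + noise

def distort(signal, gain=10.0, limit=0.8):
    """Crude overdrive: amplify the signal, then hard-clip the peaks
    to simulate amplifier distortion."""
    return np.clip(gain * signal, -limit, limit)
```

Reverberation would similarly be simulated by convolving the signal with a room impulse response.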

Peer Reviewed
Combining Features for Cover Song Identification
Osmalsky, Julien ULg; Embrechts, Jean-Jacques ULg; Foster, Peter et al

in 16th International Society for Music Information Retrieval Conference (2015, October)

In this paper, we evaluate a set of methods for combining features for cover song identification. We first create multiple classifiers based on global tempo, duration, loudness, beats and chroma average features, training a random forest for each feature. Subsequently, we evaluate standard combination rules for merging these single classifiers into a composite classifier based on global features. We further obtain two higher-level classifiers based on chroma features: one based on comparing histograms of quantized chroma features, and a second one based on computing cross-correlations between sequences of chroma features, to account for temporal information. For combining the latter chroma-based classifiers with the composite classifier based on global features, we use standard rank aggregation methods adapted from the information retrieval literature. We evaluate performance with the Second Hand Song dataset, where we quantify performance using multiple statistics. We observe that each combination rule outperforms single methods in terms of the total number of identified queries. Experiments with rank aggregation methods show an increase of up to 23.5% in the number of identified queries, compared to single classifiers.
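One classic rank aggregation method from the information retrieval literature is the Borda count; the sketch below (my own naming, not necessarily the exact rule used in the paper) merges several rankings by summing positional points:

```python
from collections import defaultdict

def borda(rankings):
    """Aggregate several rankings (lists of item ids, best first) by Borda
    count: in a ranking of n items, first place earns n-1 points, last
    place 0. Items are returned sorted by total points, best first."""
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, item in enumerate(ranking):
            scores[item] += n - 1 - position
    return sorted(scores, key=scores.get, reverse=True)
```

With rankings `[1,2,3]`, `[1,3,2]`, `[2,1,3]`, item 1 earns 5 points, item 2 earns 3, item 3 earns 1, so the aggregate is `[1, 2, 3]`.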

Cover Songs Retrieval and Identification
Osmalsky, Julien ULg

Conference (2015, February 24)

Cover songs retrieval is a MIR task that has been widely studied in recent years. The task, in its general sense, consists in retrieving covers for a given audio query. In this PhD, we focus on the more specific task of cover songs *identification*. Given an unknown cover song query, we want to find relevant information about the original song, such as the title, the artist, etc. Applications include live music recognition, plagiarism detection, query by example, etc. Our approach is based on a pruning algorithm, whose role is to discard as many tracks as possible, as quickly as possible, from a search database in order to keep a smaller subset that should contain the tracks relevant to the query. We introduce the concept of *rejectors* for the pruning algorithm, which are single or combined classifiers that use multiple audio features to select relevant tracks.
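The rejector idea can be sketched as a cascade of filters over the database — an illustrative toy, with a hypothetical tempo-based rejector; the actual rejectors are learned classifiers over audio features:

```python
def prune(database, query, rejectors):
    """Pass the database through a cascade of rejectors: each rejector
    discards the tracks it deems incompatible with the query, so the
    candidate set shrinks at every stage."""
    candidates = list(database)
    for rejector in rejectors:
        candidates = [track for track in candidates
                      if not rejector(query, track)]
    return candidates

# Hypothetical rejector: discard tracks whose global tempo estimate
# differs from the query's by more than 5 BPM.
def tempo_rejector(query, track):
    return abs(query["tempo"] - track["tempo"]) > 5.0
```

In practice several such rejectors, each cheap to evaluate, are chained so that expensive similarity measures only run on the surviving subset.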

Peer Reviewed
Performances of low-level audio classifiers for large-scale music similarity
Osmalsky, Julien ULg; Van Droogenbroeck, Marc ULg; Embrechts, Jean-Jacques ULg

in International Conference on Systems, Signals and Image Processing (2014, May)

This paper proposes a survey of the performances of binary classifiers based on low-level audio features, for music similarity in large-scale databases. Various low-level descriptors are used individually and then combined using several fusion schemes in a content-based audio retrieval system. We show the performances of the classifiers in terms of pruning and loss, and we demonstrate that some combination schemes achieve better performance at a minimal computational cost.

Peer Reviewed
Efficient database pruning for large-scale cover song recognition
Osmalsky, Julien ULg; Pierard, Sébastien ULg; Van Droogenbroeck, Marc ULg et al

in International Conference on Acoustics, Speech, and Signal Processing (ICASSP) (2013, May)

This paper focuses on cover song recognition over a large dataset, potentially containing millions of songs. At this time, the problem of cover song recognition is still challenging and only a few methods have been proposed for large-scale databases. We present an efficient method for quickly extracting a small subset from a large database in which a correspondence to an audio query should be found. We make use of fast rejectors based on independent audio features. Our method mixes independent rejectors together to build composite ones. We evaluate our system with the Million Song Dataset and we present composite rejectors offering a good trade-off between the percentage of pruning and the percentage of loss.
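The pruning/loss trade-off can be computed as follows — a minimal sketch, under the assumption that "pruning" is the fraction of the database discarded and "loss" the fraction of relevant tracks (true covers) discarded along with it:

```python
def pruning_and_loss(database_size, kept, relevant):
    """Return (pruning, loss) for one query.

    pruning: fraction of the database that was discarded.
    loss:    fraction of relevant tracks that were wrongly discarded.
    A good rejector maximizes pruning while keeping loss near zero."""
    kept, relevant = set(kept), set(relevant)
    pruning = 1.0 - len(kept) / database_size
    loss = len(relevant - kept) / len(relevant)
    return pruning, loss
```

For a database of 10 tracks where a rejector keeps tracks {0, 1, 2} and the true covers are {1, 9}, pruning is 0.7 but loss is 0.5, since cover 9 was discarded.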

Peer Reviewed
Neural networks for musical chords recognition
Osmalsky, Julien ULg; Embrechts, Jean-Jacques ULg; Van Droogenbroeck, Marc ULg et al

in Journées d'informatique musicale (2012, May)

In this paper, we consider the challenging problem of music recognition and present an effective machine learning based method using a feed-forward neural network for chord recognition. The method uses the known feature vector for automatic chord recognition called the Pitch Class Profile (PCP). Although the PCP vector only provides attributes corresponding to 12 semitone values, we show that it is adequate for chord recognition. Part of our work also relates to the design of a database of chords. Our database is primarily designed for chords typical of Western European music. In particular, we have built a large dataset filled with recorded guitar chords under different acquisition conditions (instruments, microphones, etc.), but also with samples obtained with other instruments. Our experiments establish a twofold result: (1) the PCP is well suited for describing chords in a machine learning context, and (2) the algorithm is also capable of recognizing chords played on other instruments, even ones unseen during the training phase.
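A Pitch Class Profile folds spectral energy onto the 12 semitone classes of the equal-tempered scale. The NumPy sketch below handles a single frame, with pitch class 0 anchored at A4 = 440 Hz — details such as frequency range and tuning reference are assumptions and may differ from the paper's implementation:

```python
import numpy as np

def pcp(frame, sample_rate=44100, ref_freq=440.0):
    """Pitch Class Profile of one audio frame: accumulate magnitude-spectrum
    energy per semitone class (class 0 = A), then L2-normalize."""
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    audible = (freqs >= 27.5) & (freqs <= 4200.0)  # roughly the piano range
    classes = np.round(12.0 * np.log2(freqs[audible] / ref_freq)).astype(int) % 12
    profile = np.zeros(12)
    np.add.at(profile, classes, spectrum[audible] ** 2)  # energy per class
    norm = np.linalg.norm(profile)
    return profile / norm if norm > 0 else profile
```

A pure 440 Hz sine wave therefore yields a profile dominated by class 0; the resulting 12-dimensional vector is what the paper's feed-forward network consumes as input.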
