References of "Haesbroeck, Gentiane"
Detection of influential observations on the error rate based on the generalized k-means clustering procedure
Ruwet, Christel ULg; Haesbroeck, Gentiane ULg

Conference (2009, October 14)


Cluster analysis may be performed when one wishes to group similar objects into a given number of clusters. Several algorithms are available in order to construct these clusters. In this talk, focus will be on the generalized k-means algorithm, while the data of interest are assumed to come from an underlying population consisting of a mixture of two groups. Among the outputs of this clustering technique, a classification rule is provided in order to classify the objects into one of the clusters. When classification is the main objective of the statistical analysis, performance is often measured by means of an error rate ER(F, Fm), where F is the distribution of the training sample used to set up the classification rule and Fm (the model distribution) is the distribution under which the quality of the rule is assessed (via a test sample). Under contamination, one has to replace the distribution F of the training sample by a contaminated one, F(eps) say (where eps corresponds to the fraction of contamination). In that case, the error rate will be corrupted since it relies on a contaminated rule, while the test sample may still be considered as being distributed according to the model distribution. To measure the robustness of classification based on this clustering procedure, influence functions of the error rate may be computed. The idea has already been exploited by Croux et al. (2008) in the context of linear and logistic discrimination. In this setup, the contaminated distribution takes the form F(eps) = (1 - eps)*Fm + eps*Dx, where Dx is the Dirac distribution putting all its mass at x. After studying the influence function of the error rate of the generalized k-means procedure, which depends on the influence functions of the generalized k-means centers derived by Garcia-Escudero and Gordaliza (1999), a diagnostic tool based on its value will be presented.
The aim is to detect observations in the training sample which can be influential for the error rate.
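The first-order analysis described in this abstract can be mimicked numerically: fit 2-means on a clean training sample, add a point mass at x with weight eps = 1/(n+1), and difference the resulting error rates. The sketch below is illustrative only (it is not code from the paper) and assumes scikit-learn's KMeans for the clustering step; the model distribution is a balanced mixture of N(-2, 1) and N(2, 1) chosen for the demo.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def error_rate(centers, X_test, y_test):
    # nearest-centre classification rule; take the better of the two labellings
    pred = (np.abs(X_test - centers[0]) > np.abs(X_test - centers[1])).astype(int)
    e = np.mean(pred != y_test)
    return min(e, 1 - e)

# model distribution Fm: balanced mixture of N(-2, 1) and N(2, 1)
n = 200
y_train = rng.integers(0, 2, n)
X_train = rng.normal(np.where(y_train == 0, -2.0, 2.0), 1.0)
y_test = rng.integers(0, 2, 20000)
X_test = rng.normal(np.where(y_test == 0, -2.0, 2.0), 1.0)

def centers_of(X):
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X.reshape(-1, 1))
    return np.sort(km.cluster_centers_.ravel())

er_clean = error_rate(centers_of(X_train), X_test, y_test)

# empirical influence of a point mass at x: a finite-difference version of
# the influence function of the error rate, with eps = 1/(n+1)
def empirical_influence(x):
    X_eps = np.append(X_train, x)
    return (n + 1) * (error_rate(centers_of(X_eps), X_test, y_test) - er_clean)

print(er_clean, empirical_influence(10.0))
```

Large absolute values of this finite-difference quantity flag training observations that are influential for the error rate, which is the diagnostic use the abstract suggests.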

Peer Reviewed
Outlier detection with the minimum covariance determinant estimator in practice
Fauconnier, Cécile ULg; Haesbroeck, Gentiane ULg

in Statistical Methodology (2009), 6(4), 363-379


Robust statistics has slowly become familiar to all practitioners. Books entirely devoted to the subject are without any doubt responsible for the increased practice of robust statistics in all fields of application. Even classical books often have at least one chapter (or parts of chapters) which develops robust methodology. The improvement of computing power has also contributed to the development of an ever wider range of available robust procedures. However, this success story is now at risk of being reversed: non-specialists interested in applying robust methodology are faced with a large set of (supposedly equivalent) methods and with the over-sophistication of some of them. Which method should one use? How should the (numerous) parameters be optimally tuned? These questions are not so easy to answer for non-specialists! One could then argue that default procedures are available in most statistical software packages (S-Plus, R, SAS, Matlab, ...). However, using the detection of outliers in multivariate data as an illustration, it is shown that, on the one hand, it is not obvious that one would feel confident with the output of default procedures, and that, on the other hand, trying to understand thoroughly the tuning parameters involved in the procedures might require some extensive research. This is not conceivable when trying to compete with the classical methodology which (while clearly unreliable) is so straightforward. The aim of the paper is to help practitioners willing to detect outliers in a multivariate data set in a reliable way. The chosen methodology is the Minimum Covariance Determinant estimator, which is widely available and intuitively appealing.
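For readers who want the practical recipe this kind of paper advocates, a minimal MCD-based outlier screen looks as follows. This is an illustrative sketch assuming scikit-learn's MinCovDet implementation (not code from the paper); the `support_fraction` value and the 0.975 chi-square cutoff are exactly the kind of tuning choices the paper discusses.

```python
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(1)
p = 3
X = rng.multivariate_normal(np.zeros(p), np.eye(p), size=200)
X[:10] += 8.0  # shift 10 rows far away: the planted outliers

# MCD fit; support_fraction is the main tuning parameter
mcd = MinCovDet(support_fraction=0.75, random_state=0).fit(X)
d2 = mcd.mahalanobis(X)              # squared robust distances
cutoff = chi2.ppf(0.975, df=p)       # usual chi-square flagging rule
outliers = np.flatnonzero(d2 > cutoff)
print(len(outliers))
```

Observations whose squared robust distance exceeds the chi-square quantile are flagged; with a classical (non-robust) covariance the planted outliers would mask each other.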

Impact of contamination on empirical and theoretical error
Ruwet, Christel ULg; Haesbroeck, Gentiane ULg

Conference (2009, June 18)


Classification analysis allows one to group similar objects into a given number of groups by means of a classification rule. Many classification procedures are available: linear discrimination, logistic discrimination, etc. Focus in this poster will be on classification resulting from a clustering analysis. Indeed, among the outputs of classical clustering techniques, a classification rule is provided in order to classify the objects into one of the clusters. More precisely, let F denote the underlying distribution and assume that the generalized k-means algorithm with penalty function is used to construct the k clusters C1(F), . . . , Ck(F) with centers T1(F), . . . , Tk(F). When one feels that k true groups exist among the data, classification might be the main objective of the statistical analysis. Performance of a particular classification technique can be measured by means of an error rate. Depending on the availability of data, two types of error rates may be computed: a theoretical one and a more empirical one. In the first case, the rule is estimated on a training sample with distribution F while the evaluation of the classification performance may be done through a test sample distributed according to a model distribution of interest, Fm say. In the second case, the same data are used to set up the rule and to evaluate the performance. Under contamination, one has to replace the distribution F of the training sample by a contaminated one, F(eps) say (where eps corresponds to the fraction of contamination). In that case, the theoretical error rate will be corrupted since it relies on a contaminated rule, but it may still consider a test sample distributed according to the model distribution. The empirical error rate will be affected twice: via the rule and also via the sample used for the evaluation of the classification performance. To measure the robustness of classification based on clustering, influence functions of the error rate may be computed.
The idea has already been exploited by Croux et al. (2008) in the context of linear and logistic discrimination. In the computation of influence functions, the contaminated distribution takes the form F(eps) = (1 - eps)*Fm + eps*Dx, where Dx is the Dirac distribution putting all its mass at x. It is interesting to note that the impact of the point mass x may be positive, i.e. may decrease the error rate, when the data at hand are used to evaluate the error.

Influence function of the error rate of classification based on clustering
Ruwet, Christel ULg; Haesbroeck, Gentiane ULg

Conference (2009, May 19)


Cluster analysis may be performed when one wishes to group similar objects into a given number of clusters. Several algorithms are available in order to construct these clusters. In this talk, focus will be on two particular cases of the generalized k-means algorithm: the classical k-means procedure as well as the k-medoids algorithm, while the data of interest are assumed to come from an underlying population consisting of a mixture of two groups. Among the outputs of these clustering techniques, a classification rule is provided in order to classify the objects into one of the clusters. When classification is the main objective of the statistical analysis, performance is often measured by means of an error rate. Two types of error rates can be computed: a theoretical one and a more empirical one. The first one can be written as ER(F, Fm), where F is the distribution of the training sample used to set up the classification rule and Fm (the model distribution) is the distribution under which the quality of the rule is assessed (via a test sample). The empirical error rate corresponds to ER(F, F), meaning that the classification rule is tested on the same sample as the one used to set up the rule. This talk will present the results concerning the theoretical error rate. In case there are some outliers in the data, the classification rule may be corrupted. Even if it is evaluated at the model distribution, the theoretical error rate may then be contaminated. To measure the robustness of classification based on clustering, influence functions have been computed. Results similar to those derived by Croux et al. (2008) in discriminant analysis were observed. More specifically, under optimality (which happens when the model distribution is FN = 0.5 N(μ1, σ) + 0.5 N(μ2, σ); Qiu and Tamhane, 2007), the contaminated error rate can never be smaller than the optimal value, resulting in a first order influence function identically equal to 0.
Second order influence functions then need to be computed. When optimality does not hold, the first order influence function of the theoretical error rate no longer vanishes and shows that contamination may improve the error rate achieved under the non-optimal model. The first and, when required, second order influence functions of the theoretical error rate are useful in their own right to compare the robustness of the 2-means and 2-medoids classification procedures. They also have other applications. For example, they may be used to derive diagnostic tools in order to detect observations having an unduly large influence on the error rate. Also, under optimality, the second order influence function of the theoretical error rate can yield asymptotic relative classification efficiencies.
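The optimality statement can be checked numerically in the univariate case: at a model of the form 0.5 N(-δ, 1) + 0.5 N(δ, 1), the first-order influence of a point mass on the theoretical 2-means error rate vanishes and the second-order term is nonnegative. The following sketch (an illustration under these assumptions, not code from the talk) computes the population 2-means split point under F(eps) = (1 - eps)*Fm + eps*Dx by fixed-point iteration and differences the error rates.

```python
import numpy as np
from scipy.stats import norm

delta = 2.0  # group means at ±delta, unit sd: the optimal model setting

def split_point(eps, x, iters=300):
    """Population 2-means fixed point under F_eps = (1-eps)*Fm + eps*Dirac(x):
    alternate 'assign to nearest centre' and 'centre = conditional mean'."""
    m = 0.0
    w = 0.5 * (1.0 - eps)
    for _ in range(iters):
        a1, a2 = m + delta, m - delta          # standardised cut per component
        mass_lo = w * (norm.cdf(a1) + norm.cdf(a2)) + eps * (x <= m)
        ex_lo = (w * (-delta * norm.cdf(a1) - norm.pdf(a1)
                      + delta * norm.cdf(a2) - norm.pdf(a2))
                 + eps * x * (x <= m))         # E[X 1(X <= m)] under F_eps
        t1 = ex_lo / mass_lo                   # lower centre
        t2 = (eps * x - ex_lo) / (1.0 - mass_lo)  # upper centre (mixture mean is eps*x)
        m = 0.5 * (t1 + t2)
    return m

def ER(eps, x):
    """Theoretical error rate ER(F_eps, Fm) of the nearest-centre rule."""
    m = split_point(eps, x)
    return 0.5 * (1.0 - norm.cdf(m + delta)) + 0.5 * norm.cdf(m - delta)

eps = 1e-3
er0, er_eps = ER(0.0, 10.0), ER(eps, 10.0)
if1 = (er_eps - er0) / eps                     # ~0 under optimality
if2 = 2.0 * (er_eps - er0) / eps ** 2          # second-order term, >= 0
print(er0, if1, if2)
```

The first difference is of order eps (hence a vanishing first-order influence function), while the rescaled second difference stays bounded away from below zero, matching the abstract's claim that contamination cannot beat the optimal error rate.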

Influence functions of the error rates of classification based on clustering
Ruwet, Christel ULg; Haesbroeck, Gentiane ULg

Poster (2009, May)


Cluster analysis may be performed when one wishes to group similar objects into a given number of clusters. Several algorithms are available in order to construct these clusters. In this poster, focus will be on two particular cases of the generalized k-means algorithm: the classical k-means procedure as well as the k-medoids algorithm, while the data of interest are assumed to come from an underlying population consisting of a mixture of two groups. Among the outputs of these clustering techniques, a classification rule is provided in order to classify the objects into one of the clusters. When classification is the main objective of the statistical analysis, performance is often measured by means of an error rate. Two types of error rates can be computed: a theoretical one and a more empirical one. The first one can be written as ER(F, Fm), where F is the distribution of the training sample used to set up the classification rule and Fm (the model distribution) is the distribution under which the quality of the rule is assessed (via a test sample). The empirical error rate corresponds to ER(F, F), meaning that the classification rule is tested on the same sample as the one used to set up the rule. In case there are some outliers in the data, the classification rule may be corrupted. Even if it is evaluated at the model distribution, the theoretical error rate may then be contaminated, while the effect of contamination on the empirical error rate is two-fold: the rule but also the test sample are contaminated. To measure the robustness of classification based on clustering, influence functions have been computed, both for the theoretical and the empirical error rates. When using the theoretical error rate, results similar to those derived by Croux et al. (2008) in discriminant analysis were observed.
More specifically, under optimality (which happens when the model distribution is FN = 0.5 N(μ1, σ) + 0.5 N(μ2, σ); Qiu and Tamhane, 2007), the contaminated error rate can never be smaller than the optimal value, resulting in a first order influence function identically equal to 0. Second order influence functions would then need to be computed, as will be done in future research. When optimality does not hold, the first order influence function of the theoretical error rate no longer vanishes and shows that contamination may improve the error rate achieved under the non-optimal model. Similar computations have been performed for the empirical error rate, as the poster will show. The first and, when required, second order influence functions of the theoretical and empirical error rates are useful in their own right to compare the robustness of the 2-means and 2-medoids classification procedures. They also have other applications. For example, they may be used to derive diagnostic tools in order to detect observations having an unduly large influence on the error rate. Also, under optimality, the second order influence function of the theoretical error rate can yield asymptotic relative classification efficiencies.

Influence function of the error rate of the generalized k-means
Ruwet, Christel ULg; Haesbroeck, Gentiane ULg

Scientific conference (2009, March 30)


Cluster analysis may be performed when one wishes to group similar objects into a given number of clusters. Several algorithms are available in order to construct these clusters. In this talk, focus will be on two particular cases of the generalized k-means algorithm: the classical k-means procedure as well as the k-medoids algorithm. Among the outputs of these clustering techniques, a classification rule is provided in order to classify the objects into one of the clusters. When classification is the main objective of the statistical analysis, performance is often measured by means of an error rate. In the clustering setting, the error rate has to be measured on the training sample, while test samples are usually used in other settings such as linear or logistic discrimination. This characteristic of classification resulting from a clustering implies that contamination in the training sample may affect not only the classification rule but also other parameters involved in the error rate. In the talk, influence functions will be used to measure the impact of contamination on the error rate and will show that contamination may decrease the error rate that one would expect under a given model. Moreover, a kind of second-order influence function will also be derived to measure the bias in error rate that the k-means and k-medoids procedures suffer from in finite samples. Simulations will confirm the results obtained via the first and second-order influence functions. Future research perspectives will conclude the talk.

Peer Reviewed
Logistic discrimination using robust estimators: an influence function approach
Croux, Christophe; Haesbroeck, Gentiane ULg; Joossens, Kristel

in Canadian Journal of Statistics = Revue Canadienne de Statistique (2008), 36(1), 157-174


Logistic regression is frequently used for classifying observations into two groups. Unfortunately there are often outlying observations in a data set and these might affect the estimated model and the associated classification error rate. In this paper, the authors study the effect of observations in the training sample on the error rate by deriving influence functions. They obtain a general expression for the influence function of the error rate, and they compute it for the maximum likelihood estimator as well as for several robust logistic discrimination procedures. Besides being of interest in their own right, the influence functions are also used to derive asymptotic classification efficiencies of different logistic discrimination rules. The authors also show how influential points can be detected by means of a diagnostic plot based on the values of the influence function.

Pratique de la statistique descriptive
Henry, Valérie ULg; Haesbroeck, Gentiane ULg

Book published by Coédition Ferrer et Céfal (2004)


This is a book of solved exercises that invites the reader to reflect on the judicious use of the appropriate tool, introduces the analysis of statistical data and the interpretation of the results obtained, and covers the most recent statistical techniques, such as exploratory data analysis and robust statistics.

Peer Reviewed
The case sensitivity function approach to diagnostic and robust computation: a relaxation strategy
Critchley, Frank; Schyns, Michael ULg; Haesbroeck, Gentiane ULg et al

in Antoch, Jaromir (Ed.) COMPSTAT 2004: Proceedings in Computational Statistics (2004)

Peer Reviewed
Implementing the Bianco and Yohai estimator for logistic regression
Croux, C.; Haesbroeck, Gentiane ULg

in Computational Statistics & Data Analysis (2003), 44(1-2), 273-295


A fast and stable algorithm to compute a highly robust estimator for the logistic regression model is proposed. A criterion for the existence of this estimator at finite samples is derived and the problem of the selection of an appropriate loss function is discussed. It is shown that the loss function can be chosen such that the robust estimator exists if and only if the maximum likelihood estimator exists. The advantages of using a weighted version of this estimator are also considered. Simulations and an example give further support for the good performance of the implemented estimators.

Peer Reviewed
The breakdown behavior of the maximum likelihood estimator in the logistic regression model
Croux, C.; Flandre, C.; Haesbroeck, Gentiane ULg

in Statistics & Probability Letters (2002), 60(4), 377-386


In this note we discuss the breakdown behavior of the maximum likelihood (ML) estimator in the logistic regression model. We formally prove that the ML-estimator never explodes to infinity, but rather breaks down to zero when adding severe outliers to a data set. An example confirms this behavior.

Peer Reviewed
Location adjustment for the minimum volume ellipsoid estimator
Croux, C.; Haesbroeck, Gentiane ULg; Rousseeuw, P. J.

in Statistics and Computing (2002), 12(3), 191-200


Estimating multivariate location and scatter with both affine equivariance and positive breakdown has always been difficult. A well-known estimator which satisfies both properties is the Minimum Volume Ellipsoid Estimator (MVE). Computing the exact MVE is often not feasible, so one usually resorts to an approximate algorithm. In the regression setup, algorithms for positive-breakdown estimators like Least Median of Squares typically recompute the intercept at each step, to improve the result. This approach is called intercept adjustment. In this paper we show that a similar technique, called location adjustment, can be applied to the MVE. For this purpose we use the Minimum Volume Ball (MVB), in order to lower the MVE objective function. An exact algorithm for calculating the MVB is presented. As an alternative to MVB location adjustment we propose L1 location adjustment, which does not necessarily lower the MVE objective function but yields more efficient estimates for the location part. Simulations compare the two types of location adjustment. We also obtain the maxbias curves of both L1 and the MVB in the multivariate setting, revealing the superiority of L1.

Peer Reviewed
Maxbias curves of robust location estimators based on subranges
Croux, C.; Haesbroeck, Gentiane ULg

in Journal of Nonparametric Statistics (2002), 14(3), 295-306


A maxbias curve is a powerful tool to describe the robustness of an estimator. It tells us how much an estimator can change due to a given fraction of contamination. In this paper, maxbias curves are computed for some univariate location estimators based on subranges: midranges, trimmed means and the univariate Minimum Volume Ellipsoid (MVE) location estimators. These estimators are intuitively appealing and easy to calculate.
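A finite-sample analogue of a point on such a maxbias curve can be obtained by replacing a fraction eps of a sample with a distant point mass and recording how far each location estimator moves. An illustrative sketch (the estimators, eps value, and contamination placement are chosen for the demo, not taken from the paper):

```python
import numpy as np
from scipy.stats import trim_mean

rng = np.random.default_rng(3)
n = 1000
x = rng.normal(size=n)                      # target location is 0

def estimates(sample):
    return {
        "mean": np.mean(sample),
        "10% trimmed mean": trim_mean(sample, 0.10),
        "midrange": 0.5 * (np.min(sample) + np.max(sample)),
    }

# replace a fraction eps of the data by a distant point mass (explosion scenario)
eps = 0.05
x_contam = x.copy()
x_contam[: int(eps * n)] = 1e3

clean = estimates(x)
contam = estimates(x_contam)
bias = {k: abs(contam[k] - clean[k]) for k in clean}
print(bias)
```

The trimmed mean barely moves because the contamination fraction is below its trimming proportion, while the mean and especially the midrange are carried off by the point mass, mirroring the ordering their maxbias curves predict.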

Peer Reviewed
A note on finite-sample efficiencies of estimators for the minimum volume ellipsoid
Croux, C.; Haesbroeck, Gentiane ULg

in Journal of Statistical Computation & Simulation (2002), 72(7), 585-596


Among the best known estimators of multivariate location and scatter is the Minimum Volume Ellipsoid (MVE). Many algorithms have been proposed to compute it. Most of these attempt merely to approximate the exact MVE as closely as possible, but some of them led to the definition of new estimators which maintain the properties of robustness and affine equivariance that make the MVE so attractive. Rousseeuw and van Zomeren (1990) used the (p+1)-subset estimator, which was modified by Croux and Haesbroeck (1997) to give rise to the averaged (p+1)-subset estimator. This note shows by means of simulations that the averaged (p+1)-subset estimator outperforms the exact estimator as far as finite-sample efficiency is concerned. We also present a new robust estimator for the MVE, closely related to the averaged (p+1)-subset estimator but yielding a natural ranking of the data.
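The basic (p+1)-subset idea is short to sketch: draw random subsets of size p+1, turn each into a candidate ellipsoid inflated to cover h points, and keep the one of minimum volume. The code below is an illustrative toy version of this scheme (without the averaging refinement the note studies; subset count and data are invented for the demo):

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 100, 2
X = rng.multivariate_normal(np.zeros(p), np.eye(p), size=n)
X[:15] += 6.0                                    # planted contamination
h = (n + p + 1) // 2                             # coverage required of the MVE

best_vol, best = np.inf, None
for _ in range(500):                             # random (p+1)-subsets
    idx = rng.choice(n, size=p + 1, replace=False)
    loc = X[idx].mean(axis=0)
    cov = np.cov(X[idx], rowvar=False)
    if np.linalg.det(cov) < 1e-12:
        continue                                 # skip degenerate subsets
    # squared Mahalanobis distances of all points to this candidate
    d2 = np.einsum('ij,jk,ik->i', X - loc, np.linalg.inv(cov), X - loc)
    scale = np.partition(d2, h - 1)[h - 1]       # inflate ellipsoid to cover h points
    vol = np.sqrt(np.linalg.det(cov) * scale ** p)  # proportional to ellipsoid volume
    if vol < best_vol:
        best_vol, best = vol, (loc, cov * scale)

loc_mve, cov_mve = best
print(loc_mve)
```

Subsets drawn from the clean majority yield small covering ellipsoids, so the minimum-volume candidate ignores the shifted points; the averaging refinement of the note then smooths over the best such subsets instead of keeping a single one.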

Peer Reviewed
Maxbias curves of robust scale estimators based on subranges
Croux, C.; Haesbroeck, Gentiane ULg

in Metrika (2001), 53(2), 101-122


A maxbias curve is a powerful tool to describe the robustness of an estimator. It is an asymptotic concept which tells how much an estimator can change due to a given fraction of contamination. In this paper, maxbias curves are computed for some univariate scale estimators based on subranges: trimmed standard deviations, interquantile ranges and the univariate Minimum Volume Ellipsoid (MVE) and Minimum Covariance Determinant (MCD) scale estimators. These estimators are intuitively appealing and easy to calculate. Since the bias behavior of scale estimators may differ depending on the type of contamination (outliers or inliers), expressions for both explosion and implosion maxbias curves are given. On the basis of robustness and efficiency arguments, the MCD scale estimator with 25% breakdown point can be recommended for practical use.
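In the univariate case the raw MCD scale is simple to compute, because the minimum-variance h-subset is contiguous in the sorted sample. An illustrative sketch (the consistency factor needed at the normal model is deliberately omitted, so the raw value underestimates the true scale):

```python
import numpy as np

def mcd_scale(x, bp=0.25):
    """Raw univariate MCD scale with breakdown point bp: the standard
    deviation of the contiguous h-subset of the sorted data having the
    smallest variance. A consistency factor (omitted here) is needed
    for consistency at the normal model."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    h = int(np.ceil((1.0 - bp) * n))
    sds = [x[i:i + h].std(ddof=1) for i in range(n - h + 1)]
    return min(sds)

rng = np.random.default_rng(6)
clean = rng.normal(size=1000)
contaminated = np.concatenate([clean, np.full(200, 1e6)])  # ~17% gross outliers
print(mcd_scale(clean), mcd_scale(contaminated))
```

The estimate barely changes under the gross outliers, since the best h-subset simply slides away from them; this bounded explosion bias (and the corresponding implosion behaviour under inliers) is what the paper's maxbias curves quantify.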

Régression logistique robuste
Croux, Christophe ULg; Haesbroeck, Gentiane ULg

in Droesbeke, J. J.; Lejeune, M.; Saporta, G. (Eds.) Modèles statistiques pour données qualitatives (2001)

Formation mathématique par la résolution de problèmes
Bair, Jacques ULg; Haesbroeck, Gentiane ULg

Book published by De Boeck Université (2000)


Part school textbook, part treatise on heuristics, and illustrated with hundreds of varied, significant, and often original examples, this book presents an original approach to solving mathematical problems.

Peer Reviewed
Principal component analysis based on robust estimators of the covariance or correlation matrix: Influence functions and efficiencies
Croux, C.; Haesbroeck, Gentiane ULg

in Biometrika (2000), 87(3), 603-618


A robust principal component analysis can be easily performed by computing the eigenvalues and eigenvectors of a robust estimator of the covariance or correlation matrix. In this paper we derive the influence functions and the corresponding asymptotic variances for these robust estimators of eigenvalues and eigenvectors. The behaviour of several of these estimators is investigated by a simulation study. It turns out that the theoretical results and simulations favour the use of S-estimators, since they combine a high efficiency with appealing robustness properties.
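The plug-in construction described here is a two-liner in practice: estimate the covariance robustly, then take its eigendecomposition. A sketch using the MCD as the robust plug-in (an assumption for illustration via scikit-learn's MinCovDet; the paper's recommendation is S-estimators, for which no standard implementation is assumed here):

```python
import numpy as np
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(5)
# correlated clean data plus a cluster of outliers off the main axis
A = rng.multivariate_normal([0.0, 0.0], [[3.0, 2.0], [2.0, 3.0]], size=300)
A[:30] = rng.multivariate_normal([6.0, -6.0], np.eye(2), size=30)

def principal_axes(cov):
    # eigendecomposition sorted by decreasing eigenvalue
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1]
    return vals[order], vecs[:, order]

vals_cl, vecs_cl = principal_axes(np.cov(A, rowvar=False))                  # classical PCA
vals_rb, vecs_rb = principal_axes(MinCovDet(random_state=0).fit(A).covariance_)  # robust PCA

print(vecs_cl[:, 0], vecs_rb[:, 0])
```

The clean population's first axis is the (1, 1) direction, which the robust plug-in recovers, while the classical first eigenvector is rotated toward the outlier cluster; the paper's influence functions describe exactly this sensitivity of eigenvectors and eigenvalues.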

Peer Reviewed
Estimateurs Robustes pour les Composantes Principales
Croux, Christophe ULg; Haesbroeck, Gentiane ULg

in Proceedings des XXXII Journées de Statistique (2000)
