AUSTRIAN JOURNAL OF STATISTICS
Volume 34 (2005), Number 2, 127-138

Identification of Multivariate Outliers: A Performance Study

Peter Filzmoser
Vienna University of Technology, Austria

Abstract: Three methods for the identification of multivariate outliers (Rousseeuw and Van Zomeren, 1990; Becker and Gather, 1999; Filzmoser et al., 2005) are compared. They are based on the Mahalanobis distance, which is made resistant against outliers and model deviations by robust estimation of location and covariance. The comparison is made by means of a simulation study. Not only the case of multivariate normally distributed data, but also heavy-tailed and asymmetric distributions are considered. The simulations focus on low dimensional (p = 5) and high dimensional (p = 30) data.

Keywords: Outlier Detection, MCD Estimator, Mahalanobis Distance, Robustness.

1 Introduction

The increasing size of data sets makes it more and more difficult to identify common structures in the data. Especially for high dimensional data it is often impossible to see data structures by visualization, even with highly sophisticated graphical tools (e.g. Swayne et al., 1998; Doleisch et al., 2003). Data mining algorithms, as an answer to these difficulties, try to fit a variety of different models to the data in order to get an idea of relations in the data, but usually another problem arises: multivariate outliers.

Many papers and studies with real data have demonstrated that data without any outliers ("clean data") are rather an exception. Outliers can, and very often do, influence the fit of statistical models, and it is not desirable that parameter estimates are biased by the outliers. This problem can be avoided either by using a robust method for model fitting or by first cleaning the data from outliers and then applying classical statistical methods for model fitting.

Removing outliers does not mean throwing away measured information. Outliers usually include important information about certain phenomena, artifacts, or substructures in the data. The knowledge about this deviating behavior is important, although it might not always be easy for the practitioner to find the reasons for the existence of outliers in the data, or to interpret them.

Multivariate outliers are not necessarily characterized by extremely high or low values along single coordinates. Rather, their univariate projection on certain directions separates them from the mass of the data (this projection approach for outlier detection was introduced by Gnanadesikan and Kettenring, 1972). Standard methods for multivariate outlier detection are based on the robust Mahalanobis distance, which is defined as

    MD_i = ((x_i − t)^T C^{−1} (x_i − t))^{1/2}        (1)

for a p-dimensional observation x_i, i = 1, ..., n, where t and C are robust estimates of location and scatter, respectively.
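To make (1) concrete, the sketch below computes robust Mahalanobis distances in Python, using the MCD estimator (discussed below) for t and C via scikit-learn's MinCovDet; the toy data, the contamination, and the 97.5% cut-off are illustrative choices, not taken from the paper.

    import numpy as np
    from scipy.stats import chi2
    from sklearn.covariance import MinCovDet

    rng = np.random.default_rng(1)
    n, p = 200, 5
    X = rng.standard_normal((n, p))
    X[:20] += 3.0                              # 10% shift outliers (illustrative)

    mcd = MinCovDet(random_state=1).fit(X)     # robust location t and scatter C
    md2 = mcd.mahalanobis(X)                   # squared robust distances MD_i^2
    cutoff = chi2.ppf(0.975, df=p)             # 97.5% quantile of chi^2_p
    print(np.where(md2 > cutoff)[0])           # indices of flagged observations

Note that MinCovDet.mahalanobis returns squared distances, so they are compared directly with a χ²_p quantile rather than with its square root.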
For normally distributed data (and if the arithmetic mean and sample covariance matrix were used), the Mahalanobis distance is approximately chi-square distributed with p degrees of freedom (χ²_p). Potential multivariate outliers x_i will typically have large values MD_i, and a comparison with the χ²_p distribution can be made.

Garrett (1989) introduced the chi-square plot, which draws the empirical distribution function of the robust Mahalanobis distances against the χ²_p distribution. A break in the tail of the distributions is an indication of outliers, and values beyond this break are iteratively deleted until a straight line appears.

Rousseeuw and Van Zomeren (1990) use a cut-off value for distinguishing outliers from non-outliers. This value is a certain quantile (e.g., the 97.5% quantile) of the χ²_p distribution. For t and C the MVE (minimum volume ellipsoid) estimator (Rousseeuw, 1985) was used. However, several years later the MVE was replaced by the MCD (minimum covariance determinant) estimator for this purpose, which has better statistical properties and for which a fast algorithm exists (Rousseeuw and Van Driessen, 1999).

Various other concepts for multivariate outlier detection exist in the literature (e.g. Barnett and Lewis, 1994; Rocke and Woodruff, 1996; Becker and Gather, 1999; Peña and Prieto, 2001), and various other robust estimators of multivariate location and scatter can be considered (e.g. Maronna, 1976; Davies, 1987; Tyler, 1991; Maronna and Yohai, 1995; Kent and Tyler, 1996).

Recently, Filzmoser et al. (2005) introduced a multivariate outlier detection method that can be seen as an automation of the method of Garrett (1989). The principle is to measure the deviation of the data distribution from multivariate normality in the tails. In Section 2 we briefly introduce this method. A comparison with other outlier identification methods is made by means of simulated data in Section 3. Throughout the paper we restrict ourselves to the p-dimensional normal distribution N_p(µ, Σ) with mean µ and positive definite covariance matrix Σ as model distribution. However, we also simulate data from other distributions in order to get an idea about the performance in different situations. Section 4 provides conclusions.

2 Methods

The method of Filzmoser et al. (2005) follows an idea of Gervini (2003) for increasing the efficiency of the robust estimation of multivariate location and scatter. Let G_n(u) denote the empirical distribution function of the squared robust Mahalanobis distances MD²_i, and let G(u) be the distribution function of χ²_p. For multivariate normally distributed samples, G_n converges to G. Therefore the tails of G_n and G can be compared to detect outliers. The tails are defined by the quantile δ = χ²_{p,1−β} for a certain small β (e.g., β = 0.025), and

    p_n(δ) = sup_{u ≥ δ} (G(u) − G_n(u))^+        (2)

is considered, where "+" indicates the positive differences. In this way, p_n(δ) measures the departure of the empirical from the theoretical distribution only in the tails, defined by the value of δ.
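As a minimal sketch (not the authors' code), the statistic (2) can be evaluated at the jump points of G_n beyond δ: between jumps G − G_n is increasing, so the supremum is attained at a left limit of G_n. Here md2 denotes squared robust distances as in the previous sketch.

    import numpy as np
    from scipy.stats import chi2

    def p_n(md2, p, beta=0.025):
        """Tail statistic (2): sup over u >= delta of (G(u) - G_n(u))^+."""
        delta = chi2.ppf(1.0 - beta, df=p)      # delta = chi^2_{p,1-beta}
        d = np.sort(md2)
        u = d[d >= delta]                       # jump points of G_n beyond delta
        if u.size == 0:
            return 0.0
        # G_n just below each jump: proportion of distances strictly smaller
        Gn_left = np.searchsorted(d, u, side="left") / d.size
        diff = chi2.cdf(u, df=p) - Gn_left      # G(u) - G_n(u^-)
        return float(max(diff.max(), 0.0))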
If p_n(δ) is larger than a critical value p_crit(δ, n, p), it can be considered as a measure of outliers in the sample. If this is not the case, the outlier measure is set to zero.

The critical value p_crit(δ, n, p) depends on the quantile δ and on the size of the data set. It is derived by simulations as follows. Since we consider the multivariate normal distribution as model distribution, samples of size n are simulated from the p-variate standard normal distribution. Then the outlier detection method is applied, and for each simulated sample p_n(δ) is computed for a fixed value of δ. The critical value is then defined as a certain quantile (1 − ε) of all values p_n(δ), for a small value of ε, e.g. ε = 0.05. Filzmoser et al. (2005) provide formulas for approximating the critical values for different n and p and for δ = χ²_{p,0.975}.

One goal of this paper is to get an idea about the performance of the multivariate outlier detection method of Filzmoser et al. (2005). We decided to make a comparison with methods that are also based on robust Mahalanobis distances. Moreover, since the basis for the robust Mahalanobis distance is multivariate location and scatter estimation, we decided to use the MCD estimator (Rousseeuw, 1985) for this purpose.

One reference method for multivariate outlier detection is the method of Rousseeuw and Van Zomeren (1990), which uses fixed quantiles χ²_{p,1−φ} as cut-off values for outliers. The other method is that of Becker and Gather (1999), which works somewhat differently: a so-called α outlier with respect to N_p(µ, Σ) is an element of the set

    out(α, µ, Σ) := {x ∈ ℝ^p : (x − µ)^T Σ^{−1} (x − µ) > χ²_{p,1−α}}        (3)

which is also called the α outlier region. The size of the outlier region is adjusted to the sample size n. This is done by including the condition that under the model, with probability 1 − α, no observation lies in the outlier region out(α_n, µ, Σ), and thus α_n = 1 − (1 − α)^{1/n}. An α_n outlier identifier is then defined as the region

    OR(x_1, ..., x_n; α_n) := {x ∈ ℝ^p : (x − t)^T C^{−1} (x − t) ≥ c(α_n, n, p)}        (4)

The critical value c(α_n, n, p) is obtained by simulations, based on the above-mentioned condition that with probability 1 − α no observation will be identified as an outlier; a sketch of this calibration is given below.
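The following is a sketch, under my reading of the stated condition and not the authors' code, of how c(α_n, n, p) could be obtained by simulation: since no observation may be flagged with probability 1 − α, c is taken as the (1 − α) quantile of the largest squared robust distance over repeated clean samples. The MCD again stands in for (t, C), and the repetition count is an illustrative choice.

    import numpy as np
    from sklearn.covariance import MinCovDet

    def bg_critical_value(n, p, alpha=0.05, nrep=1000, seed=0):
        """Simulate c(alpha_n, n, p): with probability 1 - alpha, no squared
        robust distance of an N_p(0, I) sample of size n exceeds this value."""
        rng = np.random.default_rng(seed)
        max_md2 = np.empty(nrep)
        for r in range(nrep):
            X = rng.standard_normal((n, p))
            md2 = MinCovDet(random_state=r).fit(X).mahalanobis(X)
            max_md2[r] = md2.max()
        return np.quantile(max_md2, 1.0 - alpha)

    # implied per-observation level: alpha_n = 1 - (1 - alpha)**(1 / n)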
3 Simulation Study

In the previous section we mentioned three methods for comparison. Here we will use the abbreviations FGR for the method of Filzmoser et al. (2005), RZ for that of Rousseeuw and Van Zomeren (1990), and BG for the outlier detection method of Becker and Gather (1999).

It should be noted that the concepts of the methods RZ and BG for outlier detection are similar, because both methods directly identify outliers according to their distance. FGR, on the other hand, tries to identify values that deviate from the majority of observations. In order to make the comparison useful we will generate outliers that are far away from the "clean" data. Thus, outliers will be identified in the tail of the distribution by all three methods, leading to a comparable situation.

In the following we will study the behavior of the methods in a low dimensional (p = 5 and n = 200) and in a high dimensional (p = 30 and n = 1000) situation. Moreover, several data configurations will be considered. The critical values needed for the methods FGR and BG result from simulations with 1000 replications for the corresponding n and p and for the parameters (δ, ε; α) being used.

3.1 Normal Data with Shift Normal Outliers

In this first experiment we generate n − n_out data points from the p-variate standard normal distribution N_p(0, I) and n_out samples from the "outlier distribution" N_p(η · 1, I) (shift outliers). The proportion of outliers is varied as n_out/n = 0.05, 0.10, ..., 0.45.

We compute the proportion of identified outliers among the samples generated from the outlier distribution (percentage of correctly identified outliers) and the proportion of identified outliers among those generated from the "clean data" distribution (percentage of wrongly identified outliers). The proportions are averaged over 100 replications of the simulation. The parameter choices are:

• for the method FGR: β = 0.025 (therefore δ = χ²_{p,0.975}) and ε = 0.05
• for the method RZ: φ = 0.025 (therefore cut-off χ²_{p,0.975})
• for the method BG: α = 0.05

The results are presented in Figure 2, using the legend of Figure 1. For the low dimensional data (left picture) the distance of the outliers was chosen by the value η = 3, and for the high dimensional data we took η = 1.5. Compared to other studies (e.g. Rousseeuw and Van Driessen, 1999; Peña and Prieto, 2001) this outlier distance is very low, and in fact there is a significant overlap of the data points from both distributions (more details below). It can be seen that the methods FGR and RZ behave similarly, except for small outlier fractions in the low dimensional data, where FGR does not work well. The method BG performs rather poorly in this situation for detecting the outliers. Note that all three methods break down for high outlier percentages. This, however, is due to the properties of the algorithm for computing the MCD estimator: Rousseeuw and Van Driessen (1999) used the same setup (except that the distance of the outliers was much higher, with η = 10), and for n = 1000 and p = 30 the MCD gave the correct solution for a maximum of 24% outliers in the data. For the wrongly identified outliers the method BG gives the smallest percentages, followed by FGR and RZ.

[Figure 1: Legend to Figures 2, 4 and 5: line types for FGR correct, FGR wrong, RZ correct, RZ wrong, BG correct, BG wrong.]

It should be noted that for a larger outlier distance, e.g. by taking η = 10, the three methods would yield essentially the same (good) results. A sketch of one replicate of this experiment is given below.
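For concreteness, one replicate of this experiment for the RZ rule (the simplest of the three) might look as follows; this is a sketch with illustrative defaults, and FGR and BG would replace the fixed cut-off by their respective criteria. The paper averages such percentages over 100 replications.

    import numpy as np
    from scipy.stats import chi2
    from sklearn.covariance import MinCovDet

    def one_replicate(n=200, p=5, frac_out=0.10, eta=3.0, seed=0):
        rng = np.random.default_rng(seed)
        n_out = int(round(frac_out * n))
        X = rng.standard_normal((n, p))
        X[:n_out] += eta                           # outliers from N_p(eta*1, I)
        md2 = MinCovDet(random_state=seed).fit(X).mahalanobis(X)
        flagged = md2 > chi2.ppf(0.975, df=p)      # RZ cut-off chi^2_{p,0.975}
        correct = 100.0 * flagged[:n_out].mean()   # % of true outliers found
        wrong = 100.0 * flagged[n_out:].mean()     # % of clean points flagged
        return correct, wrong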
P. Filzmoser 131 0 10 20 30 40 50 0 20 40 60 80 100 n=200, p=5 Percentage of simulated outliers Percentage of identified outliers 0 10 20 30 40 50 0 20 40 60 80 100 n=1000, p=30 Percentage of simulated outliers Percentage of identified outliers Figure 2: Multivariate standard normal distribution with normally distributed shift outliers with η = 3 (left) and η = 1.5 (right), respectively. For the legend see Figure 1. Remark 1: With respect to the value η of the shift outliers it is interesting to know the “overlap” of the non-outlier and the outlier distribution for growing dimension. We have computed the overlap for η = 1.5 by simulations as follows. 105 data points have been generated according to Np(1.5 · 1, I), and their Mahalanobis distances with respect to center 0 and covariance I have been computed. Then we counted the number of samples with distance smaller than χ 2 p,0.975. In this way we can estimate the probability mass of the outlier distribution intersecting the p-variate standard normal distribution at the quantile 0.975. The result is shown in the left picture of Figure 3 for dimensions p = 2, . . . , 30. Since this value η = 1.5 was used in the previous simulation for p = 30, it is surprising that the overlap of the outlier distribution is indeed very small. This leads to the situation that with increasing dimension the classification problem of identifying shift outliers should in principle become easier, but due to several studies (e.g. Rousseeuw and Van Driessen, 1999; Rocke and Woodruff, 1999) the computational problems in higher dimensions become larger. On the other hand we can fix the overlap and ask for the distance of the outlier distribution. The result is shown in the right picture of Figure 3 for a fixed overlap of 10%. We used a log scale on both axes, and it turns out that the relation between (log-)dimension and (log-)distance is almost linear. 3.2 T3 Distributed Data with Shift Normal Outliers We take the same simulation setup as in the first experiment, only the “clean” data distribution is changed from multivariate standard normal distribution to multivariate t distribution with 3 degrees of freedom (T3) (see e.g. Genz and Bretz, 1999). The T3 distribution has heavier tails and we thus expect more overlap with the outlier distribution. Here we choose the distance of the shift normal outliers due to η = 3 in both cases p = 5 and p = 30. The results are shown in Figure 4. Compared to Figure 2 it is clearly visible that the percentage of wrongly identified outliers is much higher in general which is a conse-