1 Introduction

One of the main topics on which Agustín's research has focused over the years is seasonality. However, this is not his only topic of interest. Agustín's contributions to the econometric time series literature are much broader and include, among others, the treatment of outliers in time series; see, for example, Maravall and Peña (1986), Peña and Maravall (1991), Gómez et al. (1999) and Kaiser and Maravall (2003). In these papers, Agustín and his coauthors consider the effects and treatment of outliers in macroeconomic data and, consequently, deal primarily with linear time series models. However, outliers are also present in financial time series, especially when these are observed over long periods of time. It is important to note that, in this framework, the interest shifts from conditional means to conditional variances and, consequently, to non-linear models. Agustín also has contributions in this area; see Fiorentini and Maravall (1996) for an analysis of the dynamic dependence of second-order moments.

When dealing with financial data, many series of returns are conditionally heteroscedastic, with volatilities responding asymmetrically to negative and positive past returns. In particular, volatility is higher in response to past negative shocks (‘bad’ news) than to positive shocks (‘good’ news) of the same magnitude. Following Black (1976), this feature is commonly referred to as the leverage effect. Incorporating the leverage effect into conditionally heteroscedastic models is important to better capture the dynamic behaviour of financial returns and improve the forecasts of future volatility; see Bollerslev et al. (2006) for an extensive list of references and Hibbert et al. (2008) for a behavioral explanation of the negative asymmetric return–volatility relation. The identification of conditional heteroscedasticity is often based on the sample autocorrelations of squared returns. Carnero et al. (2007) show that the presence of outliers biases these autocorrelations, with misleading effects on the identification of time-varying volatilities. On the other hand, the identification of the leverage effect is often based on the sample cross-correlations between past and squared returns. Negative values of these cross-correlations indicate potential asymmetries in the volatility; see, for example, Bollerslev et al. (2006), Zivot (2009), Rodríguez and Ruiz (2012) and Tauchen et al. (2012). In this paper, we analyse how the identification of asymmetries, when based on the sample cross-correlations, can also be affected by the presence of outliers.

This paper has two main contributions. First, we derive the asymptotic biases caused by large outliers on the sample cross-correlation of order h between past and squared observations generated by uncorrelated stationary processes. We show that k large consecutive outliers bias such correlations towards zero for \(h\ge k\), rendering the detection of genuine leverage effect difficult. In particular, one isolated large outlier biases all the sample cross-correlations towards zero and so could hide true leverage effect. Moreover, the presence of two big consecutive outliers biases the first-order sample cross-correlation towards 0.5 (\(-0.5\)) if the first outlier is positive (negative) and so could lead to identifying either spurious asymmetries or asymmetries of the wrong sign.

The second contribution of this paper is to address the problem of robust estimation of serial cross-correlations by extending several popular robust estimators of pairwise correlations and autocorrelations. In the context of bivariate Gaussian variables, there are several proposals to robustify the pairwise sample correlation; see Shevlyakov and Smirnov (2011) for a review of the most popular ones. However, the literature on robust estimation of correlations for time series is scarce and mainly focused on autocovariances and autocorrelations. For example, Hallin and Puri (1994) propose estimating the autocovariances using rank-based methods. Ma and Genton (2000) introduce a robust estimator of the autocovariances based on the robust scale estimator of Rousseeuw and Croux (1992, 1993). More recently, Lévy-Leduc et al. (2011) establish its asymptotic and finite sample properties for Gaussian processes. Ma and Genton (2000) also suggest a possible robust estimator of the autocorrelation function, but they neither discuss its properties further nor apply it in their empirical application. Finally, Teräsvirta and Zhao (2011) propose two robust estimators of the autocorrelations of squares based on Huber's and Ramsay's weighting schemes. The theoretical and empirical evidence from all these papers strongly suggests using robust estimators to measure the dependence structure of time series.

We analyse and compare the finite sample properties of the proposed robust estimators of the cross-correlations between past and squared observations of stationary uncorrelated series. As expected, these estimators are resistant to outliers, remaining essentially unchanged regardless of the size and the number of outliers. Moreover, even in the presence of consecutive large outliers, the robust estimators considered recover the true sign of the cross-correlations, although they underestimate their magnitudes. Among the robust cross-correlations considered, the modified version of the Ramsay-weighted serial autocorrelation suggested by Teräsvirta and Zhao (2011) provides the best resistance against outliers and the lowest bias.

To illustrate the results, we compute the sample cross-correlations and their robust counterparts for a real series of daily financial returns. We show how consecutive extreme observations bias the usual sample cross-correlations and could lead to wrongly identifying potential leverage effect. These empirical results underscore the importance of using robust measures of serial correlation to identify both conditional heteroscedasticity and leverage effect.

The rest of the paper is organized as follows. Section 2 is devoted to the analysis of the effects of additive outliers on the sample cross-correlations between past and squared observations of stationary uncorrelated time series that could be either homoscedastic or heteroscedastic. Section 3 considers four robust measures of cross-correlation and compares their finite sample properties in the presence of outliers. The difficulty of extending the Ma and Genton (2000) proposal to the estimation of serial cross-correlations is discussed in Sect. 4. The empirical analysis of the daily Dow Jones Industrial Average index returns is carried out in Sect. 5. Section 6 concludes the paper with a summary of the main results and proposals for further research.

2 Effects of outliers on the identification of asymmetries

In this section, we derive analytically the effect of large additive outliers on the sample cross-correlations between past and squared observations generated by uncorrelated stationary processes that could be either homoscedastic or heteroscedastic. The main results are illustrated with some Monte Carlo experiments.

2.1 Asymptotic effects

Let \(y_{t}\), \(t=1,...,T\), be a stationary series with finite fourth-order moment that is contaminated from time \(\tau \) onwards by k consecutive outliers with the same sign and size, \(\omega \). The observed series is then given by

$$\begin{aligned} z_{t}= {\left\{ \begin{array}{ll} y_{t}+\omega &{} \text {if}\quad t=\tau ,\tau +1,\ldots ,\tau +k-1 \\ y_{t} &{} \text {otherwise.} \end{array}\right. } \end{aligned}$$
(1)

Denote by \(r_{12}(h)\) the sample cross-correlation of order h, \(h\ge 1,\) between past and squared observations of \(z_{t}\), which is given by

$$\begin{aligned} r_{12}(h)=\frac{\sum \nolimits _{t=h+1}^{T}\left( z_{t-h}-\overline{Z}\right) (z_{t}^{2}-\overline{Z^{2}})}{\sqrt{\sum \nolimits _{t=1}^{T}\left( z_{t}- \overline{Z}\right) ^{2}\sum \nolimits _{t=1}^{T}(z_{t}^{2}-\overline{Z^{2}} )^{2}}} \end{aligned}$$
(2)

where \(\overline{Z}=\dfrac{1}{T}\sum \nolimits _{t=1}^{T}z_{t}\) and \(\overline{Z^{2}}=\dfrac{1}{T}\sum \nolimits _{t=1}^{T}z_{t}^{2}\). The most pernicious impact of outliers on \(r_{12}(h)\) occurs when they are huge and do not fall at the very extremes of the sample but in positions such that they affect both factors of the cross-products in (2). In order to derive the impact of these outliers, we compute the limiting behaviour of \(r_{12}(h)\) when \(h+1\le \tau \le T-h-k+1\) and \(\left| \omega \right| \rightarrow \infty \).
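For concreteness, the following minimal sketch computes \(r_{12}(h)\) exactly as defined in (2); the function and variable names are our own and not part of any package.

```python
# Sketch of the sample cross-correlation r_12(h) in Eq. (2) between past
# levels z_{t-h} and current squares z_t^2; names are ours, not the paper's.
import numpy as np

def r12(z, h):
    z = np.asarray(z, dtype=float)
    z2 = z ** 2
    zbar, z2bar = z.mean(), z2.mean()          # sample means of z_t and z_t^2
    num = np.sum((z[:-h] - zbar) * (z2[h:] - z2bar))
    den = np.sqrt(np.sum((z - zbar) ** 2) * np.sum((z2 - z2bar) ** 2))
    return num / den
```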

The denominator of \(r_{12}(h)\) in (2) can be written in terms of the original uncontaminated series \(y_{t}\), as follows

$$\begin{aligned}&\sqrt{\sum \limits _{t\in \mathbb {T}_{0,0}}y_{t}^{2}+\sum \limits _{i=0}^{k-1}(y_{\tau +i}+\omega )^{2}-\dfrac{1}{T}\left[ \sum \limits _{t\in \mathbb {T}_{0,0}}y_{t}+\sum \limits _{i=0}^{k-1}(y_{\tau +i}+\omega )\right] ^{2}}\nonumber \\&\quad \times \sqrt{\sum \limits _{t\in \mathbb {T}_{0,0}}y_{t}^{4}+\sum \limits _{i=0}^{k-1}(y_{\tau +i}+\omega )^{4}-\dfrac{1}{T}\left[ \sum \limits _{t\in \mathbb {T}_{0,0}}y_{t}^{2}+\sum \limits _{i=0}^{k-1}(y_{\tau +i}+\omega )^{2}\right] ^{2}} \end{aligned}$$
(3)

where \(\mathbb {T}_{h,s}=\left\{ t\in \{h+1,...,T\} \text { such that }t\ne \tau +s,\tau +s+1,\ldots ,\tau +s+k-1\right\} \). Since we are concerned with the limit as \(\left| \omega \right| \rightarrow \infty \), we focus our attention on the terms with the maximum power of \(\omega \). Then, it turns out that (3) is equal to \(\left( k-\frac{k^{2}}{T}\right) \left| \omega \right| ^{3}+o(\omega ^{3})\).

In order to make the calculations simpler, we consider the following alternative expression of the numerator in (2), which is asymptotically equivalent if the sample size, T,  is large relative to the cross-correlation order, h,

$$\begin{aligned} \sum \limits _{t=h+1}^{T}z_{t}^{2}z_{t-h}-\dfrac{1}{T}\left( \sum \limits _{t=h+1}^{T}z_{t}^{2}\right) \left( \sum \limits _{t=h+1}^{T}z_{t-h}\right) . \end{aligned}$$
(4)

When h is smaller than the number of consecutive outliers, i.e. \(h<k\), expression (4) can be written in terms of the original uncontaminated series \(y_{t}\), as follows

$$\begin{aligned}&\sum _{t\in \mathbb {T}_{h,0}\cap \mathbb {T}_{h,h}}y_{t}^{2}y_{t-h}+\sum \limits _{i=0}^{h-1}(y_{\tau +i}+\omega )^{2}y_{\tau +i-h}+\sum \limits _{i=h}^{k-1}(y_{\tau +i}+\omega )^{2}(y_{\tau +i-h}+\omega )\nonumber \\&\quad +\sum \limits _{i=k}^{k+h-1}y_{\tau +i}^{2}(y_{\tau +i-h}+\omega ) \nonumber \\&\quad -\dfrac{1}{T}\left[ \sum \limits _{t\in \mathbb {T}_{h,0}}y_{t}^{2} +\sum \limits _{i=0}^{k-1}(y_{\tau +i}+\omega )^{2}\right] \left[ \sum \limits _{t\in \mathbb {T}_{h,h}}y_{t-h}+\sum \limits _{i=0}^{k-1}(y_{\tau +i}+\omega )\right] . \end{aligned}$$
(5)

In expression (5), the terms with the maximum power of \(\omega \) are the third and the fifth ones, which contain \(k-h\) and \(k^{2}\) terms in \(\omega ^{3}\), respectively. Therefore, expression (5) is equal to \(\left( k-h-\frac{ k^{2}}{T}\right) \omega ^{3}+o(\omega ^{3})\).

On the other hand, when the order of the cross-correlation is larger than the number of outliers, i.e. \(h\ge k\), expression (4) can be written as follows

$$\begin{aligned}&\sum _{t\in \mathbb {T}_{h,0}\cap \mathbb {T}_{h,h}}y_{t}^{2}y_{t-h}+\sum \limits _{i=0}^{k-1}(y_{\tau +i}+\omega )^{2}y_{\tau +i-h}+\sum \limits _{i=h}^{k+h-1}y_{\tau +i}^{2}(y_{\tau +i-h}+\omega ) \nonumber \\&\quad -\dfrac{1}{T}\left[ \sum \limits _{t\in \mathbb {T}_{h,0}}y_{t}^{2}+\sum \limits _{i=0}^{k-1}(y_{\tau +i}+\omega )^{2}\right] \left[ \sum \limits _{t\in \mathbb {T}_{h,h}}y_{t-h}+\sum \limits _{i=0}^{k-1}(y_{\tau +i}+\omega )\right] . \end{aligned}$$
(6)

In this case, the term with the maximum power of \(\omega \) is the fourth one, which contains \(k^{2}\) terms in \(\omega ^{3}\). Therefore, expression (6) is equal to \(-\frac{k^{2}}{T}\omega ^{3}+o(\omega ^{3})\).

Consequently, since the product in expression (3) is always positive, the sign of the limit of the cross-correlations in (2) is given by the sign of its numerator, which in turn depends on the sign of \(\omega ,\) and we get the following result:

$$\begin{aligned} \lim _{\left| \omega \right| \rightarrow \infty }r_{12}(h)=\left\{ \begin{array}{ll} sign(\omega )\times \left( 1-\frac{h}{k(1-\frac{k}{T})}\right) &{} \text { if } h<k \\ sign(\omega )\times \frac{k}{k-T} &{} \text { if }h\ge k. \end{array} \right. \end{aligned}$$
(7)

Equation (7) shows that the effect of outliers on the sample cross-correlations depends on: (1) whether the outliers are consecutive or isolated and (2) their sign. In particular, one single large outlier (\(k=1\)) biases \(r_{12}(h)\) towards zero for all lags regardless of its sign. Thus, if a heteroscedastic time series with leverage effect is contaminated by a large single outlier, the detection of genuine leverage effect will be difficult, as is the detection of genuine heteroscedasticity; see Carnero et al. (2007). On the other hand, a patch of k large consecutive outliers always biases \(r_{12}(h)\) towards zero for lags \(h\ge k\) and, for smaller lags, it generates positive or negative cross-correlations depending on whether the outliers are positive or negative. For example, if T is large, two huge positive (negative) consecutive outliers generate a first-order cross-correlation tending to 0.5 (\(-0.5\)), with all the others close to zero; see Maronna et al. (2006) and Carnero et al. (2007) for a similar result in the context of sample autocorrelations of levels and squares, respectively. Therefore, if a heteroscedastic time series without leverage effect or an uncorrelated homoscedastic series is contaminated by several large negative consecutive outliers, the negative cross-correlations generated by the outliers can be confused with asymmetric conditional heteroscedasticity.Footnote 1 In practice, we will not face outliers so huge as to reach the limiting values of \(r_{12}(h)\) in (7), but the result is still useful because it indicates the direction of the bias of the cross-correlations.
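The limits in (7) are easy to check numerically. The following small simulation (our own illustration, not the authors' code) contaminates a white noise series with \(k=2\) huge positive consecutive outliers; by (7), \(r_{12}(1)\) should approach \(1-1/(k(1-k/T))\approx 0.499\) and \(r_{12}(2)\) should approach \(k/(k-T)\approx -0.002\).

```python
# Numerical check of the limits in Eq. (7) for k = 2 consecutive positive
# outliers in Gaussian white noise; r12() is the sketch given after Eq. (2).
import numpy as np

rng = np.random.default_rng(0)
T, k, tau, omega = 1000, 2, 500, 1e4
z = rng.standard_normal(T)
z[tau:tau + k] += omega                 # patch of k consecutive outliers

print(r12(z, 1))   # ~ 1 - 1/(k(1 - k/T)) = 0.499  (h < k)
print(r12(z, 2))   # ~ k/(k - T) = -0.002          (h >= k)
```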

So far, we have assumed that the consecutive outliers have the same magnitude and sign. However, it could also be interesting to analyse the effects of outliers of different signs on the sample cross-correlations. For instance, one isolated positive (negative) outlier in the price of an asset at time \(\tau \), implies a doublet outlier in the corresponding return series, i.e. a positive (negative) outlier at time \(\tau \) followed by a negative (positive) outlier at time \(\tau +1.\) In this case, we will have \( k=2\) consecutive outliers of opposite signs, that will be assumed, for the moment, to have equal magnitude, i.e. \(\omega _{\tau }=|\omega |sign(\omega _{\tau })\) and \(\omega _{\tau +1}=|\omega |sign(\omega _{\tau +1})\). Then, if \(h=1\) and the outlier size, \(|\omega |\), goes to infinity, the largest contribution to the limit of the numerator of \(r_{12}(1)\) given in (5) is due to the following term

$$\begin{aligned} \left( y_{\tau +1}+\omega _{\tau +1}\right) ^{2}\left( y_{\tau }+\omega _{\tau }\right) \end{aligned}$$

and this is equal to \(|\omega |^{3}sign(\omega _{\tau })\). Therefore, the sign of the limit of the cross-correlation is the sign of the first outlier: if this is positive and the second is negative, the limit of \(r_{12}(1)\) as \(|\omega |\rightarrow \infty \) will be positive and equal to 0.5, while if the first outlier is negative and the second is positive, the limit of \(r_{12}(1)\) as \(|\omega |\rightarrow \infty \) will be negative and equal to \(-0.5\). For \(h\ge 2,\) all the cross-correlations \(r_{12}(h)\) will go to zero. A similar analysis can be carried out if the series is contaminated by \(k=3\) consecutive outliers of the same size but different signs, in order to determine whether the limit of the cross-correlations is positive or negative.

Note also that the results above are still valid if the outliers have different sizes. In this case, we can write \(\omega _{t}=\omega +\delta _{t}\) in (1) instead of \(\omega \) and the results will be the same when \(|\omega |\rightarrow \infty .\)

2.2 Finite sample effects

To further illustrate the results in the previous subsection, we generate 1000 artificial series of size \(T=1000\) from a homoscedastic Gaussian white noise process with unit variance and from the EGARCH model proposed by Nelson (1991). The EGARCH model generates asymmetric conditionally heteroscedastic time series and, according to Rodríguez and Ruiz (2012), it is more flexible than other asymmetric GARCH-type models in simultaneously representing the dynamics of financial returns and satisfying the conditions for positive volatility, covariance stationarity and finite kurtosis. The particular EGARCH model chosen to generate the data is given by

$$\begin{aligned} \begin{aligned} y_{t}&=\sigma _{t}\varepsilon _{t}\\ \log (\sigma _{t}^{2})&=-0.006+0.98\log (\sigma _{t-1}^{2})+0.2(|\varepsilon _{t-1}|-E(|\varepsilon _{t-1}|))-0.1\varepsilon _{t-1} \end{aligned} \end{aligned}$$
(8)

where \(\varepsilon _{t}\) is a Gaussian white noise process with unit variance and, consequently, \(E(|\varepsilon _{t-1}|)=\sqrt{2/\pi }\); see Nelson (1991) for the properties of EGARCH models. The parameters in (8) have been chosen to imply a marginal variance of \(y_{t}\) equal to one and to resemble the values usually encountered in real empirical applications; see, for instance, Hentschel (1995) and Bollerslev and Mikkelsen (1999).Footnote 2
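A minimal sketch of simulating model (8) is given below; the parameter values are those in (8), while the code itself (the function name, the choice to start the recursion at the unconditional mean of \(\log \sigma _{t}^{2}\)) is our own illustration.

```python
# Sketch of simulating the EGARCH model in Eq. (8).
import numpy as np

def simulate_egarch(T, seed=0):
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(T)            # Gaussian white noise, unit variance
    e_abs = np.sqrt(2 / np.pi)              # E|eps_t| under Gaussianity
    log_s2 = np.empty(T)
    log_s2[0] = -0.006 / (1 - 0.98)         # unconditional mean of log(sigma_t^2)
    for t in range(1, T):
        log_s2[t] = (-0.006 + 0.98 * log_s2[t - 1]
                     + 0.2 * (abs(eps[t - 1]) - e_abs) - 0.1 * eps[t - 1])
    return np.exp(0.5 * log_s2) * eps       # y_t = sigma_t * eps_t

y = simulate_egarch(1000)
```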

Each simulated series is contaminated, first, with a single negative outlier of size \(\omega =-50\) at time \(t=500\) and, second, with two consecutive outliers of the same size but opposite signs, the first negative (\(\omega =-50\)) at time \(t=500\) and the second positive (\(\omega =50\)) at time \(t=501\). For each replicate, we compute the sample cross-correlations up to order 50 and then, for each lag h, we compute their average over all replicates. The first row of Fig. 1 plots the average sample cross-correlations for the uncontaminated white noise process (left panel) and for the uncontaminated EGARCH process (right panel). The average sample cross-correlations computed from the corresponding contaminated series with one and two outliers are plotted in the second and third rows, respectively. In all cases, the red solid line represents the true cross-correlations.

Fig. 1 Monte Carlo means of sample cross-correlations in simulated white noise (first column) and EGARCH (second column) series without outliers (first row), with a single negative outlier (second row) and with two consecutive outliers, negative and positive (third row), of size \(|\omega |=50\). The red solid line represents the true cross-correlations

As we can see, when a series generated by the EGARCH model is contaminated with one single large negative outlier, we may wrongly conclude that there is no leverage effect, since all the cross-correlations become nearly zero. On the other hand, when the series is contaminated with two consecutive outliers of different signs, the first one being negative, only the first cross-correlation will be different from zero and approximately equal to \(-0.5\), regardless of whether the series is homoscedastic or heteroscedastic. Therefore, in this case, we can identify either a negative leverage effect when there is none (the series is truly a Gaussian white noise) or a much more negative leverage effect than the actual one (as in the case of the EGARCH model). Similar results would be obtained if the two outliers were positive, but in this case the first cross-correlation would be biased towards 0.5. Consequently, we could wrongly identify asymmetries in a series that is actually white noise, or we could identify a positive leverage effect when it is truly negative, as in the EGARCH process.

We now analyse how fast the limit in (7) is reached as the size of the outliers increases. In order to do that, we contaminate the same 1000 artificial series simulated before, first with one isolated outlier of size \(-\omega \) at time \(t=500\) and second with two consecutive outliers of sizes \(\{-\omega ,\omega \}\) located at times \(t=\{500,501\}\), where \(\omega \) takes the values \(\omega =\{1,2,...,50\}\). We then compute the average of the first and second order sample cross-correlations from these contaminated series over the 1000 replicates. Figure 2 plots the average of \(r_{12}(1)\) (first row) and \(r_{12}(2)\) (second row) against the size of the outlier, \(\omega \), for the two simulated processes and the two types of contamination considered. The values of the theoretical cross-correlations for the uncontaminated processes are also displayed with a red solid line.

Fig. 2 Monte Carlo means of sample cross-correlations of order 1 (first row) and order 2 (second row) for white noise (first column) and EGARCH (second column) series contaminated with a single negative outlier and with two consecutive outliers, negative and positive, as a function of the outlier size. The red solid line represents the true cross-correlation

As we can see, the sample cross-correlations start being distorted when the outliers are larger (in absolute value) than 5 standard deviations. Furthermore, when the size of the outliers is over 20, the corresponding sample cross-correlations are already quite close to their limiting values (\(-0.5\) for the first-order cross-correlation and 0 for the second-order cross-correlation). Moreover, the size of two consecutive outliers does not need to be very large to distort the first-order sample cross-correlation, whereas a single outlier needs to be of larger magnitude to bias this correlation towards zero. In homoscedastic series, two consecutive outliers have a tremendous effect on the first-order sample cross-correlation, even if they are not very big, and could lead to wrongly identifying asymmetries in a series that is actually white noise. On the other hand, the first cross-correlations of a heteroscedastic series contaminated with one single outlier as big as 15 or 20 could be confused with those of a white noise. Similar results would be obtained if the series were contaminated with positive outliers, but they are not reported here to save space.

3 Robust cross-correlations

In the previous section we have shown that the sample cross-correlations between past and squared observations of a stationary uncorrelated series are very sensitive to the presence of outliers and could lead to a wrong identification of asymmetries. In this section we consider robust cross-correlations to overcome this problem. In particular, we generalize some of the robust estimators for the pairwise correlations described in Shevlyakov and Smirnov (2011) and one of the robust autocorrelations proposed by Teräsvirta and Zhao (2011). We discuss their finite sample properties and compare them to the properties of the sample cross-correlations.

3.1 Extensions of robust correlations

A direct way of robustifying the pairwise sample correlation coefficient between two random variables is to replace the averages by their nonlinear robust counterparts, the medians; see Falk (1998). Doing so in the sample cross-correlation \(r_{12}(h)\) in (2) yields the following expression, which is called the sample cross-correlation median estimator:

$$\begin{aligned} r_{12,COMED}(h)=\frac{med_{t\in \{h+1,...,T\}}\{(z_{t-h}-med(z))(z_{t}^{2}-med(z^{2}))\}}{MAD(z)MAD(z^{2})} \end{aligned}$$
(9)

where med(x) stands for the sample median of x and MAD denotes the sample median absolute deviation, i.e. \(MAD(x)=med(|x-med(x)|)\). Unless otherwise stated, the median is calculated over the whole sample. When the median is calculated over a subsample, this is specifically stated, as in (9), where \(med_{t\in \{h+1,...,T\}}\) denotes the sample median calculated over the subsample indexed by \(t\in \{h+1,...,T\}\).
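A minimal sketch of (9), under the stated convention that \(med(z)\), \(med(z^{2})\) and the MADs are computed over the whole sample, is the following (names are our own):

```python
# Sketch of the cross-correlation median estimator in Eq. (9).
import numpy as np

def mad(x):
    return np.median(np.abs(x - np.median(x)))   # median absolute deviation

def r12_comed(z, h):
    z = np.asarray(z, dtype=float)
    z2 = z ** 2
    prod = (z[:-h] - np.median(z)) * (z2[h:] - np.median(z2))
    return np.median(prod) / (mad(z) * mad(z2))  # median over t in {h+1,...,T}
```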

Another popular robust estimator of the pairwise correlation is the Blomqvist quadrant correlation coefficient. The extension of this coefficient to cross-correlations yields the following expression, that will be called the Blomqvist cross-correlation coefficient:

$$\begin{aligned} r_{12,B}(h)=\frac{1}{T}\sum \limits _{t=h+1}^{T}sign(z_{t-h}-med(z))sign(z_{t}^{2}-med(z^{2})). \end{aligned}$$
(10)
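A corresponding sketch of (10), again with names of our own:

```python
# Sketch of the Blomqvist cross-correlation coefficient in Eq. (10).
import numpy as np

def r12_blomqvist(z, h):
    z = np.asarray(z, dtype=float)
    z2 = z ** 2
    s = np.sign(z[:-h] - np.median(z)) * np.sign(z2[h:] - np.median(z2))
    return s.sum() / len(z)                      # Eq. (10) divides by T
```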

Estimation of the correlation between two random variables X and Y, denoted by \(\rho \), can also be based on a scale approach, by means of the following identity:

$$\begin{aligned} \rho =\frac{Var(U)-Var(V)}{Var(U)+Var(V)} \end{aligned}$$
(11)

where

$$\begin{aligned} U=\left( \frac{X}{\sigma _{X}}+\frac{Y}{\sigma _{Y}}\right) \quad \text { and }\quad V=\left( \frac{X}{\sigma _{X}}-\frac{Y}{\sigma _{Y}}\right) \end{aligned}$$
(12)

are called the principal variables and \(\sigma _{X}\) and \(\sigma _{Y}\) are the standard deviations of X and Y, respectively. In order to get robust estimators for \(\rho \), Gnanadesikan and Kettenring (1972) propose replacing the variances and standard deviations in (11) and (12), respectively, by robust estimators as follows

$$\begin{aligned} \widehat{\rho }=\frac{\widehat{S}^{2}(U)-\widehat{S}^{2}(V)}{\widehat{S} ^{2}(U)+\widehat{S}^{2}(V)} \end{aligned}$$
(13)

where \(\widehat{S}\) is a robust scale estimator. Depending on the robust estimator \(\widehat{S}\) used in (13), different robust estimators of the correlation arise. For instance, Shevlyakov (1997) takes \(\widehat{S}\) to be Hampel's median of absolute deviations and obtains the median correlation coefficient. This estimator, extended to compute cross-correlations and called the median cross-correlation coefficient, is:

$$\begin{aligned} r_{12,MED}(h)=\frac{\left( med_{t\in \{h+1,...,T\}}(|u_{t}|)\right) ^{2}-\left( med_{t\in \{h+1,...,T\}}(|v_{t}|)\right) ^{2}}{\left( med_{t\in \{h+1,...,T\}}(|u_{t}|)\right) ^{2}+\left( med_{t\in \{h+1,...,T\}}(|v_{t}|)\right) ^{2}} \end{aligned}$$
(14)

where

$$\begin{aligned} u_{t}=\frac{z_{t-h}-med(z)}{MAD(z)}+\frac{z_{t}^{2}-med(z^{2})}{MAD(z^{2})}\quad \text { and }\quad v_{t}=\frac{z_{t-h}-med(z)}{MAD(z)}-\frac{ z_{t}^{2}-med(z^{2})}{MAD(z^{2})}. \end{aligned}$$
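A minimal sketch of (14), combining the scale identity (13) with the MAD; names are our own:

```python
# Sketch of the median cross-correlation coefficient in Eqs. (13)-(14).
import numpy as np

def mad(x):
    return np.median(np.abs(x - np.median(x)))

def r12_med(z, h):
    z = np.asarray(z, dtype=float)
    z2 = z ** 2
    a = (z[:-h] - np.median(z)) / mad(z)     # standardized lagged levels
    b = (z2[h:] - np.median(z2)) / mad(z2)   # standardized squares
    mu = np.median(np.abs(a + b))            # med |u_t|
    mv = np.median(np.abs(a - b))            # med |v_t|
    return (mu**2 - mv**2) / (mu**2 + mv**2)
```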

Finally, in the context of time series, Teräsvirta and Zhao (2011) propose robust estimators of the autocorrelations based on applying Huber's and Ramsay's weights to the sample variances and autocovariances. We extend this idea to the cross-correlations, where the two series involved are the lagged levels, \(z_{t-h}\), and their squares, \(z_{t}^{2}\). In particular, we focus on the weighted correlation with Ramsay's weights, slightly modified to cope with squares. The resulting weighted estimator of the cross-correlation of order h is given by

$$\begin{aligned} r_{12,W}(h)=\frac{\widetilde{\gamma }_{12}(h)}{\sqrt{\widetilde{\gamma } _{1}(0)\widetilde{\gamma }_{2}(0)}} \end{aligned}$$
(15)

where

$$\begin{aligned} \widetilde{\gamma }_{12}(h)= & {} \frac{\sum \nolimits _{t=h+1}^{T}w_{t-h}\left( z_{t-h}-\overline{Z}_{w}\right) w_{t}^{2}(z_{t}^{2}-\overline{Z_{w}^{2}})}{ \sum \nolimits _{t=h+1}^{T}w_{t-h}w_{t}^{2}},\text { } \\ \widetilde{\gamma }_{1}(0)= & {} \frac{\sum \nolimits _{t=1}^{T}w_{t}\left( z_{t}- \overline{Z}_{w}\right) ^{2}}{\sum \nolimits _{t=1}^{T}w_{t}}\quad \text { and }\quad \widetilde{\gamma }_{2}(0)=\frac{\sum \nolimits _{t=1}^{T}w_{t}^{2}\left( z_{t}^{2}-\overline{Z_{w}^{2}}\right) ^{2}}{\sum \nolimits _{t=1}^{T}w_{t}^{2}} \end{aligned}$$

with

$$\begin{aligned} \overline{Z}_{w}= & {} \frac{\sum \nolimits _{t=1}^{T}w_{t}z_{t}}{\sum \nolimits _{t=1}^{T}w_{t}},\quad \text { }\overline{Z_{w}^{2}}=\frac{\sum \nolimits _{t=1}^{T}w_{t}^{2}z_{t}^{2}}{\sum \nolimits _{t=1}^{T}w_{t}^{2}}, \\ \quad w_{t}= & {} \exp \left( -a\frac{|z_{t}-\overline{Z}|}{\widehat{\sigma }_{z}} \right) \quad \text { and }\quad \widehat{\sigma }_{z}=\sqrt{\frac{1}{T-1}\sum \nolimits _{t=1}^{T}\left( z_{t}-\overline{Z}\right) ^{2}}\text {.} \end{aligned}$$

Following Teräsvirta and Zhao (2011), we use \(a=0.3\). By applying the weights \(w_{t}\) to the series in levels, every observation is downweighted except those equal to the sample mean. Note that when the weighting scheme is applied to squared observations, the weights are squared, so that larger squared observations are downweighted more heavily than the corresponding observations in levels.
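A sketch of (15) with the modified Ramsay weights follows; \(a=0.3\) as above, and all names are our own.

```python
# Sketch of the weighted cross-correlation in Eq. (15) with modified
# Ramsay weights (squared weights for the squared series).
import numpy as np

def r12_weighted(z, h, a=0.3):
    z = np.asarray(z, dtype=float)
    z2 = z ** 2
    w = np.exp(-a * np.abs(z - z.mean()) / z.std(ddof=1))   # Ramsay weights
    w2 = w ** 2
    zw = np.sum(w * z) / np.sum(w)                          # weighted mean, levels
    z2w = np.sum(w2 * z2) / np.sum(w2)                      # weighted mean, squares
    g12 = (np.sum(w[:-h] * (z[:-h] - zw) * w2[h:] * (z2[h:] - z2w))
           / np.sum(w[:-h] * w2[h:]))
    g1 = np.sum(w * (z - zw) ** 2) / np.sum(w)
    g2 = np.sum(w2 * (z2 - z2w) ** 2) / np.sum(w2)
    return g12 / np.sqrt(g1 * g2)
```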

3.2 Monte Carlo experiments

In order to analyse the finite sample properties of the four robust cross-correlations introduced above, we consider the same Monte Carlo simulations described in Sect. 2.2. For each replicate, the robust cross-correlations are computed up to lag 50. The first row of Fig. 3 plots the corresponding Monte Carlo averages for the uncontaminated white noise process (left panel) and for the uncontaminated EGARCH process (right panel). The second and third rows of Fig. 3 depict the averages of the robust cross-correlations computed for the same series contaminated with one and two outliers, respectively. In all cases, the true cross-correlations are also displayed.

Fig. 3 Monte Carlo means of robust cross-correlations in simulated white noise (first column) and EGARCH (second column) series without outliers (first row), with a single negative outlier (second row) and with two consecutive outliers, negative and positive (third row), of size \(|\omega |=50\). The red solid line represents the true cross-correlations

Several conclusions emerge from Fig. 3. First, as expected, robust measures of cross-correlations are resistant to the presence of outliers, either isolated or in patches; note that the plots displayed in the first row are nearly the same as those displayed in the other two rows. Second, in EGARCH processes, the robust cross-correlations estimate the sign of the true cross-correlations properly but underestimate their magnitude. In fact, the first three robust cross-correlations (\(r_{12,COMED}\), \(r_{12,B}\) and \(r_{12,MED}\)) estimate asymmetries which are much weaker than the true ones. However, the weighted cross-correlation, \(r_{12,W}\), performs quite well: its bias is much lower than those of the other robust measures, even in the presence of two big consecutive outliers. Actually, the values of \(r_{12,W}\) are very close to their theoretical counterparts. This could be due to the fact that the first three robust measures considered are direct extensions of robust estimators originally designed to estimate the pairwise correlation coefficient of a bivariate Gaussian distribution. In that framework, some of these measures, like the Blomqvist quadrant correlation and the median correlation coefficient, are asymptotically minimax with respect to bias or variance. However, for time series data, and, in particular, for conditionally heteroscedastic time series, none of these assumptions holds, and hence the behaviour of these measures is not as good as postulated in the bivariate Gaussian case. In contrast, the Ramsay-weighted autocorrelation estimator proposed by Teräsvirta and Zhao (2011) was designed from the start to cope with time series data, and this could be the reason for the good performance of \(r_{12,W}\) in estimating cross-correlations.

We also perform an analysis similar to that in Sect. 2.2, studying the effect of the size of the outliers on the four robust cross-correlations for the two types of contamination, namely contamination with one isolated outlier of size \(-\omega \) and with two consecutive outliers of sizes \(\{-\omega ,\omega \}\), where \(\omega =\{1,2,...,50\}\). The results, which are not displayed here to save space but are available upon request, are as expected. Robust cross-correlations remain the same regardless of the size and the number of outliers. Moreover, they all underestimate the magnitude of the leverage effect, but the bias in the weighted cross-correlation, \(r_{12,W}\), is negligible compared to that of the alternative robust cross-correlations considered.

So far, we have analysed the Monte Carlo mean cross-correlograms for different lags and sizes of the outliers. In order to complete these results, we next study the whole finite sample distribution of the cross-correlations considered, focusing on the first lag.Footnote 3 Table 1 reports the Monte Carlo means and standard deviations (in parentheses) of the first-order sample cross-correlation, as defined in (2), and of the four robust cross-correlations introduced in Sect. 3.1, for the two processes, Gaussian white noise and EGARCH, and for the two types of contamination; Fig. 4 displays the corresponding box-plots.

Table 1 Monte Carlo means and standard deviations of several estimators of the first-order cross-correlation between past and current squared observations from uncorrelated stationary processes
Fig. 4 Monte Carlo distribution of estimated first-order cross-correlations using different estimators: the SAMPLE cross-correlation; the cross-correlation median estimator (COMED); the median cross-correlation coefficient (MED); the Blomqvist cross-correlation coefficient (B); and the weighted cross-correlation estimator (W)

As expected, when the series is a homoscedastic Gaussian white noise and there are no outliers or there is one isolated outlier, all estimators behave similarly and the sample cross-correlations perform very well. Note that, in this case, the sample correlation is the maximum likelihood estimator of its theoretical counterpart and therefore it is consistent and asymptotically unbiased and efficient. In contrast, the robust estimators have, in general, slightly larger dispersion, since they are not as efficient as maximum likelihood estimators. However, when there are two consecutive outliers, the sample cross-correlation breaks down and becomes unreliable: its distribution is completely pushed downwards, and it would be estimating a large negative asymmetry when there is none. In contrast, all the robust estimators considered perform very well in terms of bias, and \(r_{12,COMED}(1)\) also performs quite well in terms of variance.

When the simulated process is an EGARCH, a different picture emerges. When the series is not contaminated, both the sample cross-correlation, \(r_{12}(1)\), and the weighted cross-correlation, \(r_{12,W}(1)\), perform better than any of the other robust measures originally designed to estimate pairwise correlations in bivariate Normal distributions. However, when the EGARCH series is contaminated by one single negative outlier, the sample cross-correlation is pushed upwards towards zero, as postulated by the theoretical results in Sect. 2, and it would be unable to detect the true leverage effect in the data. The situation becomes even worse in the presence of two consecutive outliers, where the sample cross-correlation becomes completely unreliable due to its huge negative bias. As expected, the distributions of all the robust cross-correlations remain nearly the same regardless of the presence of outliers. However, the estimators \(r_{12,COMED}(1)\), \(r_{12,B}(1)\) and \(r_{12,MED}(1)\), in spite of their robustness, are biased upwards towards zero, and so they will underestimate the true leverage effect. In contrast, the weighted sample cross-correlation with the modified Ramsay's weights, \(r_{12,W}(1)\), performs surprisingly well in terms of bias, even in the presence of two big outliers. As with the simulated white noise process, the estimator \(r_{12,MED}(1)\) has the largest standard deviation of all the estimators considered; see Table 1. Therefore, the robust cross-correlation \(r_{12,W}(1)\) seems preferable to any other measure considered in this section for the identification of asymmetries in conditionally heteroscedastic models.

4 Discussion

In the previous section, we analysed the finite sample performance of several robust estimators of the cross-correlations, including the estimator in (13) with \(\widehat{S}\) defined as Hampel's median of absolute deviations. Other possible choices for \(\widehat{S}\) are the robust scale estimators \(S_{n}\) and \(Q_{n}\) proposed by Rousseeuw and Croux (1993). Shevlyakov and Smirnov (2011) show that the robust estimator of the pairwise correlation between bivariate Gaussian variables based on \(Q_{n}\) performs better than other robust correlation estimators. Ma and Genton (2000) suggest extending this approach to estimate the autocorrelations of Gaussian time series. In this section, we show that this extension is not so straightforward when the processes involved are non-Gaussian.

Let us consider replacing the scale estimator \(\widehat{S}\) in Eq. (13) by the highly efficient robust scale estimator \(Q_{n}\) proposed by Rousseeuw and Croux (1992, 1993). Given the sample observations \(\mathbf {x=} (X_{1},...,X_{n})\) from a distribution function \(F_{X}\), the scale estimator \(Q_{n}\) is based on an order statistic of all \(\left( {\begin{array}{c}n\\ 2\end{array}}\right) \) pairwise distances and it is defined as follows:

$$\begin{aligned} Q_{n}(\mathbf {x})=c(F_{X})\left\{ \left| X_{i}-X_{j}\right| ,i<j\right\} _{(k)}, \end{aligned}$$
(16)

where \(\{X\}_{(k)}\) denotes the k-th order statistic of X, with \(k\approx \left( {\begin{array}{c}n\\ 2\end{array}}\right) /4\) for large n, and \(c(F_{X})\) is a constant, depending on the shape of the distribution function \(F_{X}\), introduced to achieve Fisher consistency. In particular, if \(F_{X}\) belongs to the location-scale family \(F_{\mu ,\sigma }(x)=F((x-\mu )/\sigma )\), the constant is chosen as follows

$$\begin{aligned} c(F)=1/(K_{F}^{-1}(5/8)), \end{aligned}$$

where \(K_{F}\) is the distribution function of \(X-X^{\prime }\), being X and \(X^{\prime }\) independent random variables with distribution function F; see Rousseeuw and Croux (1993). In particular, in the Gaussian case \((F=\Phi )\), the constant is:

$$\begin{aligned} c(\Phi )=1/(\sqrt{2}\Phi ^{-1}(5/8))=2.21914. \end{aligned}$$

Although \(c(F_{X})\) can also be computed for various other distributions, the FORTRAN code provided by Croux and Rousseeuw (1992) and the MATLAB Library for Robust Analysis (https://wis.kuleuven.be/stat/robust/LIBRA) developed at ROBUST@Leuven compute the estimator \(Q_{n}\) with the Gaussian constant \(c(\Phi )\).Footnote 4
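For illustration, a naive \(O(n^{2})\) sketch of \(Q_{n}\) with the Gaussian constant is given below; production code would use the \(O(n\log n)\) algorithm of Croux and Rousseeuw (1992), and the order statistic here uses the large-n approximation \(k\approx \left( {\begin{array}{c}n\\ 2\end{array}}\right) /4\).

```python
# Naive O(n^2) sketch of the Q_n scale estimator in Eq. (16), using the
# Gaussian constant c(Phi) = 2.21914 and the approximation k ~ C(n,2)/4.
import numpy as np
from itertools import combinations

def qn(x, c=2.21914):
    x = np.asarray(x, dtype=float)
    d = np.array([abs(a - b) for a, b in combinations(x, 2)])
    k = len(d) // 4                      # approximate first quartile of distances
    return c * np.sort(d)[k]
```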

In the time series setting, Ma and Genton (2000) propose the following robust estimator of the serial autocorrelation. Let \(\mathbf {y=}(Y_{1},...,Y_{T})\) be the observations of a stationary process \(Y_{t}\) and let \(\rho (h)=Corr(Y_{t-h},Y_{t})\) be the corresponding autocorrelation function. In this case, the variables X and Y in (12) represent \(Y_{t-h}\) and \(Y_{t}\), which have the same marginal distribution and, consequently, \(\sigma _{X}=\sigma _{Y}\). Therefore, using identity (12) with \(\sigma _{X}=\sigma _{Y}\), plugging the scale estimator \(Q_{n}\) into (13) and taking into account that \(Q_{n}\) is affine equivariant, i.e. \(Q_{n}(aX+b)=|a|Q_{n}(X)\), the robust estimator of \(\rho (h)\) is:

$$\begin{aligned} \widehat{\rho }_{Q}(h)=\frac{Q_{T-h}^{2} (\mathbf {u})-Q_{T-h}^{2}(\mathbf {v})}{Q_{T-h}^{2} (\mathbf {u})+Q_{T-h}^{2}(\mathbf {v})} \end{aligned}$$
(17)

where \(\mathbf {u}\) is a vector of length \(T-h\) defined as \(\mathbf {u}=(Y_{1}+Y_{h+1},...,Y_{T-h}+Y_{T})\) and \(\mathbf {v}\) is another vector of length \(T-h\) defined as \(\mathbf {v}=(Y_{1}-Y_{h+1},...,Y_{T-h}-Y_{T})\). Ma and Genton (2000) argue that the estimator \(\widehat{\rho }_{Q}(h)\) is independent of the choice of the constants needed to compute the scale estimators \(Q_{n}\) involved in (17). In another framework, Fried and Gather (2005) use the estimator \(\widehat{\rho }_{Q}(1)\) and also state that such constants cancel out. However, as we show below, the constants do not cancel out in general, and this simplification only applies for Gaussian variables. By rewriting \(Q_{n}(\mathbf {x})=c(F_{X})Q_{n}^{*}(\mathbf {x})\), where \(Q_{n}^{*}(\mathbf {x})=\left\{ \left| X_{i}-X_{j}\right| ,i<j\right\} _{(k)}\), the estimator \(\widehat{\rho }_{Q}(h)\) in (17) becomes:

$$\begin{aligned} \widehat{\rho }_{Q}(h)=\frac{c^{2}(F_{U})Q_{T-h}^{*2}(\mathbf {u} )-c^{2}(F_{V})Q_{T-h}^{*2}(\mathbf {v})}{c^{2}(F_{U})Q_{T-h}^{*2}( \mathbf {u})+c^{2}(F_{V})Q_{T-h}^{*2}(\mathbf {v})} \end{aligned}$$
(18)

where \(F_{U}\) and \(F_{V}\) denote the cumulative distribution functions of \(Y_{t-h}+Y_{t}\) and \(Y_{t-h}-Y_{t}\), respectively. Note that this estimator requires computing \(c(F_{U})\) and \(c(F_{V})\). As Lévy-Leduc et al. (2011) point out, this is easily done in the Gaussian case, where \(c(F_{U})=c(F_{V})=2.21914\); otherwise, the problem could become infeasible. Moreover, the constants only cancel out if \(c(F_{U})=c(F_{V})\), a condition that is rarely satisfied unless the process \(Y_{t}\) is Gaussian.
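Under Gaussianity, where the constants do cancel, the estimator (17) reduces to the following sketch, which reuses the qn() function above (so both scale estimates share the same Gaussian constant):

```python
# Sketch of the Ma and Genton (2000) autocorrelation estimator in Eq. (17);
# valid as written only when the same constant applies to u and v (Gaussian case).
import numpy as np

def rho_q(y, h):
    y = np.asarray(y, dtype=float)
    u = y[:-h] + y[h:]                   # (Y_1 + Y_{h+1}, ..., Y_{T-h} + Y_T)
    v = y[:-h] - y[h:]                   # (Y_1 - Y_{h+1}, ..., Y_{T-h} - Y_T)
    qu2, qv2 = qn(u) ** 2, qn(v) ** 2
    return (qu2 - qv2) / (qu2 + qv2)     # common constant cancels in the ratio
```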

Turning back to the estimation of cross-correlations between past and squared observations of uncorrelated stationary processes, the situation becomes even trickier. In this case, the two series involved are the lagged levels, \(Y_{t-h}\), and the squares, \(Y_{t}^{2}\), which are neither equally distributed nor Gaussian. Furthermore, the variables X and Y in (12) now stand for \(Y_{t-h}\) and \(Y_{t}^{2}\), so the constraint \(\sigma _{X}=\sigma _{Y}\) no longer holds. Hence, the first step to compute the robust estimator of \(\rho _{12}(h)=Corr(Y_{t-h},Y_{t}^{2})\) based on identities (11) and (12) is to ‘standardize’ the two series involved. Let \(\widetilde{Y_{t}}=Y_{t}/Q_{T}(\mathbf {y})\) and \(\widetilde{Y_{t}^{2}}=Y_{t}^{2}/Q_{T}(\mathbf {y}^{2})\) denote the robust ‘standardized’ forms of the series \(Y_{t}\) and \(Y_{t}^{2}\), respectively, where \(Q_{T}(\mathbf {y})=c(F_{Y})Q_{T}^{*}(\mathbf {y})\) and \(Q_{T}(\mathbf {y}^{2})=c(F_{Y^{2}})Q_{T}^{*}(\mathbf {y}^{2})\). The second step is to form the vector of sums and the vector of differences:

$$\begin{aligned} \widetilde{\mathbf {u}}= & {} (\widetilde{Y}_{1}+\widetilde{Y_{h+1}^{2}},\ldots ,\widetilde{Y}_{T-h}+\widetilde{Y_{T}^{2}}), \\ \widetilde{\mathbf {v}}= & {} (\widetilde{Y}_{1}-\widetilde{Y_{h+1}^{2}},\ldots ,\widetilde{Y}_{T-h}-\widetilde{Y_{T}^{2}}). \end{aligned}$$

The third step is to calculate the robust variance estimates of these two vectors, \(Q_{T-h}^{2}(\widetilde{\mathbf {u}})\) and \(Q_{T-h}^{2}(\widetilde{\mathbf {v}})\), and finally to replace these variance estimators in (11), obtaining the following estimator of \(\rho _{12}(h)\):

$$\begin{aligned} r_{12,Q}(h)=\frac{c^{2}(F_{\widetilde{U}})Q_{T-h}^{*2}(\widetilde{\mathbf {u}})-c^{2}(F_{\widetilde{V}})Q_{T-h}^{*2}(\widetilde{\mathbf {v}})}{c^{2}(F_{\widetilde{U}})Q_{T-h}^{*2}(\widetilde{\mathbf {u}})+c^{2}(F_{\widetilde{V}})Q_{T-h}^{*2}(\widetilde{\mathbf {v}})} \end{aligned}$$
(19)

where \(F_{\widetilde{U}}\) and \(F_{\widetilde{V}}\) denote the cumulative distribution functions of \(\widetilde{Y}_{t-h}+\widetilde{Y_{t}^{2}}\) and \(\widetilde{Y}_{t-h}-\widetilde{Y_{t}^{2}}\), respectively.

Therefore, it is clear that the estimator (19) requires computing four constants, \(c(F_{Y})\), \(c(F_{Y^{2}})\), \(c(F_{\widetilde{U}})\) and \(c(F_{\widetilde{V}})\), and this task seems infeasible. Note that, even if Y were Gaussian, \(Y^{2}\) would no longer be Gaussian, and neither would \(Y+Y^{2}\) nor \(Y-Y^{2}\). Moreover, even if we could compute the constants in such a case, the assumption of Gaussianity for Y would be unsuitable, because the distribution of financial returns is known to be heavy-tailed.
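The three-step construction of \(r_{12,Q}(h)\) can be sketched as follows, deliberately setting all constants to the Gaussian value, i.e. exactly the simplification warned against above; qn() is the naive sketch given after Eq. (16).

```python
# Sketch of r_{12,Q}(h) in Eq. (19) with all constants equal to c(Phi),
# i.e. proceeding as in the Gaussian setting.
import numpy as np

def r12_q(y, h):
    y = np.asarray(y, dtype=float)
    y2 = y ** 2
    yt, y2t = y / qn(y), y2 / qn(y2)     # step 1: robust 'standardization'
    u = yt[:-h] + y2t[h:]                # step 2: vector of sums
    v = yt[:-h] - y2t[h:]                #         and vector of differences
    qu2, qv2 = qn(u) ** 2, qn(v) ** 2    # step 3: robust variance estimates
    return (qu2 - qv2) / (qu2 + qv2)
```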

To further illustrate what would happen if we ignored the constants and proceeded as in the Gaussian setting, we repeat the same Monte Carlo experiment described in the previous sections for the EGARCH model, computing for each replicate the estimator \(r_{12,Q}(1)\) in (19) with all constants equal to \(c(\Phi )\). The Monte Carlo means and standard deviations are reported in the last row of Table 1. As expected, the results are disappointing: the estimator \(r_{12,Q}\) turns out to be the most biased among the robust estimators considered, and it also has larger variance than both \(r_{12,COMED}(1)\) and \(r_{12,B}(1)\).

Hence, one should be very cautious before implementing robust estimators originally designed for bivariate Gaussian distributions in a time series setting with potential non-Gaussian variables.

5 Empirical application

In this section we illustrate the previous results by analyzing a series of daily Dow Jones Industrial Average (DJIA) returns observed from October 2, 1928 to August 30, 2013, comprising 21,409 observations. This is the series considered by Charles and Darné (2014). Figure 5 plots the data. As expected, the returns exhibit the usual volatility clustering, along with some occasional extreme values that could be regarded as outliers, the largest one corresponding to October 19, 1987, when the index fell by 22.6 %. Charles and Darné (2014) apply the procedure proposed by Laurent et al. (2014) to detect and correct additive outliers in this return series and show that large shocks in the volatility of the DJIA are mainly due to particular events (financial crashes, US elections, wars, monetary policies, etc.), but they also find that some shocks are not identified as outliers because they occur during high volatility periods.

Fig. 5 Daily returns of the Dow Jones Industrial Average (DJIA) index observed from October 2, 1928 to August 30, 2013

Fig. 6 Sample and robust first-order cross-correlations computed with subsamples of size T \(=\) 1000 using a rolling window of both the DJIA daily returns (top panel) and its outlier-corrected counterpart (bottom panel). The dates in the x-axis refer to the end-of-window dates

In order to show how the potential outliers can mislead the detection of the leverage effect, as measured by the cross-correlations between past and squared returns, we use a rolling window scheme, where the sample size used to compute the cross-correlations is \(T=1000\). Therefore, we first estimate the cross-correlations over the period from 2 October 1928 to 28 September 1932. When a new observation is added to the sample, we delete the first observation and re-estimate the cross-correlations. This process is repeated until we reach the last 1000 observations in the sample, from November 2, 2009 to August 30, 2013. This amounts to considering 20,410 different subsamples covering periods of different volatility levels and different types and sizes of outliers. For instance, the first subsample, running from 2 October 1928 to 28 September 1932, includes outliers associated with the 1929 Stock market crash, while the 13900th subsample, corresponding to observations from 21 March 1984 to 4 March 1988, includes outliers due to the 1987 Stock market crash. For each subsample considered, we compute the first-order sample cross-correlation, \(r_{12}(1)\), using Eq. (2) and the corresponding robust weighted cross-correlation, \(r_{12,W}(1)\), as defined in Eq. (15), for both the original return series and the outlier-adjusted return series of Charles and Darné (2014).Footnote 5 Figure 6 displays the values of these cross-correlations for the 20,410 subsamples considered; a sketch of this rolling-window computation is given below. Note that the dates in the x-axis refer to the end-of-window dates. Figure 6 also displays the 95 % confidence bands based on the asymptotic distribution of the sample cross-correlations under the null of zero cross-correlations; see Fuller (1996). These bands are only shown for guidance, since not all the conditions for the asymptotic results to hold are fulfilled in our setting. Nevertheless, it is worth noting that the standard deviation predicted by the asymptotic theory for samples of size \(T=1000\) is approximately 0.032, which is just the value of the standard deviation of \(r_{12,W}(1)\) in our Monte Carlo experiments with a white noise process; see Table 1.
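A minimal sketch of the rolling-window scheme, assuming r12() and r12_weighted() are the sketches given in Sects. 2.1 and 3.1 and that `returns` holds the DJIA return series:

```python
# Sketch of the rolling-window first-order cross-correlations; with 21,409
# observations and window = 1000 this yields 20,410 subsamples.
import numpy as np

def rolling_cross_corr(returns, window=1000, h=1):
    returns = np.asarray(returns, dtype=float)
    n = len(returns) - window + 1
    out = np.empty((n, 2))
    for i in range(n):
        z = returns[i:i + window]
        out[i] = (r12(z, h), r12_weighted(z, h))   # sample and robust versions
    return out                                     # one row per end-of-window date
```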

Several conclusions emerge from Fig. 6. First, this figure clearly reveals how extreme observations can bias the sample cross-correlation and could lead to a wrong identification of asymmetries. As expected, the first-order sample cross-correlation, \(r_{12}(1)\), presents several sharp drops and rises when computed for the original returns (top panel) and is quite different from its robust counterpart. These changes are generally associated with the entrance and/or exit of outlying observations in the corresponding subsample. For instance, the entrance of the “Black Monday” October 19, 1987, when the DJIA sustained its largest 1-day drop (\(y_{14804}=-22.61\)), following another large negative return (\(y_{14803}=-4.60\)), produces a sudden fall in the value of \(r_{12}(1)\) from nearly zero to a negative value around \(-0.17\). In contrast, the next sudden rise in the value of \(r_{12}(1)\), from nearly \(-0.11\) to a positive value around 0.10, is due to the consecutive exit from the corresponding subsamples of the “Black Monday” and two adjacent extreme observations, \(y_{14805}=5.88\) (20/10/1987) and \(y_{14806}=10.15\) (21/10/1987). When these three observations, the first one negative and the other two positive, are in the subsample, the value of \(r_{12}(1)\) is pushed downwards to a negative value, but when the first of these observations leaves the sample and only the positive outliers remain, \(r_{12}(1)\) is pushed upwards to a value even larger than zero, as postulated by our theoretical result in Sect. 2. Moreover, the cluster of the most negative values of \(r_{12}(1)\), ranging from \(-0.25\) to \(-0.3\), is related to the entrance/exit in the corresponding subsamples of two consecutive extreme observations, namely \(y_{8422}=-5.71\) (28/5/1962) and \(y_{8423}=4.68\) (29/5/1962), the former being identified as an outlier in Charles and Darné (2014). According to our theoretical result in Sect. 2, the entrance of these two observations, the first one negative and the second positive, biases the first-order sample cross-correlation downwards, but when the first of these observations leaves the sample and only the positive outlier remains, \(r_{12}(1)\) is again pushed to a value closer to zero. Similarly, the following sharp rise in \(r_{12}(1)\) from around \(-0.17\) to \(-0.05\) is due to an isolated positive outlier, namely \(y_{8799}=4.50\) (26/11/1963).

Another remarkable feature of Fig. 6 is the difference between the values of the sample cross-correlation in the top and bottom panels, underscoring the lack of resistance of \(r_{12}(1)\) to the presence of outliers. In contrast, the weighted cross-correlation, \(r_{12,W}(1)\), is robust to the presence of potential outliers: its values remain nearly the same in the two panels, indicating that the leverage effect suggested by the sample cross-correlation could be misleading in some cases.

Noticeably, the weighted robust and the sample first-order cross-correlations are quite similar when computed for the outlier-corrected series (bottom panel), but the latter still exhibits some breaks even in this case. These breaks are associated with extreme observations that were neither identified as outliers nor corrected in Charles and Darné (2014). For instance, the first sharp drop in \(r_{12}(1)\) from around \(-0.10\) to \(-0.24\), and its immediate rise back to \(-0.10\), have to do with the presence/absence of two pairs of outliers: a doublet positive outlier made up of \(y_{1130}=9.03\) (19/4/1933) and \(y_{1131}=5.80\) (20/4/1933) and a doublet negative outlier made up of \(y_{1194}=-7.07\) (20/7/1933) and \(y_{1195}=-7.84\) (21/7/1933). A similar situation arises in one of the last subsamples, where the value of \(r_{12}(1)\) decays towards \(-0.22\); such a big drop is associated with the entrance of three consecutive extreme observations at the end of the subsample, namely \(y_{20162}=-5.07\) (19/11/2008), \(y_{20163}=-5.56\) (20/11/2008) and \(y_{20164}=6.54\) (21/11/2008), which, according to our theoretical result in Sect. 2, bias the first-order sample cross-correlation downwards.

Finally, Fig. 6 highlights that the value of the robust cross-correlation, \(r_{12,W}(1)\), does not remain constant across all the subsamples considered. This feature suggests time-varying leverage effects, with periods where \(r_{12,W}(1)\) is nearly zero (possibly indicating no leverage) followed by periods where \(r_{12,W}(1)\) clearly takes negative values (leverage effect). In particular, there seem to be three sample periods where the leverage effect, as measured by \(r_{12,W}(1)\), is stronger: a first period at the beginning of the sample, from April 1933 till June 1936; a second long period that spans from July 1940 till April 1971, approximately; and a final period from around September 1989 till the end of the sample. Notice that only during these periods are the robust sample cross-correlations outside the approximate 95 % asymptotic confidence bands. Obviously, this feature requires further investigation; see, for instance, the recent papers of Bandi and Renò (2012), Yu (2012) and Jensen and Maheu (2014) dealing with time-varying leverage effects.

6 Conclusions

This paper shows that outliers can severely affect the identification of the asymmetric response of volatility to shocks of different signs when this identification is based on the sample cross-correlations between past and squared returns. In particular, the presence of one isolated outlier biases such cross-correlations towards zero and hence could hide true leverage effect, while the presence of two big consecutive outliers could lead to detecting either spurious asymmetries or asymmetries of the wrong sign. As a way to protect against the pernicious effects of outliers, we suggest using robust cross-correlations. Our Monte Carlo experiments show that, among the robust measures considered in this paper, the weighted cross-correlation based on a slight modification of the serial correlation with Ramsay's weights proposed by Teräsvirta and Zhao (2011) seems to be the most appropriate when dealing with conditionally heteroscedastic models. These results are further illustrated in the empirical application. It is shown that the first-order sample cross-correlation between past and squared daily DJIA returns is badly affected by the presence of outliers, while its robust counterpart is not. In fact, depending on which measure of cross-correlation is used, the detection of asymmetries could be misleading. It is also shown that some observations which are not identified as outliers may still have a distorting effect on the identification of asymmetries in the volatility, underscoring the advantages of using robust methods as a protection against outliers rather than detecting and correcting them. The empirical application also points to the existence of possible time-varying leverage effects. We leave this topic for further research, along with the problem of robust estimation of asymmetric GARCH models.