References of "Hambuckers, julien"
Peer Reviewed
What are the determinants of the operational losses severity distribution? A multivariate analysis based on a semiparametric approach.
Hambuckers, Julien ULg; Heuchenne, Cédric ULg; Lopez, Olivier

Poster (2015, June)

In this paper, we analyse a database of around 41,000 operational losses from the European bank UniCredit. We investigate three kinds of covariates: firm-specific, financial and macroeconomic covariates, and we study their relationship with the shape parameter of the severity distribution. To do so, we introduce a semiparametric approach to estimate the shape parameter of the severity distribution, conditional on large sets of covariates. Relying on a single-index assumption to perform a dimension reduction, this approach avoids the curse of dimensionality of purely nonparametric multivariate techniques as well as overly restrictive parametric assumptions. We show that taking into account variables measuring the economic well-being of the bank can cause the required Operational Value-at-Risk to vary drastically. In particular, a high pre-tax ROE, efficiency ratio and stock price are associated with a low shape parameter of the severity distribution, whereas a high market volatility, leverage ratio and unemployment rate are associated with higher tail risks. Finally, we argue that the proposed approach could be a useful tool to improve the estimation of the parameters in a Loss Distribution Approach and an interesting methodology to study capital requirement variations through scenario analyses.
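For orientation, the modelling idea behind this approach can be sketched as follows (the notation below is mine, not taken from the paper): losses above a high threshold u are approximately Generalized Pareto distributed, with a shape parameter that depends on the covariates only through a single index,

\Pr(Y - u > y \mid Y > u,\, X = x) \;\approx\; \Bigl(1 + \gamma(x)\,\tfrac{y}{\sigma(x)}\Bigr)^{-1/\gamma(x)}, \qquad \gamma(x) = g(\beta^\top x),

where g is an unspecified link function estimated nonparametrically and \beta is the index vector, so that a high-dimensional covariate vector enters only through the scalar \beta^\top x.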

Full Text
Nonparametric and bootstrap techniques applied to financial risk modeling
Hambuckers, Julien ULg

Doctoral thesis (2015)

For the purpose of quantifying financial risks, risk managers need to model the behavior of financial variables. However, the construction of such mathematical models is a difficult task that requires careful statistical approaches. Among the important choices that must be addressed, we can list the error distribution, the structure of the variance process and the relationship between parameters of interest and explanatory variables. In particular, one wishes to avoid procedures that rely either on overly rigid parametric assumptions or on inefficient estimation procedures. In this thesis, we develop statistical procedures that tackle some of these issues in the context of three financial risk modelling applications. In the first application, we are interested in selecting the error distribution in a multiplicative heteroscedastic model without relying on a parametric volatility assumption. To avoid this source of misspecification, we develop a set of model estimation and selection tests relying on nonparametric volatility estimators and focusing on the tails of the distribution. We illustrate this technique on UBS, BOVESPA and EUR/USD daily return series. In the second application, we are concerned with modeling the tail of the operational losses severity distribution, conditional on several covariates. We develop a flexible conditional GPD model, where the shape parameter is an unspecified link function (nonparametric part) of a linear combination of covariates (single-index part), avoiding the curse of dimensionality. We successfully apply this technique to two original databases, using macroeconomic and firm-specific variables as covariates. In the last application, we provide an efficient way to estimate the predictive ability of trading algorithms. Instead of relying on subjective and noisy sample-splitting techniques, we propose an adaptation of the .632 bootstrap technique to the time series context. We apply this technique to stock prices to compare 12,000 trading rule parametrizations and show that none can beat a simple buy-and-hold strategy.

Full Text
A semiparametric model for Generalized Pareto regression based on a dimension reduction assumption
Hambuckers, Julien ULg; Heuchenne, Cédric ULg; Lopez, Olivier

E-print/Working paper (2015)

In this paper, we consider a regression model in which the tail of the conditional distribution of the response can be approximated by a Generalized Pareto distribution. Our model is based on a semiparametric single-index assumption on the conditional tail index, while no further assumption is made on the conditional scale parameter. The underlying dimension reduction assumption makes the procedure particularly attractive when the dimension of the covariate vector is high, a setting in which purely nonparametric techniques fail while purely parametric ones are too rigid to fit the data correctly. We derive the asymptotic normality of the proposed estimators and describe an iterative algorithm for their practical implementation. Our results are supported by simulations and by a practical application to a public database of operational losses.
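The abstract refers to an iterative algorithm for the practical implementation. The Python sketch below shows one plausible alternating scheme of this kind: kernel-weighted local GPD fits along the current index, followed by an update of the index coefficients. It is a rough stand-in written under my own assumptions (Gaussian kernel, positive shape parameter, hypothetical helper names such as fit_single_index_gpd), not the authors' actual procedure.

import numpy as np
from scipy.optimize import minimize

def gpd_nll(y, xi, sigma, w=1.0):
    # Weighted negative log-likelihood of the GPD; xi > 0 and sigma > 0 assumed.
    if np.any(np.asarray(sigma) <= 0) or np.any(np.asarray(xi) <= 0):
        return np.inf
    return np.sum(w * (np.log(sigma) + (1.0 + 1.0 / xi) * np.log1p(xi * y / sigma)))

def local_gpd_fit(u0, u, y, h):
    # Kernel-weighted GPD fit around index value u0; returns (xi, sigma).
    w = np.exp(-0.5 * ((u - u0) / h) ** 2)
    res = minimize(lambda p: gpd_nll(y, p[0], p[1], w),
                   x0=np.array([0.3, y.mean()]), method="Nelder-Mead")
    return res.x

def fit_single_index_gpd(X, y, h=0.3, n_iter=5):
    # Alternate between (a) estimating the link along the current index and
    # (b) updating the index vector beta (normalised for identifiability).
    beta = np.ones(X.shape[1]) / np.sqrt(X.shape[1])
    for _ in range(n_iter):
        u = X @ beta
        order = np.argsort(u)
        fits = np.array([local_gpd_fit(u0, u, y, h) for u0 in u])   # step (a)
        def profile_nll(b):                                          # step (b)
            b = b / np.linalg.norm(b)
            u_new = X @ b
            xi = np.interp(u_new, u[order], fits[order, 0])
            sg = np.interp(u_new, u[order], fits[order, 1])
            return gpd_nll(y, xi, sg)
        beta = minimize(profile_nll, beta, method="Nelder-Mead").x
        beta = beta / np.linalg.norm(beta)
    return beta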

Peer Reviewed
Identifying the best technical trading rule: a .632 bootstrap approach.
Hambuckers, Julien ULg; Heuchenne, Cédric ULg

Conference (2014, December 07)

In this paper, we estimate the out-of-sample predictive ability of a set of trading rules. Usually, this ability is estimated using a rolling-window sample-splitting scheme, since true out-of-sample data are rarely available. We argue that this method makes poor use of the available information and creates data-mining possibilities. Instead, we introduce an alternative bootstrap approach based on the .632 bootstrap principle. This method makes it possible to build in-sample and out-of-sample bootstrap data sets that do not overlap while exhibiting the same time dependencies. We illustrate our methodology on IBM and Microsoft daily stock prices, comparing 11 trading rule specifications. For the data sets considered, two different filter rule specifications have the highest out-of-sample mean excess returns. However, none of the tested rules can beat a simple buy-and-hold strategy when trading at a daily frequency.
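For reference, the classical i.i.d. .632 estimator (Efron, 1983) on which this approach builds combines the apparent (in-sample) error with the error on observations left out of the bootstrap samples:

\widehat{\mathrm{Err}}^{(.632)} \;=\; 0.368\,\overline{\mathrm{err}} \;+\; 0.632\,\widehat{\mathrm{Err}}^{(1)},

where \overline{\mathrm{err}} is the resubstitution error, \widehat{\mathrm{Err}}^{(1)} is the leave-one-out bootstrap error, and 0.632 \approx 1 - e^{-1} is the expected fraction of distinct observations appearing in a bootstrap sample. How the authors adapt the in-sample/out-of-sample split to dependent data is specific to their paper and not reproduced here.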

Full Text
Peer Reviewed
Estimating the out-of-sample predictive ability of trading rules: a robust bootstrap approach
Hambuckers, Julien ULg; Heuchenne, Cédric ULg

E-print/Working paper (2014)

In this paper, we estimate the out-of-sample predictive ability of a set of trading rules. Usually, this ability is estimated using a rolling-window sample-splitting scheme, since true out-of-sample data are rarely available. We argue that this method makes poor use of the available information and creates data-mining possibilities. Instead, we introduce an alternative bootstrap approach based on the .632 bootstrap principle. This method makes it possible to build in-sample and out-of-sample bootstrap data sets that do not overlap while exhibiting the same time dependencies. We illustrate our methodology on IBM and Microsoft daily stock prices, comparing 11 trading rule specifications. For the data sets considered, two different filter rule specifications have the highest out-of-sample mean excess returns. However, none of the tested rules can beat a simple buy-and-hold strategy when trading at a daily frequency.

Full Text
Peer Reviewed
A new methodological approach for error distributions selection in Finance
Hambuckers, Julien ULg; Heuchenne, Cédric ULg

E-print/Working paper (2014)

In this article, we propose a robust methodology to select the most appropriate error distribution candidate in a classical multiplicative heteroscedastic model. In a first step, unlike the traditional approach, we do not use any GARCH-type estimation of the conditional variance. Instead, we propose to use a recently developed nonparametric procedure (Mercurio and Spokoiny, 2004): the Local Adaptive Volatility Estimation (LAVE). The motivation for using this method is to avoid a possible model misspecification for the conditional variance. In a second step, we suggest a set of estimation and model selection procedures (Berk-Jones tests, kernel density-based selection, censored likelihood score, coverage probability) based on the resulting residuals. These methods make it possible to assess the global fit of a given distribution as well as to focus on its behavior in the tails. Finally, we illustrate our methodology on three time series (UBS stock returns, BOVESPA returns and EUR/USD exchange rates).
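To make the second step concrete, here is a minimal Python sketch of tail-focused distribution selection on standardized residuals. It is written under my own assumptions: a plain rolling-window standard deviation stands in for the LAVE estimator, the simulated return series and candidate set are placeholders, and only a censored likelihood score in the spirit of Diks et al. is shown.

import numpy as np
from scipy import stats

# Placeholder return series (a real application would use observed returns).
returns = np.random.default_rng(0).standard_t(df=4, size=2000) * 0.01

# Step 1 (stand-in): rolling-window volatility instead of the LAVE estimator.
window = 50
sigma = np.array([returns[t - window:t].std() for t in range(window, len(returns))])
resid = returns[window:] / sigma

# Step 2: compare candidate error distributions on the standardized residuals,
# scoring them with a left-tail censored log-likelihood.
candidates = {"normal": stats.norm, "student t": stats.t, "GED": stats.gennorm}
r = np.quantile(resid, 0.05)                      # tail region of interest
for name, dist in candidates.items():
    params = dist.fit(resid)
    score = np.where(resid <= r,
                     dist.logpdf(resid, *params),          # density inside the tail
                     np.log1p(-dist.cdf(r, *params)))      # censored mass outside
    print(f"{name:>9}: mean censored log-score = {score.mean():.4f}")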

Full Text
Peer Reviewed
A new methodological approach for error distributions selection in Finance
Hambuckers, Julien ULg; Heuchenne, Cédric ULg

Conference (2014, April)

In this article, we propose a robust methodology to select the most appropriate error distribution candidate in a classical multiplicative heteroscedastic model. In a first step, unlike the traditional approach, we do not use any GARCH-type estimation of the conditional variance. Instead, we propose to use a recently developed nonparametric procedure (Mercurio and Spokoiny, 2004): the Local Adaptive Volatility Estimation (LAVE). The motivation for using this method is to avoid a possible model misspecification for the conditional variance. In a second step, we suggest a set of estimation and model selection procedures (Berk-Jones tests, kernel density-based selection, censored likelihood score, coverage probability) based on the resulting residuals. These methods make it possible to assess the global fit of a given distribution as well as to focus on its behavior in the tails. Finally, we illustrate our methodology on three time series (UBS stock returns, BOVESPA returns and EUR/USD exchange rates).

Peer Reviewed
A new methodological approach for error distributions selection
Hambuckers, Julien ULg; Heuchenne, Cédric ULg

Conference (2013, December 15)

Since the 2008 financial crisis, increasing attention has been devoted to the selection of an adequate error distribution in risk models, in particular for Value-at-Risk (VaR) predictions. We propose a robust methodology to select the most appropriate error distribution candidate in a classical multiplicative heteroscedastic model. In a first step, unlike the traditional approach, we do not use any GARCH-type estimation of the conditional variance. Instead, we propose to use a recently developed nonparametric procedure: the Local Adaptive Volatility Estimation (LAVE). The motivation for using this method is to avoid a possible model misspecification for the conditional variance. In a second step, we suggest a set of estimation and model selection procedures based on the resulting residuals. These methods make it possible to assess the global fit of a given distribution as well as to focus on its behaviour in the tails. Finally, we illustrate our methodology on three time series (UBS stock returns, BOVESPA returns and EUR/USD exchange rates).

A new methodological approach for error distributions selection
Hambuckers, Julien ULg; Heuchenne, Cédric ULg

Scientific conference (2013, November)

Since the 2008 financial crisis, increasing attention has been devoted to the selection of an adequate error distribution in risk models, in particular for Value-at-Risk (VaR) predictions. We propose a robust methodology to select the most appropriate error distribution candidate in a classical multiplicative heteroscedastic model. In a first step, unlike the traditional approach, we do not use any GARCH-type estimation of the conditional variance. Instead, we propose to use a recently developed nonparametric procedure: the Local Adaptive Volatility Estimation (LAVE). The motivation for using this method is to avoid a possible model misspecification for the conditional variance. In a second step, we suggest a set of estimation and model selection procedures based on the resulting residuals. These methods make it possible to assess the global fit of a given distribution as well as to focus on its behaviour in the tails. Finally, we illustrate our methodology on three time series (UBS stock returns, BOVESPA returns and EUR/USD exchange rates).

New issues for the Goodness-of-fit test of the error distribution: a comparison between Sinh-arcsinh and Generalized Hyperbolic distributions
Hambuckers, Julien ULg; Heuchenne, Cédric ULg

Scientific conference (2013, April 30)

In this article, we consider a multiplicative heteroskedastic structure of financial returns and propose a methodology to study the goodness-of-fit of the error distribution. We use non-conventional estimation and model selection procedures (Berk-Jones (1978) tests, Sarno and Valente (2004) hypothesis testing, the Diks et al. (2011) weighting method), based on the local volatility estimator of Mercurio and Spokoiny (2004) and on the bootstrap methodology, to compare the fit performances of candidate density functions. In particular, we introduce the sinh-arcsinh distributions (Jones and Pewsey, 2009) and show that this family of density functions provides better bootstrap IMSE and better weighted Kullback-Leibler distances.
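For context, the sinh-arcsinh family referred to here is, in the parametrisation of Jones and Pewsey (2009) as commonly stated (the notation below is mine), obtained by transforming a standard normal variable; its density is

f(x) \;=\; \frac{\delta \sqrt{1 + S_{\epsilon,\delta}(x)^{2}}}{\sqrt{2\pi\,(1 + x^{2})}}\, \exp\!\Bigl(-\tfrac{1}{2}\, S_{\epsilon,\delta}(x)^{2}\Bigr), \qquad S_{\epsilon,\delta}(x) = \sinh\bigl(\delta\,\operatorname{arcsinh}(x) - \epsilon\bigr),

where \epsilon controls asymmetry and \delta > 0 controls tail weight, with \delta < 1 producing heavier-than-normal tails; a location-scale version is used in practice.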

New issues for the Goodness-of-fit test of the error distribution: a comparison between Sinh-arcsinh and Generalized Hyperbolic distributions
Hambuckers, Julien ULg; Heuchenne, Cédric ULg

Scientific conference (2013, April 19)

In this article, we consider a multiplicative heteroskedastic structure of financial returns and propose a methodology to study the goodness-of-fit of the error distribution. We use non-conventional estimation and model selection procedures (Berk-Jones (1978) tests, Sarno and Valente (2004) hypothesis testing, the Diks et al. (2011) weighting method), based on the local volatility estimator of Mercurio and Spokoiny (2004) and on the bootstrap methodology, to compare the fit performances of candidate density functions. In particular, we introduce the sinh-arcsinh distributions (Jones and Pewsey, 2009) and show that this family of density functions provides better bootstrap IMSE and better weighted Kullback-Leibler distances.

Comments to 'The time inconsistency factor: how banks adapt to their savers mix' (C. Laureti and A. Szafarz, working paper, 2012)
Hambuckers, Julien ULg

Scientific conference (2012, October 23)

Comments about 'The time-Inconsistency Factor: How banks adapt to their savers Mix' by C. Laureti and A. Szafarz (working paper, 2012).

Full Text
Modélisation d'évènements rares à l'aide de distributions non normales : application en finance avec la fonction sinh-arcsinh [Modelling rare events with non-normal distributions: an application in finance with the sinh-arcsinh function]
Hambuckers, Julien ULg

Master's dissertation (2011)

In 2008, the financial crisis highlighted the relative inaccuracy of market risk forecasting models in the financial industry. In particular, extreme events were shown to be regularly underestimated. This problem, already raised in the seminal work of Mandelbrot (1963), is mainly due to financial models relying on the normal law, while empirical evidence shows strong leptokurtosis in financial time series. This stylized fact is particularly damaging for the forecasting of indicators such as Value-at-Risk (VaR). In this study, we tackle this problem by testing a newly developed probability distribution, never used in finance before: the sinh-arcsinh function. By creating different datasets from nonparametric and GARCH models, we fit common distributions (normal, t location-scale, GED, generalized hyperbolic) and the sinh-arcsinh function to the data. We show that, for the leptokurtic datasets extracted from the DJA and the NIKKEI 225, the sinh-arcsinh function provides a better fit than any other function tested. We also test simple VaR models using the normal law, Student's t or the sinh-arcsinh function, to assess the operational efficiency of the sinh-arcsinh function. We show that models using the sinh-arcsinh function provide more accurate in-sample and out-of-sample VaR forecasts than any model using the normal law.
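As an illustration of what such a VaR model can look like in code, here is a minimal Python sketch under my own assumptions: the Jones and Pewsey parametrisation sketched earlier in this list, a simulated placeholder return series, and hypothetical helper names (sas_logpdf, fit_sas, sas_var). It is not the dissertation's implementation.

import numpy as np
from scipy import optimize, stats

def sas_logpdf(x, mu, sig, eps, delta):
    # Log-density of a location-scale sinh-arcsinh distribution
    # (Jones & Pewsey, 2009 parametrisation, as I read it).
    z = (x - mu) / sig
    s = np.sinh(delta * np.arcsinh(z) - eps)
    return (np.log(delta) + 0.5 * np.log1p(s ** 2) - 0.5 * np.log(2 * np.pi)
            - 0.5 * np.log1p(z ** 2) - 0.5 * s ** 2 - np.log(sig))

def fit_sas(x):
    # Crude maximum-likelihood fit; starting values are heuristic.
    def nll(theta):
        mu, log_sig, eps, log_delta = theta
        return -np.sum(sas_logpdf(x, mu, np.exp(log_sig), eps, np.exp(log_delta)))
    theta0 = np.array([x.mean(), np.log(x.std()), 0.0, 0.0])
    res = optimize.minimize(nll, theta0, method="Nelder-Mead")
    mu, log_sig, eps, log_delta = res.x
    return mu, np.exp(log_sig), eps, np.exp(log_delta)

def sas_var(alpha, mu, sig, eps, delta):
    # alpha-quantile of the fitted distribution; its negative is the VaR on returns.
    z = stats.norm.ppf(alpha)
    q = np.sinh((np.arcsinh(z) + eps) / delta)
    return -(mu + sig * q)

# Illustration on simulated heavy-tailed "returns" (placeholder for real data).
rets = np.random.default_rng(1).standard_t(df=5, size=3000) * 0.012
params = fit_sas(rets)
print("1% VaR:", sas_var(0.01, *params))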
