
Documat


New approaches to stochastic frontier analysis

  • Author: Ahmed Mohammed Hussein Shatla
  • Thesis supervisors: Norberto Octavio Corral Blanco (supervisor), Enrique Artime Carlos (co-supervisor)
  • Defended at the Universidad de Oviedo (Spain) in 2017
  • Language: English
  • Thesis committee: Leandro Pardo Llorente (chair), María Angeles Gil Alvarez (secretary), Joaquín Muñoz García (member)
  • Full text not available
  • Abstract
    • Stochastic frontier analysis (SFA) is widely used to study production functions and to estimate the efficiency of individual units (e.g., producers, enterprises). The framework resembles that of standard mixed-effects models, with the particularity that the inefficiency effects have negative asymmetry (skewness).
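The composed-error structure described above can be illustrated with the canonical normal-half normal production frontier. The sketch below simulates it under assumed parameter values (the coefficients and variances are illustrative, not taken from the thesis); inefficiency enters as a non-negative term subtracted from the frontier, which is what induces the negative skewness.

```python
import numpy as np

def simulate_sfa(n, beta=(1.0, 0.5), sigma_v=0.2, sigma_u=0.4, seed=0):
    """Simulate the normal-half normal stochastic frontier model
    y = b0 + b1*x + v - u, with v ~ N(0, sigma_v^2) symmetric noise
    and u ~ |N(0, sigma_u^2)| a non-negative inefficiency term."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 1.0, n)
    v = rng.normal(0.0, sigma_v, n)           # two-sided measurement noise
    u = np.abs(rng.normal(0.0, sigma_u, n))   # half-normal inefficiency
    y = beta[0] + beta[1] * x + v - u         # composed error eps = v - u
    return x, y, u

x, y, u = simulate_sfa(5000)
# Inefficiency pulls output below the frontier, so the composed error
# has a negative mean and a negative sample skewness.
eps = y - (1.0 + 0.5 * x)
print(eps.mean())
```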

      It is of interest to check the adequacy of the frontier model by quantifying the significance of the inefficiency. Our study of maximum likelihood (ML) and the method of moments showed that hypothesis tests based on them have very poor size properties for small and medium sample sizes when the error components follow the normal-half normal, normal-exponential and normal-gamma models. The classical ML-based tests (e.g., the Wald test) perform poorly because of the slow convergence to the asymptotic distribution. We propose several non-parametric hypothesis tests based on skewness and study their properties. Moreover, Monte Carlo simulations suggest that these new methods are competitive and robust in the presence of outliers. The new methods take into account the lack of independence between the ML or least squares residuals.
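The abstract does not specify the form of the skewness-based tests, so as a generic illustration the sketch below implements one standard nonparametric device: under the null of no inefficiency the errors are symmetric, so randomly flipping the signs of centered residuals generates a valid null distribution for the sample skewness. This is a hypothetical stand-in, not the thesis's actual procedure.

```python
import numpy as np

def sign_flip_skewness_test(residuals, n_boot=2000, seed=0):
    """One-sided nonparametric test of H0: symmetric errors (no
    inefficiency) against negative skewness. The null distribution is
    built by randomly flipping signs of the centered residuals, which
    is distribution-free under symmetry."""
    rng = np.random.default_rng(seed)
    r = residuals - np.mean(residuals)
    def skewness(z):
        z = z - z.mean()
        return np.mean(z**3) / np.mean(z**2)**1.5
    observed = skewness(r)
    flips = rng.choice([-1.0, 1.0], size=(n_boot, r.size))
    null = np.array([skewness(f * r) for f in flips])
    p_value = np.mean(null <= observed)   # lower tail: negative skewness
    return observed, p_value

# Clearly left-skewed residuals should be rejected with a small p-value.
rng = np.random.default_rng(1)
res = rng.normal(0, 0.2, 400) - np.abs(rng.normal(0, 0.5, 400))
obs, p = sign_flip_skewness_test(res)
print(round(obs, 3), round(p, 4))
```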

      Several ML-based estimation methods produce really poor results and work properly only when both the sample size and the inefficiency term are large. To address this issue, we introduce a new estimator based on the common area method. This method relies on kernel density estimation, applied to a suitable transformation of the model residuals. Our results show that ML is very sensitive to outliers, while the common area estimator is robust. This is especially noticeable when the positive asymmetry problem occurs, that is, when the sample shows positive skewness while the theoretical model assumes negative skewness.
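The "common area" between two densities f and g can be taken as the overlap integral of min(f, g), which equals 1 when the densities coincide. A minimal sketch of this idea for the normal-half normal model follows: fit the variance parameters by maximizing the overlap between a kernel density estimate of the centered residuals and the theoretical composed-error density (a skew-normal-type density). The grid search, the centering by E[u], and the parameter ranges are illustrative assumptions; the thesis's transformation of the residuals is not reproduced here.

```python
import numpy as np
from scipy.stats import norm, gaussian_kde

def composed_density(e, sigma_v, sigma_u):
    """Density of eps = v - u for the normal-half normal model:
    f(e) = (2/s) * phi(e/s) * Phi(-lambda*e/s)."""
    s = np.hypot(sigma_v, sigma_u)
    lam = sigma_u / sigma_v
    return (2.0 / s) * norm.pdf(e / s) * norm.cdf(-lam * e / s)

def common_area(f, g, dx):
    """Overlap of two densities evaluated on a uniform grid."""
    return np.sum(np.minimum(f, g)) * dx

def common_area_fit(residuals, sigma_grid):
    """Pick (sigma_v, sigma_u) maximizing the overlap between a kernel
    density estimate of the centered residuals and the theoretical
    composed-error density, shifted by E[u] to match the centering."""
    kde = gaussian_kde(residuals)
    grid = np.linspace(residuals.min() - 1.0, residuals.max() + 1.0, 400)
    dx = grid[1] - grid[0]
    f_hat = kde(grid)
    best, best_ca = None, -np.inf
    for sv in sigma_grid:
        for su in sigma_grid:
            mean_u = su * np.sqrt(2.0 / np.pi)   # E[u] for half-normal u
            g = composed_density(grid - mean_u, sv, su)
            ca = common_area(f_hat, g, dx)
            if ca > best_ca:
                best, best_ca = (sv, su), ca
    return best, best_ca

# Centered residuals from a known model; the fit should prefer
# parameters whose theoretical density overlaps the KDE the most.
rng = np.random.default_rng(0)
eps = rng.normal(0, 0.2, 2000) - np.abs(rng.normal(0, 0.5, 2000))
best, ca = common_area_fit(eps - eps.mean(), np.linspace(0.1, 0.8, 8))
print(best, round(ca, 3))
```

A grid search keeps the sketch transparent; any scalar optimizer over (sigma_v, sigma_u) would serve equally well.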

      To improve estimation in the presence of outliers, we propose an algorithm that selectively eliminates extreme points by controlling the skewness of the remaining residuals. The combination of this algorithm with the common area estimator gives the best results for estimation with cross-sectional data.
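One simple way to realize skewness-controlled trimming is sketched below: while the residual skewness has the wrong (positive) sign, delete the most extreme positive residual, subject to a fixed drop budget. The stopping rule and the 5% budget are illustrative assumptions, not the thesis's exact algorithm.

```python
import numpy as np

def trim_wrong_skew(residuals, target_skew=0.0, max_drop=0.05):
    """Iteratively delete the largest residual until the sample
    skewness falls to `target_skew` or a drop budget (a fraction of
    the sample) is exhausted. Sketch of skewness-controlled outlier
    removal for frontier residuals with the wrong-skewness problem."""
    r = np.sort(np.asarray(residuals, dtype=float))
    budget = int(max_drop * r.size)
    def skewness(z):
        z = z - z.mean()
        return np.mean(z**3) / np.mean(z**2)**1.5
    dropped = 0
    while skewness(r) > target_skew and dropped < budget:
        r = r[:-1]        # the largest residual is the prime suspect
        dropped += 1
    return r, dropped

# A negatively skewed sample contaminated with a few large positive
# outliers shows positive (wrong) skewness; trimming restores the sign.
rng = np.random.default_rng(0)
clean = rng.normal(0, 0.2, 300) - np.abs(rng.normal(0, 0.4, 300))
res = np.concatenate([clean, [3.0, 3.5, 4.0]])
trimmed, dropped = trim_wrong_skew(res)
print(dropped)
```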

      Separating heterogeneity from inefficiency is a central issue in SFA, related to the characterization of the error structure. In this scope, we have developed mathematical theory to characterize the inefficiency component. We consider several approaches to longitudinal data analysis in which that component is modeled dynamically, which is more realistic than a static approach to panel stochastic frontier model (PSFM) estimation.

      The difference transformation, which analyzes the differences between time periods rather than the raw data, can be used to dispose of the incidental parameters problem. It therefore avoids the harmful effect of individual-specific heterogeneity, improving the precision of the parameter estimates. We study the most common asymmetric probability distributions, examining the features of the exponential distribution for characterizing the inefficiency term and deriving the corresponding inefficiency index estimators. In this context, we introduce the difference and dummy-individuals estimators for the PSFM. Our results show that the difference model with exponential inefficiency yields smaller standard errors and more precise estimates than the dummy model, especially for small sample sizes. Building on these results, the exponential distribution was generalized by introducing a new model in which the inefficiency component is gamma distributed. The gamma-difference model performs well, particularly with a small sample size and a small inefficiency effect.
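The mechanics of the difference transformation can be shown on a balanced panel y[i,t] = alpha[i] + beta*x[i,t] + v[i,t] - u[i,t]: differencing consecutive periods cancels the individual effect alpha[i] exactly, so the incidental parameters never have to be estimated. The parameter values and the exponential inefficiency draw below are illustrative, not the thesis's simulation design.

```python
import numpy as np

# Balanced panel with individual heterogeneity alpha[i] and an
# exponentially distributed inefficiency term u[i,t].
rng = np.random.default_rng(0)
n, T, beta = 50, 6, 0.7
alpha = rng.normal(0.0, 5.0, size=(n, 1))    # incidental parameters
x = rng.uniform(0.0, 1.0, size=(n, T))
v = rng.normal(0.0, 0.1, size=(n, T))
u = rng.exponential(0.3, size=(n, T))        # exponential inefficiency
y = alpha + beta * x + v - u

# First differences over t: alpha[i] cancels exactly, leaving
# dy = beta*dx + dv - du, free of the individual effects.
dy = np.diff(y, axis=1)
dx = np.diff(x, axis=1)

# Pooled least squares on the differenced data recovers beta without
# ever estimating the n individual effects.
beta_hat = np.sum(dx * dy) / np.sum(dx * dx)
print(round(beta_hat, 3))
```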

      Keywords. Estimator properties of maximum likelihood and the method of moments; Hypothesis testing; Correction of dependent residuals; Contaminated distribution; Confidence intervals; Conditional maximum likelihood; Robust estimation; Common area estimator; Correction of outlier residuals; Difference model; Dummy individuals model; Inefficiency estimators; Exponential distribution; Confluent hypergeometric function; Gamma distribution; Stochastic frontier analysis.

