Best Subset, Forward Stepwise or Lasso?: Analysis and Recommendations Based on Extensive Comparisons

  • Trevor Hastie [1]; Robert Tibshirani [1]; Ryan Tibshirani [2]
    1. [1] Stanford University, United States
    2. [2] Carnegie Mellon University, Pittsburgh, United States

  • Published in: Statistical Science, ISSN 0883-4237, Vol. 35, No. 4, 2020, pp. 579–592
  • Language: English
  • DOI: 10.1214/19-STS733
  • Full text not available
  • Abstract
    • In exciting recent work, Bertsimas, King and Mazumder (Ann. Statist. 44 (2016) 813–852) showed that the classical best subset selection problem in regression modeling can be formulated as a mixed integer optimization (MIO) problem. Using recent advances in MIO algorithms, they demonstrated that best subset selection can now be solved at much larger problem sizes than what was thought possible in the statistics community. They presented empirical comparisons of best subset with other popular variable selection procedures, in particular, the lasso and forward stepwise selection. Surprisingly (to us), their simulations suggested that best subset consistently outperformed both methods in terms of prediction accuracy. Here, we present an expanded set of simulations to shed more light on these comparisons. The summary is roughly as follows:

      • neither best subset nor the lasso uniformly dominates the other, with best subset generally performing better in very high signal-to-noise ratio (SNR) regimes, and the lasso better in low SNR regimes;

      • for a large proportion of the settings considered, best subset and forward stepwise perform similarly, but in certain cases in the high SNR regime, best subset performs better;

      • forward stepwise and best subset tend to yield sparser models (when tuned on a validation set), especially in the high SNR regime;

      • the relaxed lasso (actually, a simplified version of the original relaxed estimator defined in Meinshausen (Comput. Statist. Data Anal. 52 (2007) 374–393)) is the overall winner, performing just about as well as the lasso in low SNR scenarios, and nearly as well as best subset in high SNR scenarios.
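      To make the "simplified relaxed lasso" mentioned in the last point concrete: the simplification amounts to fitting the lasso, then refitting unpenalized least squares on the selected support. The sketch below, assuming scikit-learn, illustrates that two-step recipe; the data sizes, noise level, and fixed penalty value are illustrative assumptions (in the paper's experiments the tuning parameter is chosen on a validation set), not settings taken from the study.

      ```python
      # Minimal sketch of the simplified relaxed lasso: lasso for variable
      # selection, then an ordinary least-squares refit on the selected support.
      # All problem dimensions and the penalty alpha are illustrative assumptions.
      import numpy as np
      from sklearn.linear_model import Lasso, LinearRegression

      rng = np.random.default_rng(0)
      n, p, s = 100, 20, 5                 # samples, features, true sparsity (assumed)
      beta = np.zeros(p)
      beta[:s] = 1.0                       # true nonzero coefficients
      X = rng.standard_normal((n, p))
      y = X @ beta + 0.5 * rng.standard_normal(n)   # moderately high SNR

      # Step 1: lasso fit at a fixed penalty (in practice, tuned on validation data)
      lasso = Lasso(alpha=0.1).fit(X, y)
      support = np.flatnonzero(lasso.coef_)

      # Step 2: unpenalized least-squares refit restricted to the selected variables,
      # which undoes the lasso's shrinkage on the retained coefficients
      relaxed = LinearRegression().fit(X[:, support], y)
      coef_relaxed = np.zeros(p)
      coef_relaxed[support] = relaxed.coef_
      ```

      This keeps the lasso's sparsity while removing its shrinkage bias on the selected coefficients, which is roughly why the estimator can track the lasso in low SNR settings and best subset in high SNR settings.
      
      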

