Big data analytics: integrating penalty strategies


Ahmed S. E., YÜZBAŞI B.

INTERNATIONAL JOURNAL OF MANAGEMENT SCIENCE AND ENGINEERING MANAGEMENT, vol.11, no.2, pp.105-115, 2016 (ESCI)

  • Publication Type: Article / Full Article
  • Volume: 11 Issue: 2
  • Publication Date: 2016
  • DOI: 10.1080/17509653.2016.1153252
  • Journal Name: INTERNATIONAL JOURNAL OF MANAGEMENT SCIENCE AND ENGINEERING MANAGEMENT
  • Journal Indexes: Emerging Sources Citation Index (ESCI), Scopus
  • Page Numbers: pp.105-115
  • Keywords: Sparse regression models, penalty and shrinkage estimation, estimation strategies, Monte Carlo simulation, MODEL SELECTION, VARIABLE SELECTION, ORACLE PROPERTIES, LINEAR-MODELS, LASSO, REGRESSION, SHRINKAGE
  • İnönü University Affiliated: Yes

Abstract

We present efficient estimation and prediction strategies for the classical multiple regression model when the dimension of the parameter vector is larger than the number of observations. These strategies are motivated by penalty estimation and Stein-type estimation procedures. More specifically, we consider the estimation of regression parameters in sparse linear models when some of the predictors may have a very weak influence on the response of interest. In a high-dimensional situation, a number of variable selection techniques exist. However, they yield different subset models and may select different numbers of predictors. Generally speaking, the least absolute shrinkage and selection operator (Lasso) approach produces an over-fitted model compared with its competitors, namely the smoothly clipped absolute deviation (SCAD) method and the adaptive Lasso (aLasso). Thus, prediction based only on a submodel selected by such methods will be subject to selection bias. In order to minimize this inherited bias, we suggest combining two models to improve estimation and prediction performance. In the context of two competing models, where one model includes more predictors than the other based on relatively aggressive variable selection strategies, we investigate the relative performance of Stein-type shrinkage and penalty estimators. The shrinkage estimator significantly improves the prediction performance of submodels selected by existing Lasso-type variable selection methods. A Monte Carlo simulation study is carried out using the relative mean squared error (RMSE) criterion to appraise the performance of the listed estimators. The proposed strategy is applied to the analysis of several real high-dimensional data sets.
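To make the combination idea concrete, below is a minimal Monte Carlo sketch, not the paper's exact estimators: it fits a mildly penalized Lasso as the "larger" candidate model, builds a more aggressive adaptive-Lasso-style candidate by reweighting the penalty, combines the two with a positive-part Stein-type shrinkage rule, and reports mean squared errors together with RMSE relative to the Lasso. The function name `simulate`, the penalty levels, the form of the distance statistic `D`, and the weak-signal design are all illustrative assumptions.

```python
# Illustrative sketch only: a positive-part Stein-type combination of two
# Lasso-based candidate estimators, evaluated by Monte Carlo relative MSE.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

def simulate(n=100, p=200, n_strong=5, n_weak=10, weak=0.1, reps=50):
    """Compare Lasso, an aggressive submodel, and their shrinkage combination."""
    beta = np.zeros(p)
    beta[:n_strong] = 1.0                     # strong signals
    beta[n_strong:n_strong + n_weak] = weak   # weak signals (hypothetical design)
    mse = {"lasso": 0.0, "aggressive": 0.0, "shrinkage": 0.0}
    for _ in range(reps):
        X = rng.standard_normal((n, p))
        y = X @ beta + rng.standard_normal(n)

        # "Over-fitted" candidate: plain Lasso with a mild penalty.
        b_full = Lasso(alpha=0.05, max_iter=10000).fit(X, y).coef_

        # Aggressive candidate: adaptive-Lasso-style reweighting of the penalty
        # by 1/|b_full| (columns the Lasso dropped stay effectively dropped).
        w = 1.0 / np.maximum(np.abs(b_full), 1e-8)
        b_sub = Lasso(alpha=0.05, max_iter=10000).fit(X / w, y).coef_ / w

        # Positive-part Stein-type combination of the two candidates.
        # The statistic D and the constant (k - 2) are illustrative choices,
        # not the exact quantities used in the paper.
        diff = b_full - b_sub
        k = max(int(np.sum(b_full != 0) - np.sum(b_sub != 0)), 3)
        sigma2 = np.mean((y - X @ b_full) ** 2)
        D = n * (diff @ diff) / max(sigma2, 1e-8)
        b_shrink = b_sub + max(0.0, 1.0 - (k - 2) / max(D, 1e-8)) * diff

        mse["lasso"] += np.mean((b_full - beta) ** 2) / reps
        mse["aggressive"] += np.mean((b_sub - beta) ** 2) / reps
        mse["shrinkage"] += np.mean((b_shrink - beta) ** 2) / reps
    return mse

if __name__ == "__main__":
    out = simulate()
    base = out["lasso"]
    for name, val in out.items():
        print(f"{name:10s} MSE={val:.4f}  RMSE vs Lasso={base / val:.2f}")
```

In this sketch an RMSE above one means the candidate beats the plain Lasso in estimation error; the shrinkage rule pulls the aggressive submodel back toward the larger model whenever the two disagree strongly, which is the bias-reduction mechanism the abstract describes.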