Studies in software effort estimation (SEE) have explored hyper-parameter tuning of machine learning algorithms (MLA) to improve the accuracy of effort estimates. In other contexts, random search (RS) has shown results similar to those of grid search (GS) while being less computationally expensive. In this paper, we investigate to what extent random search hyper-parameter tuning affects the accuracy and stability of support vector regression (SVR) in SEE. Results were compared to those obtained from ridge regression models and GS-tuned models. A case study with four data sets extracted from the ISBSG 2018 repository shows that random search exhibits performance similar to grid search, rendering it an attractive alternative for hyper-parameter tuning. RS-tuned SVR achieved an increase of 0.227 in standardized accuracy (SA) with respect to default hyper-parameters. In addition, random search improved the prediction stability of SVR models to a minimum ratio of 0.840. The analysis showed that RS-tuned SVR attained performance equivalent to GS-tuned SVR. Future work includes extending this research to other hyper-parameter tuning approaches and machine learning algorithms, as well as using additional data sets.
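The random search approach described above can be sketched with scikit-learn's `RandomizedSearchCV` wrapping an SVR model. This is a minimal illustration, not the paper's setup: the synthetic data stands in for the (proprietary) ISBSG repository, and the hyper-parameter distributions for `C`, `gamma`, and `epsilon` are assumed ranges, not those used in the study.

```python
import numpy as np
from scipy.stats import loguniform
from sklearn.model_selection import RandomizedSearchCV
from sklearn.svm import SVR

# Synthetic effort-estimation-style data (the real study used ISBSG 2018,
# which is proprietary); features stand in for size/complexity measures.
rng = np.random.RandomState(42)
X = rng.rand(120, 4)
y = 100 * X[:, 0] + 20 * X[:, 1] + rng.normal(0, 5, size=120)

# Illustrative hyper-parameter distributions (assumptions, not the
# ranges from the paper); log-uniform sampling is common for SVR.
param_dist = {
    "C": loguniform(1e-1, 1e3),
    "gamma": loguniform(1e-4, 1e0),
    "epsilon": loguniform(1e-3, 1e1),
}

# Random search samples n_iter configurations instead of enumerating
# a full grid, which is what makes it cheaper than grid search.
search = RandomizedSearchCV(
    SVR(kernel="rbf"),
    param_distributions=param_dist,
    n_iter=30,
    cv=3,
    scoring="neg_mean_absolute_error",
    random_state=42,
)
search.fit(X, y)
print(search.best_params_)
```

Swapping `RandomizedSearchCV` for `GridSearchCV` (with a `param_grid` of discrete values) yields the grid search baseline the paper compares against.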
Publication type: Conference Paper
Published in: Proceedings of the 16th ACM International Conference on Predictive Models and Data Analytics in Software Engineering
Authors:
- Leonardo Villalobos-Arias
- Christian Quesada-López
- Jose Guevara-Coto
- Alexandra Martinez
- Marcelo Jenkins
Project associated with the publication:
Empirical evaluation of a methodology for automating the measurement of software functional size.