A procedure to continuously evaluate predictive performance of just-in-time software defect prediction models during software development

Liyan Song, Leandro Minku

Research output: Contribution to journal › Article › peer-review


Abstract

Just-In-Time Software Defect Prediction (JIT-SDP) uses machine learning to predict whether software changes are defect-inducing or clean. When adopting JIT-SDP, changes in the underlying defect generating process may significantly affect the predictive performance of JIT-SDP models over time. Therefore, being able to continuously track the predictive performance of JIT-SDP models during the software development process is of utmost importance for software companies to decide whether or not to trust the predictions provided by such models over time. However, there has been little discussion on how to continuously evaluate predictive performance in practice, and such evaluation is not straightforward. In particular, labeled software changes that can be used for evaluation arrive over time with a delay, which in part corresponds to the time we have to wait to label software changes as ‘clean’ (waiting time). A clean label assigned based on a given waiting time may not correspond to the true label of the software change. This can potentially hinder the validity of any continuous predictive performance evaluation procedure for JIT-SDP models. This paper provides the first discussion of how to continuously evaluate the predictive performance of JIT-SDP models over time during the software development process, and the first investigation of whether and to what extent waiting time affects the validity of such a continuous performance evaluation procedure in JIT-SDP. Based on 13 GitHub projects, we found that waiting time had a significant impact on this validity. Though typically small, the differences in estimated predictive performance were sometimes large, and thus inappropriate choices of waiting time can lead to misleading estimations of predictive performance over time. Such impact did not normally change the ranking between JIT-SDP models, and thus conclusions in terms of which JIT-SDP model performs better are likely reliable independent of the choice of waiting time, especially when considered across projects.
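To make the waiting-time idea concrete, below is a minimal Python sketch of how a change could be labeled for evaluation only after a chosen waiting time, and how those delayed (and possibly noisy) labels could feed a continuous performance estimate. This is not the paper's exact procedure: the Change fields, the accuracy metric, and the fading factor are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical representation of a software change; field names are
# illustrative, not taken from the paper.
@dataclass
class Change:
    commit_time: float                    # when the change was committed
    defect_found_time: Optional[float]    # when an induced defect was found, or None
    predicted_defective: bool             # JIT-SDP model's prediction for this change

def observed_label(change: Change, current_time: float, waiting_time: float):
    """Label available for evaluation at `current_time`, or None if not yet known.

    A change is labeled defect-inducing as soon as an induced defect is found.
    Otherwise, once `waiting_time` has elapsed since the commit, it is
    provisionally labeled clean; this provisional label can be wrong if a
    defect is found later, which is the source of label noise discussed above.
    """
    if change.defect_found_time is not None and change.defect_found_time <= current_time:
        return True   # defect-inducing
    if current_time - change.commit_time >= waiting_time:
        return False  # provisionally clean
    return None       # not yet available for evaluation

def continuous_accuracy(changes, eval_times, waiting_time, fading_factor=0.99):
    """Track accuracy over time using only the labels available at each
    evaluation point, with a fading factor so recent changes weigh more
    (one common choice for continuous evaluation, assumed here)."""
    history = []
    seen = set()
    correct = total = 0.0
    for t in eval_times:
        for i, c in enumerate(changes):
            if i in seen:
                continue
            label = observed_label(c, t, waiting_time)
            if label is None:
                continue
            seen.add(i)
            correct = fading_factor * correct + (c.predicted_defective == label)
            total = fading_factor * total + 1
        history.append((t, correct / total if total else None))
    return history

if __name__ == "__main__":
    changes = [
        Change(commit_time=0, defect_found_time=40, predicted_defective=True),
        Change(commit_time=5, defect_found_time=None, predicted_defective=False),
        Change(commit_time=10, defect_found_time=None, predicted_defective=True),
    ]
    # With a waiting time of 30 time units, the second and third changes are
    # labeled clean only from day 40 onwards; a different waiting time would
    # change when (and how reliably) they enter the evaluation.
    for t, acc in continuous_accuracy(changes, eval_times=[20, 40, 60], waiting_time=30):
        print(f"t={t}: accuracy={acc}")
```

Varying `waiting_time` in such a sketch changes both which changes are available for evaluation at a given time and whether their provisional clean labels match the eventual true labels, which is the effect the study quantifies.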
Original language: English
Journal: IEEE Transactions on Software Engineering
Early online date: 15 Mar 2022
DOIs
Publication status: E-pub ahead of print - 15 Mar 2022

Keywords

  • Delays
  • Estimation
  • Performance evaluation
  • Predictive models
  • Software
  • Software reliability
  • Training

ASJC Scopus subject areas

  • Software
