IEICE Transactions on Information and Systems
Online ISSN : 1745-1361
Print ISSN : 0916-8532


The Impact of Defect (Re) Prediction on Software Testing
Yukasa MURAKAMI, Yuta YAMASAKI, Masateru TSUNODA, Akito MONDEN, Amjed TAHIR, Kwabena Ebo BENNIN, Koji TODA, Keitaro NAKASAI
Advance online publication (free access)

Article ID: 2024MPL0002

Abstract

Cross-project defect prediction (CPDP) uses data from external projects when historical data from the same project are not available. In CPDP, deciding which historical project to use to build a training model can be difficult. To support this decision, prior research proposed an approach based on a bandit algorithm (BA) to select the most suitable training project. However, the BA method can select unsuitable data during its early iterations (i.e., the early stage of software testing). Selecting an unsuitable model lowers prediction accuracy and can cause defects to be overlooked. This study improves the BA method to reduce the risk of overlooking defects, especially during the early testing stages. Once all modules have been tested, the modules tested in the early stage are re-predicted, and some modules are retested based on the re-prediction. To assess the impact of re-prediction and retesting, we applied five BA methods, using 8, 16, and 32 OSS projects as training data. The results show that the proposed approach steadily reduced the probability of overlooking defects without degrading prediction accuracy.
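To make the setting concrete, below is a minimal sketch of a BA-based training-project selection loop with the re-prediction step described above. It uses epsilon-greedy as one simple bandit variant; the abstract does not name the five BA methods the authors evaluated, and the synthetic data, the reward definition, and the "early stage" cutoff are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# --- Illustrative synthetic data (stand-ins for OSS project metrics) ---
# Each candidate training project is one "arm" of the bandit.
def make_project(n, shift):
    X = rng.normal(shift, 1.0, size=(n, 4))                 # module metrics
    y = (X.sum(axis=1) + rng.normal(0, 1, n) > shift * 4).astype(int)
    return X, y

candidate_projects = [make_project(200, s) for s in (0.0, 0.5, 1.0)]
X_test, y_test = make_project(100, 0.2)                     # project under test

# --- Epsilon-greedy bandit over candidate training projects ---
epsilon = 0.1
counts = np.zeros(len(candidate_projects))
values = np.zeros(len(candidate_projects))                  # mean reward per arm
models = [LogisticRegression(max_iter=1000).fit(X, y)
          for X, y in candidate_projects]

predictions = []
for i in range(len(X_test)):
    if rng.random() < epsilon:
        arm = int(rng.integers(len(models)))                # explore
    else:
        arm = int(np.argmax(values))                        # exploit
    pred = int(models[arm].predict(X_test[i:i + 1])[0])
    predictions.append(pred)
    # Reward (assumed): 1 if the prediction matched the outcome revealed
    # by testing this module, 0 otherwise.
    reward = int(pred == y_test[i])
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]

# --- Re-prediction of early-stage modules (the extension studied here) ---
# Once every module has been tested, re-predict the modules handled in the
# early iterations with the arm that ended up best, and flag disagreements
# with the original predictions as candidates for retesting.
best = int(np.argmax(values))
early = len(X_test) // 4                                    # assumed cutoff
re_pred = models[best].predict(X_test[:early])
retest = [i for i in range(early) if re_pred[i] != predictions[i]]
print(f"best arm: {best}, modules flagged for retesting: {retest}")
```

The intuition the sketch captures: early predictions are made before the bandit has learned which training project fits, so revisiting them with the eventually-selected model is what lets retesting catch defects that the early, poorly-informed model may have missed.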

© 2024 The Institute of Electronics, Information and Communication Engineers