Multi-objective test case selection techniques are widely investigated with the goal of devising novel solutions that increase the cost-effectiveness of verification processes. When evaluating such approaches, the entire Pareto frontier produced by the algorithm needs to be considered, and several quality indicators exist for this purpose. The \textit{hypervolume} (HV) is one of the most well-known and widely applied quality indicators. However, in the context of test case selection, this metric has certain limitations. For instance, two different fitness function combinations are not comparable if the metric is applied at the level of the search algorithm's objective functions. Consequently, researchers proposed the revisited HV ($rHV$) indicator. To compute the $rHV$, each solution of the search algorithm is individually assessed through two external utility functions: the cost and the fault detection capability (FDC). However, this increases the risk of obtaining dominated solutions, which in practice may lead a decision maker (DM) to select such a dominated solution. In this paper, we assess whether the $rHV$ is an appropriate quality indicator for evaluating multi-objective test case selection algorithms. To do so, we empirically assess whether the results of the $rHV$ are consistent with the FDC of the different DM instances. In short, our results indicate that the $rHV$ is an appropriate quality indicator.
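To make the underlying indicator concrete, the following is a minimal sketch of a two-dimensional hypervolume computation, as used when each solution is mapped to the two external utility functions. It assumes both objectives are to be minimized (e.g., cost and $1 - \mathrm{FDC}$), and the function name, point representation, and reference point are illustrative choices, not the paper's implementation:

```python
def hypervolume_2d(front, ref):
    """Hypervolume of a 2-D front (minimization) w.r.t. reference point `ref`.

    `front` is a list of (f1, f2) objective vectors, e.g. (cost, 1 - FDC).
    Dominated points contribute nothing: they are filtered out first, which
    mirrors the concern that dominated solutions inflate nothing in HV terms.
    """
    # Sort by f1 ascending (ties broken by f2 ascending), drop duplicates.
    pts = sorted(set(front))
    # Keep only non-dominated points: f2 must strictly improve along the sweep.
    nondominated = []
    best_f2 = float("inf")
    for f1, f2 in pts:
        if f2 < best_f2:
            nondominated.append((f1, f2))
            best_f2 = f2
    # Sum the rectangular slices between consecutive non-dominated points.
    hv = 0.0
    prev_f2 = ref[1]
    for f1, f2 in nondominated:
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv
```

For example, the front `[(0, 1), (0.5, 0.5), (1, 0)]` with reference point `(2, 2)` yields a hypervolume of 3.25, and adding a dominated point such as `(1, 2)` leaves the value unchanged.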