Evaluating Testing Methods by Delivered Reliability

By Lorenzo Strigini, Bev Littlewood, Phyllis Frankl, and Dick Hamlet; IEEE Transactions on Software Engineering, vol. SE-24, no. 8, pp. 586-601, 1998.

ABSTRACT

This paper examines the relationship between the two testing goals, finding defects and assessing reliability, using a probabilistic analysis. We define simple models of programs and their testing, and try to answer theoretically the question of how to attain program reliability: Is it better to test by probing for defects as in debug testing, or to assess reliability directly as in operational testing, uncovering defects by accident, so to speak? There is no simple answer, of course. Testing methods are compared in a model where program failures are detected and the software changed to eliminate them. The "better" method delivers higher reliability after all test failures have been eliminated. This comparison extends previous work, where the measure was the probability of detecting a failure. Revealing special cases are exhibited in which each kind of testing is superior. Preliminary analysis of the distribution of the delivered reliability indicates that even simple models have unusual statistical properties, suggesting caution in interpreting theoretical comparisons.

The full text of this paper is available in .pdf and .ps format.
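To illustrate the kind of comparison the abstract describes, here is a minimal Monte Carlo sketch. It is not the paper's model: the defect failure rates, detection probabilities, and test counts are all invented for illustration. It contrasts operational testing, where a defect is found with probability proportional to its failure rate, against a caricature of debug testing, where probes find remaining defects at a fixed rate regardless of how often they would fail in operation. The "delivered reliability" measure is the residual per-demand failure probability after the found defects are removed.

```python
import random

# Hypothetical program: four defects, each with its own per-demand
# failure probability (values chosen for illustration only).
RATES = [0.10, 0.02, 0.005, 0.001]

def operational_test(rates, n_tests, rng):
    """Operational testing: on each test, defect i is triggered (and then
    fixed) with probability rates[i]. Returns residual failure probability."""
    remaining = list(rates)
    for _ in range(n_tests):
        for r in list(remaining):
            if rng.random() < r:
                remaining.remove(r)
    return sum(remaining)

def debug_test(rates, n_probes, rng, p_find=0.2):
    """Debug-testing caricature: each probe finds one randomly chosen
    remaining defect with fixed probability p_find, independent of its
    failure rate. Returns residual failure probability."""
    remaining = list(rates)
    for _ in range(n_probes):
        if remaining and rng.random() < p_find:
            remaining.remove(rng.choice(remaining))
    return sum(remaining)

rng = random.Random(0)
trials = 2000
op = sum(operational_test(RATES, 20, rng) for _ in range(trials)) / trials
dbg = sum(debug_test(RATES, 20, rng) for _ in range(trials)) / trials
print(f"mean residual failure prob, operational: {op:.4f}")
print(f"mean residual failure prob, debug:       {dbg:.4f}")
```

Which method wins depends on the chosen rates and detection probabilities, which is consistent with the abstract's point that special cases favour each kind of testing: operational testing removes high-rate defects quickly but rarely finds low-rate ones, while the debug caricature treats all defects alike.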
The documents distributed by this server have been provided by the contributing authors as a means to ensure timely dissemination of scholarly and technical work on a noncommercial basis. Copyright and all rights therein are maintained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.
Page maintained by: Lorenzo Strigini