
How to compare and exploit different techniques for unit-test generation

Paolo Ciancarini; Davide Rossi
2009

Abstract

The size and complexity of software is continuously growing, and testing is one of the most important strategies for improving software reliability, quality, and design. Unit testing, in particular, forms the foundation of the testing process, and it is effectively supported by automated testing frameworks. Manual unit-test creation is difficult, monotonous, and time-consuming. To reduce the effort spent on this task, several tools have been developed; many of them can almost automatically produce unit tests for regression avoidance or failure detection.

This paper presents a practical comparison methodology for analyzing different unit-test creation tools and techniques. It validates the effectiveness of the tools and identifies their strengths and weaknesses. The validity of this methodology is confirmed through a real-case experiment in which both manual implementation and different automatic test generation tools (based on random testing) are used. In addition, to integrate and exploit the benefits of each technique, as revealed by the comparison process, a testing procedure based on "best practices" is developed.
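The abstract refers to automatic test generation tools based on random testing. As a minimal sketch of that idea only (in Python for illustration; the function under test, `sorted_copy`, and the oracles are hypothetical examples, not taken from the paper), a random tester repeatedly feeds randomly generated inputs to the code under test and checks simple invariants on the output:

```python
import random

def sorted_copy(xs):
    # Hypothetical function under test.
    return sorted(xs)

def random_test(n_cases=100, seed=0):
    """Minimal random-testing loop: generate random inputs and check
    simple oracles (invariants) on each output. Returns the number of
    cases executed; raises AssertionError on the first failure."""
    rng = random.Random(seed)
    for _ in range(n_cases):
        xs = [rng.randint(-1000, 1000) for _ in range(rng.randint(0, 20))]
        ys = sorted_copy(xs)
        # Oracle 1: the output is a permutation of the input.
        assert sorted(xs) == ys
        # Oracle 2: the output is non-decreasing.
        assert all(a <= b for a, b in zip(ys, ys[1:]))
    return n_cases

print(random_test())  # prints 100 when all generated cases pass
```

Production tools add much more on top of this loop (sequence generation for object-oriented code, regression-assertion capture, test minimization), but the generate-and-check core is the same.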
A. Bacchelli; P. Ciancarini; D. Rossi

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: http://hdl.handle.net/11585/79215
