Uniform Title: Change impact analysis for Java programs and applications
Name: Ren, Xiaoxia (author), Ryder, Barbara (chair), Borgida, Alexander (internal member), Kremer, Ulrich (internal member), Elbaum, Sebastian (outside member), Rutgers University, Graduate School - New Brunswick
Subject: Java (Computer program language)
Description: Small changes can have major and nonlocal effects in object-oriented languages, due to the extensive use of subtyping and dynamic dispatch. This makes it difficult to understand value flow through a program and complicates life for maintenance programmers. Change impact analysis provides feedback on the semantic impact of a set of program changes.
The change impact analysis method presented in this thesis presumes the existence of a suite of regression tests associated with a Java program and access to the original and edited versions of the code. The primary goal of our research is to provide programmers with tool support that can help them understand why a test is suddenly failing after a long editing session by isolating the changes responsible for the failure. The tool analyzes two versions of an application and decomposes their difference into a set of atomic changes. Change impact is then reported in terms of affected tests whose execution behavior may have been modified by the applied changes. For each affected test, it also determines a set of affecting changes that were responsible for the test's modified behavior.
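The pipeline described above can be illustrated with a minimal sketch. This is not Chianti's actual implementation or API: the types and the coverage-based test selection below are hypothetical simplifications. The idea shown is only that a test is "affected" when the methods it covered in the original version intersect the methods touched by the atomic changes, and that its "affecting changes" are exactly those intersecting changes.

```java
import java.util.*;

// Illustrative sketch only; hypothetical types, not Chianti's actual API.
public class ImpactSketch {

    // One atomic change, identified by the method it touches, e.g. "Calc.add()".
    record AtomicChange(String kind, String method) {}

    // A regression test together with the methods its original run covered.
    record Test(String name, Set<String> coveredMethods) {}

    // Maps each affected test to the atomic changes responsible for its
    // potentially modified behavior; unaffected tests are omitted.
    static Map<String, List<AtomicChange>> affectedTests(
            List<Test> suite, List<AtomicChange> changes) {
        Map<String, List<AtomicChange>> result = new LinkedHashMap<>();
        for (Test t : suite) {
            List<AtomicChange> affecting = new ArrayList<>();
            for (AtomicChange c : changes) {
                if (t.coveredMethods().contains(c.method())) {
                    affecting.add(c);
                }
            }
            if (!affecting.isEmpty()) {
                result.put(t.name(), affecting);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<Test> suite = List.of(
            new Test("testAdd", Set.of("Calc.add()", "Calc.check()")),
            new Test("testMul", Set.of("Calc.mul()")));
        List<AtomicChange> changes = List.of(
            new AtomicChange("CM", "Calc.add()"));  // CM = changed method body
        // Only testAdd covers a changed method, so only it is reported.
        System.out.println(affectedTests(suite, changes).keySet()); // [testAdd]
    }
}
```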
The first contribution of this thesis is a demonstration of the utility of the basic change impact analysis framework, achieved by implementing a proof-of-concept prototype, Chianti, and applying it to Daikon for an experimental validation.
The second contribution is the definition and implementation of dependences between atomic changes. Extensive experiments show that these dependences allow the intermediate programs to be built automatically in most cases.
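One way to picture the role of these dependences is as prerequisite edges: if change X depends on change Y, then Y must be applied before X, so building an intermediate program for a chosen subset of changes means closing that subset under its prerequisites and applying it in a dependence-respecting order. The sketch below is a hedged illustration under that assumption, with a hypothetical string representation of changes, not the thesis's implementation.

```java
import java.util.*;

// Illustrative sketch: dependence-closed, topologically ordered application
// of atomic changes (hypothetical representation, not the thesis's code).
public class ChangeOrdering {

    // prereqs.get(x) = changes that must be applied before change x.
    static List<String> applicationOrder(Set<String> wanted,
                                         Map<String, Set<String>> prereqs) {
        List<String> order = new ArrayList<>();
        Set<String> done = new HashSet<>();
        for (String c : wanted) visit(c, prereqs, done, order);
        return order;
    }

    // Depth-first post-order: prerequisites are emitted before dependents.
    private static void visit(String c, Map<String, Set<String>> prereqs,
                              Set<String> done, List<String> order) {
        if (!done.add(c)) return;
        for (String p : prereqs.getOrDefault(c, Set.of())) {
            visit(p, prereqs, done, order);
        }
        order.add(c);
    }

    public static void main(String[] args) {
        // Example: changing method B.m's body presupposes adding B.m,
        // which in turn presupposes adding class B.
        Map<String, Set<String>> prereqs = Map.of(
            "CM(B.m)", Set.of("AM(B.m)"),
            "AM(B.m)", Set.of("AC(B)"));
        System.out.println(applicationOrder(Set.of("CM(B.m)"), prereqs));
        // prints [AC(B), AM(B.m), CM(B.m)]
    }
}
```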
Another contribution is a heuristic for ranking atomic changes for fault localization. The thesis proposes a heuristic that ranks the method changes that might have affected a failed test by the likelihood that they contributed to the failure. Our results indicate that when a failure is caused by a single method change, the heuristic ranked the failure-inducing change first or second among all the method changes for 67% of the delegate tests (i.e., representatives of all failing tests).
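To make the shape of such a ranking concrete, here is a sketch of one common style of suspiciousness scoring. This is explicitly not the thesis's heuristic, whose definition is not given in this record: the score below is an assumed stand-in that rates a method change higher when it is exercised by many failing affected tests and few passing ones.

```java
import java.util.*;

// Illustrative stand-in only; NOT the heuristic defined in the thesis.
public class RankSketch {

    // failCount/passCount: how many failing/passing tests exercise each change.
    // Returns the candidate changes, most suspicious first.
    static List<String> rank(Map<String, Integer> failCount,
                             Map<String, Integer> passCount) {
        List<String> changes = new ArrayList<>(failCount.keySet());
        changes.sort(Comparator.comparingDouble((String c) ->
            -score(failCount.getOrDefault(c, 0),
                   passCount.getOrDefault(c, 0))));
        return changes;
    }

    // Higher when exercised by many failing tests and few passing ones;
    // the +1 avoids division by zero for unexercised changes.
    static double score(int fails, int passes) {
        return fails / (double) (fails + passes + 1);
    }

    public static void main(String[] args) {
        Map<String, Integer> fails = Map.of("CM(A.f)", 3, "CM(B.g)", 1);
        Map<String, Integer> passes = Map.of("CM(A.f)", 0, "CM(B.g)", 5);
        // CM(A.f) is exercised only by failing tests, so it ranks first.
        System.out.println(rank(fails, passes)); // [CM(A.f), CM(B.g)]
    }
}
```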
Note: Includes bibliographical references (p. 109-114).
Collection: Graduate School - New Brunswick Electronic Theses and Dissertations
Organization Name: Rutgers, The State University of New Jersey
Rights: The author owns the copyright to this work.