        Symbolic Inference of Partial Specifications to Augment Regression Testing

        Publication date
        2017
        Author
        Wermer, K.
        Summary
        A large part of the costs of software (50-80%) is due to maintenance, and a substantial share of these costs comes from retesting the software to discover newly introduced bugs (regression testing). We experiment with an approach that may reduce the cost of regression testing by detecting some of these bugs fully automatically: we generate partial specifications in the form of Hoare triples for both versions of the program (the old version and the new one) and compare them to each other. We use a static approach to generate these Hoare triples from source code. We chose Java as the target language for our analysis because it is widely used and because many of Java's language constructs pose practical challenges that theoretical languages, such as the language considered by Hoare, do not. Since regression testing is a practical problem, it is only natural to perform our experiments on a practical language. Regression testing, and testing in general, never fully guarantees that the software does not contain errors, so any additional method for finding errors may detect bugs that handwritten tests miss. Our program may therefore increase the number of errors detected without requiring additional work from the programmer.

        The research question we try to answer is: "Can we detect a decent number of introduced mistakes, without generating too many false positives, by comparing partial specifications of the original program to those of the updated version?" The terms "decent" and "too many" are defined by comparing our approach to the tool Daikon. In doing so, we also find an answer to the question: "Can our approach be used in combination with Daikon to achieve better results?"
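        The sketch below illustrates the comparison idea described in the summary: partial specifications (Hoare triples) inferred for the old and the new version of a program are matched by method, and any method whose specification changed is flagged as a candidate regression. This is a minimal, hypothetical Java sketch; all class and method names (SpecDiff, HoareTriple, changedSpecifications) are illustrative assumptions and are not taken from the thesis.

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        public class SpecDiff {

            /** A partial specification {P} method {Q}, with P and Q kept as formula strings. */
            record HoareTriple(String precondition, String methodSignature, String postcondition) {}

            /**
             * Compares triples inferred from the original program with those inferred from the
             * updated version and returns the signatures whose specification changed
             * (candidate regressions to inspect or retest).
             */
            static List<String> changedSpecifications(List<HoareTriple> oldSpecs,
                                                      List<HoareTriple> newSpecs) {
                Map<String, HoareTriple> oldBySignature = new HashMap<>();
                for (HoareTriple t : oldSpecs) {
                    oldBySignature.put(t.methodSignature(), t);
                }

                List<String> suspects = new ArrayList<>();
                for (HoareTriple updated : newSpecs) {
                    HoareTriple original = oldBySignature.get(updated.methodSignature());
                    // A method that is new, or whose inferred pre-/postcondition differs, is
                    // flagged; whether the change is intended or an introduced bug is left to
                    // the developer (or to further filtering, e.g. against Daikon's output).
                    if (original == null || !original.equals(updated)) {
                        suspects.add(updated.methodSignature());
                    }
                }
                return suspects;
            }

            public static void main(String[] args) {
                List<HoareTriple> oldSpecs = List.of(
                    new HoareTriple("x >= 0", "int abs(int x)", "result == x"));
                List<HoareTriple> newSpecs = List.of(
                    new HoareTriple("true", "int abs(int x)", "result >= 0"));
                // Prints [int abs(int x)], since the inferred specification changed.
                System.out.println(changedSpecifications(oldSpecs, newSpecs));
            }
        }

        In the thesis's setting, the triples would be produced by static analysis of the source code rather than written by hand, and the reported differences serve the same role as failing regression tests: pointers to behaviour that changed between versions.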
        URI
        https://studenttheses.uu.nl/handle/20.500.12932/25870
        Collections
        • Theses