IBM Research & CRI

Haifa Verification Conference 2006

IBM Haifa Labs


IBM Software Testing Track 2006
October 25, 2006
Organized by IBM Research Lab in Haifa, Israel


ExpliSAT: Guiding SAT-Based Software Verification with Explicit States

Sharon Barner, Cindy Eisner, Ziv Glazberg, Daniel Kroening, and Ishai Rabinovitz, IBM Haifa Research Lab

We present a hybrid method for software model checking that combines explicit-state and symbolic techniques. Our method traverses the control flow graph of the program explicitly, and encodes the data values in a CNF formula, which we solve using a SAT solver. To avoid traversing control flow paths that do not correspond to a valid execution of the program, we introduce the idea of a representative of a control path.

We present favorable experimental results, which show that our method scales well both with regard to the non-deterministic data and the number of threads.
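As an illustration only (the toy program, branch conditions, and brute-force "solver" below are ours, not the authors'), a control path can be viewed as a truth assignment to the program's branch conditions, and the path is worth exploring exactly when some concrete input, its representative, satisfies that assignment:

```python
from itertools import product

# Toy program with two branches over a nondeterministic byte x:
#   branch 1: x > 10      branch 2: x < 5
# A control path fixes, for each branch, whether it is taken.
CONDITIONS = [lambda x: x > 10, lambda x: x < 5]
PATHS = list(product([True, False], repeat=2))

def representative(path, domain=range(256)):
    """Return a concrete input witnessing the control path, or None.
    Brute force over a tiny domain stands in for the SAT solver that
    would decide the path constraint in the real method."""
    for x in domain:
        if all(cond(x) == taken for cond, taken in zip(CONDITIONS, path)):
            return x
    return None

# The path taking both branches (x > 10 and x < 5) has no representative,
# so explicit traversal can prune it; the other three paths are feasible.
feasible = [p for p in PATHS if representative(p) is not None]
```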

Evolutionary Testing: A Case Study

Stella Levin and Amiram Yehudai, Tel Aviv University

This paper presents a case study of applying genetic algorithms (GAs) to the automatic test data generation problem. It explains the basic techniques implemented in our prototype test generation system, whose goal is to achieve branch coverage of the program under test. We experiment with simple programs, programs commonly used as benchmarks for test strategies, and the UNIX utility uniq. The effectiveness of the GA-based testing system is compared with that of a random testing system. We found that both systems perform well on simple programs, but as the complexity of the program or of its input domain grows, the GA-based system significantly outperforms random testing.
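To illustrate the general flavor of evolutionary test generation (this is not the authors' system), the mutation-only sketch below searches for a test datum covering a hard-to-hit branch using the classic branch-distance fitness; the program under test, operators, and parameters are all hypothetical:

```python
import random

def program_under_test(x, y):
    if x == y:                  # hard-to-hit target branch
        return "target"
    return "other"

def branch_distance(x, y):
    """Branch-distance fitness for the predicate x == y:
    0 when the target branch is taken, growing with how far the
    input is from making the predicate true."""
    return abs(x - y)

def search(seed=0, pop_size=30, generations=500):
    """Evolve a population of (x, y) inputs toward the target branch."""
    rng = random.Random(seed)
    pop = [(rng.randint(-50, 50), rng.randint(-50, 50))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: branch_distance(*ind))
        if branch_distance(*pop[0]) == 0:
            return pop[0]               # covering test datum found
        parents = pop[: pop_size // 2]  # elitist selection
        pop = list(parents)
        while len(pop) < pop_size:
            x, y = rng.choice(parents)  # mutate a surviving parent
            pop.append((x + rng.randint(-3, 3), y + rng.randint(-3, 3)))
    return None
```

Random testing, by contrast, would have to stumble on x == y by chance, which is exactly the kind of narrow input-domain condition where the guided search pays off.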

Invited speaker: Dynamic Analysis for Self-Healing Software

Mauro Pezzè, Università degli Studi di Milano

Dynamic analysis of software programs can identify subtle faults and derive useful information about the runtime behavior of software systems, but it impacts software performance. Dynamic analysis techniques have therefore traditionally been applied during testing; however, the increasing availability of processing resources at affordable cost opens new opportunities. In this talk, we suggest how dynamic analysis can be used effectively to design self-healing software.

We first summarize the main ideas behind dynamic analysis and describe some recent results in the field. We then motivate the need for self-healing software, and discuss the problems related to designing efficient self-healing mechanisms. We conclude by illustrating how dynamic analysis techniques can support self-diagnosis of software faults to trigger self-repairing mechanisms.

Invited speaker: Transactional Locking

Nir Shavit, Tel Aviv University and Sun Labs

There has been a flurry of recent work on the design of high performance software and hybrid hardware/software transactional memories (STMs and HyTMs). This talk reexamines the design decisions behind several of these state-of-the-art algorithms, adopting some ideas and rejecting others, in an attempt to make STMs faster.

The results of our evaluation led us to develop the Transactional Locking approach to STM design, and in particular the Transactional Locking II (TL2) algorithm, which we believe to be the simplest and overall best-performing STM/HyTM to date. Unlike prior STMs, it operates only on coherent memory states, and it combines seamlessly with hardware transactions and with any system's memory life-cycle, making it an ideal candidate for multi-language deployment today, long before hardware transactional support becomes commonly available.
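The versioned-lock idea behind this family of algorithms can be sketched as follows. This is a simplified single-threaded illustration with hypothetical names, not the TL2 algorithm itself (which, among other things, increments the global clock atomically and handles contention and retry policies):

```python
import threading

GLOBAL_CLOCK = [0]       # global version clock, central to this design

class Retry(Exception):
    """Raised when a transaction observes a possibly inconsistent state."""

class VLock:
    """A versioned write-lock guarding one transactional location."""
    def __init__(self, value):
        self.value, self.version = value, 0
        self.lock = threading.Lock()

class Transaction:
    def __init__(self):
        self.rv = GLOBAL_CLOCK[0]       # read-version sampled at start
        self.reads, self.writes = [], {}

    def read(self, loc):
        if loc in self.writes:          # read-your-own-write
            return self.writes[loc]
        value = loc.value
        if loc.version > self.rv or loc.lock.locked():
            raise Retry()               # changed since the transaction began
        self.reads.append(loc)
        return value

    def write(self, loc, value):
        self.writes[loc] = value        # buffered until commit

    def commit(self):
        locked = []
        try:
            for loc in self.writes:     # lock the write set
                if not loc.lock.acquire(blocking=False):
                    raise Retry()
                locked.append(loc)
            GLOBAL_CLOCK[0] += 1        # would be an atomic increment
            wv = GLOBAL_CLOCK[0]
            for loc in self.reads:      # validate the read set
                if loc.version > self.rv:
                    raise Retry()
            for loc, value in self.writes.items():
                loc.value, loc.version = value, wv
        finally:
            for loc in locked:
                loc.lock.release()

# Move 100 from account a to account b in one atomic transaction.
a, b = VLock(100), VLock(0)
t = Transaction()
t.write(b, t.read(b) + t.read(a))
t.write(a, 0)
t.commit()
```

Because reads validate against a version clock rather than logging values, a doomed transaction never acts on an inconsistent memory snapshot, which is the "coherent memory states" property mentioned above.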

A Race-Detection and Flipping Algorithm for Automated Testing of Multi-Threaded Programs

Koushik Sen, University of California, Berkeley, and Gul Agha, University of Illinois at Urbana-Champaign

Testing concurrent programs that accept data input is notoriously hard because, besides the large number of possible data inputs, non-determinism results in an exponentially large number of interleavings of concurrent events. To test shared-memory multi-threaded programs efficiently, we develop an algorithm based on race-detection and flipping and illustrate how it can be combined with concolic execution (a simultaneous symbolic and concrete execution method) to test multi-threaded programs with data input.

The goal of our algorithm is to minimize redundant executions while ensuring that all reachable statements in a program are executed. To achieve this, our algorithm explores all distinct causal structures of a multi-threaded program (i.e., the partial order among events generated during an execution). Because our algorithm is based on race-detection, it enables us to report potential data races and deadlocks. We have implemented our algorithm in a tool called jCUTE. We describe the results of applying jCUTE to real-world multi-threaded Java applications and libraries. In particular, we discovered several undocumented potential concurrency-related bugs in the widely used Java collection framework distributed with Sun Microsystems JDK 1.4.
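A minimal sketch of the race-detection half of such an approach (the flipping and concolic-execution machinery is not shown, and the trace format is our own) uses vector clocks to flag unordered conflicting accesses in a recorded execution:

```python
from collections import defaultdict

def find_races(trace, n_threads):
    """Detect unordered conflicting accesses in a recorded trace.
    Events are (op, thread, object): 'acq'/'rel' on locks, 'rd'/'wr'
    on shared variables."""
    vc = [[0] * n_threads for _ in range(n_threads)]   # per-thread vector clocks
    lock_vc = defaultdict(lambda: [0] * n_threads)     # clock carried by each lock
    last_access = defaultdict(list)                    # var -> (snapshot, op, thread)
    races = []
    for op, t, obj in trace:
        vc[t][t] += 1
        if op == 'acq':     # acquiring a lock imports its release-time clock
            vc[t] = [max(a, b) for a, b in zip(vc[t], lock_vc[obj])]
        elif op == 'rel':   # releasing a lock exports our current clock
            lock_vc[obj] = list(vc[t])
        else:
            for prev_vc, prev_op, prev_t in last_access[obj]:
                ordered = all(p <= c for p, c in zip(prev_vc, vc[t]))
                if not ordered and prev_t != t and 'wr' in (op, prev_op):
                    races.append((obj, prev_t, t))
            last_access[obj].append((list(vc[t]), op, t))
    return races

# T0 writes x without holding lock L; T1 writes x under L: a race on x.
trace = [
    ('wr', 0, 'x'),
    ('acq', 1, 'L'), ('wr', 1, 'x'), ('rel', 1, 'L'),
]
```

In the full algorithm, each detected race becomes a candidate for "flipping": the execution is replayed with the two racing events reordered, steering the search toward a distinct causal structure rather than a redundant interleaving.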

Debugging Concurrent Java Programs by Minimizing Scheduling Noise

Yaniv Eytani and Timo Latvala, University of Illinois at Urbana-Champaign

A noise maker is a tool for testing multi-threaded programs. It seeds shared-memory accesses and synchronization events (concurrent events) with conditional context switches and timeouts at run-time, to increase the likelihood that a concurrent bug manifests. However, an instrumented program with many seeded events may not be useful for debugging: when events are seeded all over the source code, they provide almost no information about the bug. We argue that many bug patterns have a small set of critical events associated with them. Based on this observation, namely that bugs involve only a few relevant context switches, we develop a randomized algorithm that reduces the scheduling noise and discovers the events related to the bug. To evaluate the effectiveness of this approach, we experiment with debugging industrial code, well-known open-source software, and programs representing known concurrent bugs. Our results demonstrate that this simple technique is in many cases very powerful and significantly helps the user locate and understand a concurrent bug.
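The reduction idea can be sketched as a randomized search over subsets of seeded locations; the oracle below is a stand-in for actually re-running the instrumented program, and all names and parameters are hypothetical:

```python
import random

def reduce_noise(locations, manifests, seed=0, rounds=200):
    """Randomly shrink the set of seeded noise locations while the bug
    still manifests; the surviving locations approximate the critical
    events associated with the bug."""
    rng = random.Random(seed)
    current = set(locations)
    for _ in range(rounds):
        if len(current) <= 1:
            break
        # try a random subset of the currently seeded locations
        candidate = {loc for loc in current if rng.random() < 0.5}
        if candidate != current and manifests(candidate):
            current = candidate
    return current

# Stand-in oracle: a "re-run" says the bug manifests only while context
# switches are still seeded at the two critical locations, 3 and 7.
CRITICAL = {3, 7}
result = reduce_noise(range(20), lambda noise: CRITICAL <= noise)
```

Starting from 20 seeded locations, the search converges on the two that matter, turning an undifferentiated noise heatmap into a pointer at the relevant context switches.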

Keynote: Testing the Machine in the World

Michael Jackson, University of Newcastle

Software describes a machine needed to solve some problem by interacting with its environment, that is, with the world where the problem is located. In a realistic system, the machine is an assemblage of the solutions of many sub-problems, each sub-problem interacting with those parts of the problem world that are relevant to its particular functionality. Devising sub-problem solutions and composing them into a coherent system can be understood as distinct concerns in software development.

This view suggests that testing of software functionality must be intimately concerned with the problem world. Unit testing may concern itself with the individual sub-problems, and integration testing with their composition and interactions. A parallel can be drawn with the apparent practice of some traditional branches of engineering, based on normal, as opposed to radical, design. A vital benefit of normal design is the slowly accumulated knowledge of the respects in which particular designs are likely to be at risk, and hence of the possible faults to be detected by testing.

Choosing a Test Modeling Language: A Survey

Alan Hartman, IBM Haifa Research Lab, and Mika Katara and Sergey Olvovsky, Tampere University of Technology

Deployment of model-based testing involves many difficulties that have slowed its industrial adoption. The leap from traditional scripted testing to model-based testing seems as hard as the move from manual to automatic test execution. Two key factors in deployment are the language used to define the test models and the language used to define the test objectives. Based on our experience, we survey the different types of languages and sketch solutions based on different approaches, considering the testing organization, the system under test, and other factors.

The types of languages we cover include, amongst others, domain-specific and test-specific, as well as generic design languages. We note that there are no best practices, but provide general guidelines for various cases.

Making Model-based Testing More Agile: a Use Case Driven Approach

Mika Katara and Antti Kervinen, Tampere University of Technology

We address the problem of misalignment between the artifacts developed in agile software development projects and those required by model-based test generation tools. Our solution is domain-specific and relies on domain experts to design the test models. Testers interface with the test generation system through use cases, which are converted into sequences of so-called action words corresponding to user events at a high level of abstraction. To support this scheme, we introduce a coverage language and an algorithm for automatic test generation.
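A minimal keyword-driven sketch of the action-word idea (the vocabulary, registry, and phone-UI domain below are hypothetical, not the authors' tooling):

```python
KEYWORDS = {}    # action word -> keyword function written by a domain expert

def action_word(name):
    """Register a keyword implementation under a high-level action word."""
    def register(fn):
        KEYWORDS[name] = fn
        return fn
    return register

log = []         # records the low-level user events we would drive

@action_word("open_contacts")
def open_contacts():
    log.append("tap Contacts icon")

@action_word("select_first_contact")
def select_first_contact():
    log.append("tap first row")

@action_word("start_call")
def start_call():
    log.append("press Call")

def run_use_case(words):
    """Execute a use case expressed as a sequence of action words."""
    for word in words:
        KEYWORDS[word]()

run_use_case(["open_contacts", "select_first_contact", "start_call"])
```

A coverage language would then state objectives over such sequences (for example, "every action word at least once"), letting a generator choose which use-case variations to execute.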


Proceedings publication: Springer Lecture Notes in Computer Science
