

HVC 2009
Haifa Verification Conference 2009

October 19-22, 2009
Organized by IBM R&D Labs in Israel



Abstracts

Invited Session - The History of Verification

Session Chair: Karen Yorav, IBM
Speakers: Andreas Zeller, University of Saarland
Janick Bergeron, Synopsys
Orna Kupferman, Hebrew University


This special session will attempt to draw a picture of the field and how it came to be. Divided into three talks, the session will track the development of the different verification domains. Andreas Zeller will talk about the development of software testing, Janick Bergeron will present the development of simulation-based verification, and Orna Kupferman will conclude with a presentation about formal verification.
This session lays the background to the "Future of Verification" panel, to be held later on the same day.


Dataflow Analysis for Properties of Aspect Systems

Yevgenia Alperin and Shmuel Katz

In this work, data and control flow analysis is used to detect aspects that are guaranteed to maintain some classes of linear temporal logic properties. That is, when the aspects are added to a system that satisfied the desired properties, these properties will remain true. Categories of advices (code segments) and introduced methods of aspects are defined. These categories are shown to be effective, in that they both provide real aid in the verification of properties and are automatically detectable using data and control flow. An implemented automatic data and control flow tool that detects the category of each advice and introduced method of an aspect is described. The results of applying the tool to aspect systems are summarized.
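
As a rough illustration of the kind of classification such a tool performs, here is a minimal Python sketch, assuming a simplified two-way split ("spectative" advice only reads base-system state; anything that writes it is treated as invasive). The event lists stand in for the output of a real flow analysis, and all names are hypothetical, not the authors' implementation:

# Minimal sketch of dataflow-based advice classification (illustrative only).
def classify_advice(events, base_vars):
    """events: ('read'|'write', variable) pairs observed in one advice body."""
    writes_base = any(op == 'write' and var in base_vars
                      for op, var in events)
    return 'spectative' if not writes_base else 'invasive'

# Example: logging advice reads the base state but writes only its own buffer.
logging_advice = [('read', 'balance'), ('write', 'log_buffer')]
print(classify_advice(logging_advice, base_vars={'balance'}))  # spectative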


Stacking-based Context-Sensitive Points-to Analysis for Java

Xin Li and Mizuhito Ogawa

Points-to analysis for Java infers the heap objects that a reference variable can point to. Existing context-sensitive points-to analyses are mostly cloning-based; they have an inherent limit in handling recursive procedure calls and are hard to scale under deep cloning. This paper presents an alternative stacking-based context-sensitive points-to analysis for Java, based on weighted pushdown model checking. To generate a tractable model, instead of passing global variables as parameters along procedure calls and returns as is usually done, we model the heap memory as a global data structure that provides global references with synchronized data flows.
To accelerate the analysis, we propose a two-staged iterative procedure that combines local exploration (LE) for boosting convergence with global update (GU) for guaranteeing completeness. In particular, summary transition rules that carry cached data flows are carefully introduced to trigger each LE iteration. Empirical studies show that our analysis scales well to Java benchmarks of significant size and achieves a 2.5X speedup with the two-staged iterations.
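
To make the object of study concrete, the following toy sketch shows what a points-to analysis computes, using a deliberately naive flow- and context-insensitive (Andersen-style) propagation; the paper's stacking-based, pushdown-model-checking analysis is far more precise, and the statements below are invented:

# What a points-to analysis computes, shown as a naive fixpoint propagation.
from collections import defaultdict

stmts = [
    ('new',    'a', 'o1'),   # a = new O()   (allocation site o1)
    ('new',    'b', 'o2'),   # b = new O()   (allocation site o2)
    ('assign', 'c', 'a'),    # c = a
    ('assign', 'c', 'b'),    # c = b
]

pts = defaultdict(set)
changed = True
while changed:               # propagate until a fixpoint is reached
    changed = False
    for kind, lhs, rhs in stmts:
        incoming = {rhs} if kind == 'new' else pts[rhs]
        if not incoming <= pts[lhs]:
            pts[lhs] |= incoming
            changed = True

print(dict(pts))             # c may point to both o1 and o2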


Reduction of Interrupt Handler Executions for Model Checking Embedded Software

Bastian Schlich, Thomas Noll, Joerg Brauer and Lucas Brutschy

Interrupts play an important role in embedded software. Unfortunately, they aggravate the state-explosion problem from which model checking suffers. We therefore propose a new abstraction technique based on partial order reduction that minimizes the number of locations at which interrupt handlers need to be executed during model checking. This significantly reduces state spaces while preserving the validity of the verification results. The paper details the underlying static analysis, which is employed to annotate the programs before verification. Moreover, it introduces a formal model that is used to prove that the presented abstraction technique preserves the validity of the branching-time logic CTL*-X by establishing a stutter bisimulation equivalence between the abstract and the concrete transition system. Finally, the effectiveness of this abstraction is demonstrated in a case study.
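
The intuition behind the reduction can be sketched as follows (illustrative data structures, not the authors' tool): the handler needs to be interleaved only at locations whose statements access variables that the handler itself accesses:

# Execute the handler only where the main program touches shared variables.
def relevant_locations(program, handler_vars):
    """program: list of (location, set of variables accessed there)."""
    return [loc for loc, accessed in program if accessed & handler_vars]

main_program = [(0, {'x'}), (1, {'tmp'}), (2, {'flag', 'x'}), (3, set())]
isr_vars = {'flag'}                                 # accessed by the handler
print(relevant_locations(main_program, isr_vars))   # [2]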


Functional Test Generation with Distribution Constraints

Anna Moss and Boris Gutkovich

In this paper, we extend the Constraint Programming (CP) based functional test generation framework with a novel concept of distribution constraints. The proposed extension is motivated by requirements arising in the functional validation field, when a validation engineer needs to stress an interesting architectural event following some special knowledge of the design under test or a specific validation plan. In such cases, a sequence of test instructions or a collection of tests must be generated according to user-given distribution requirements, which specify desired occurrence frequencies for interesting events. The proposed extension raises the expressive power of the CP-based framework and allows distribution requirements to be specified on a collection of Constraint Satisfaction Problem (CSP) solutions. We formalize the notion of distribution requirements by defining the concept of distribution constraints. We present two versions of the problem definition for CP with distribution constraints, both of which arise in the context of functional test generation, and give algorithms to solve each of them. One family of the proposed algorithms is based on CP, while the other makes use of both CP and linear programming (LP) technology. All of the proposed algorithms can be efficiently parallelized to take advantage of multi-core technology. Finally, we present experimental results that demonstrate the effectiveness of the proposed algorithms with respect to performance and distribution accuracy.
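
The shape of the problem can be conveyed by a toy sketch: enumerate the solutions of a small CSP, then assemble a collection of tests whose event frequencies track user-given targets. The greedy selection below is purely illustrative; the paper's algorithms are based on CP and on CP combined with LP:

# Toy generation under distribution constraints.
from itertools import product

# Tiny CSP: 0 <= x, y <= 3 subject to x + y <= 4.
solutions = [(x, y) for x, y in product(range(4), repeat=2) if x + y <= 4]

def event(sol):                        # the "interesting event" of a test
    return 'eq' if sol[0] == sol[1] else 'neq'

targets = {'eq': 0.5, 'neq': 0.5}      # desired occurrence frequencies
N, picked = 10, []
counts = {'eq': 0, 'neq': 0}
for _ in range(N):
    # choose the event currently most under-represented w.r.t. its target
    need = max(targets, key=lambda e: targets[e] - counts[e] / N)
    sol = next(s for s in solutions if event(s) == need)
    picked.append(sol)
    counts[event(sol)] += 1

print(counts)                          # {'eq': 5, 'neq': 5}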


An Explanation-Based Constraint Debugger

Aaron Rich, Giora Alexandron and Reuven Naveh

Constraints have become a central feature of advanced simulation-based verification, and debugging them effectively remains an ongoing challenge. In this paper we present GenDebugger, an innovative GUI-based tool for debugging constraints. GenDebugger shows how a set of constraints is solved in a step-by-step flow, allowing the user to easily pick out the steps that are key to the buggy behavior. It is unique in that it is designed to deal with various types of constraint-related problems, not only constraint conflicts or performance issues. GenDebugger utilizes principles from the explanation-based debug paradigm, in that it displays information justifying each step of the solution. A unique contribution is that it enables the user to easily combine separate explanations of individual steps into a comprehensive explanation of the constraint solver's behavior. GenDebugger was recently released for full production, after an extensive evaluation with early-access users. Our own experience and our users' feedback indicate that GenDebugger provides considerable help in resolving problems found in constraints.
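
The explanation-based idea can be sketched with a toy propagator (not GenDebugger itself): every pruning step is recorded together with the constraint that justified it, so the solution flow can be replayed step by step:

# Toy constraint propagation with an explanation trace.
domains = {'x': set(range(1, 6)), 'y': set(range(1, 6))}
constraints = [
    ('x < y',  lambda d: {'x': {v for v in d['x'] if v < max(d['y'])},
                          'y': {v for v in d['y'] if v > min(d['x'])}}),
    ('y <= 3', lambda d: {'y': {v for v in d['y'] if v <= 3}}),
]

trace = []
changed = True
while changed:
    changed = False
    for name, propagate in constraints:
        for var, newdom in propagate(domains).items():
            if newdom != domains[var]:
                trace.append(f"{name} pruned {var}: "
                             f"{sorted(domains[var])} -> {sorted(newdom)}")
                domains[var] = newdom
                changed = True

print('\n'.join(trace))   # each step carries the constraint that caused it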


Evaluating Workloads Using Multi-Comparative Functional Coverage

Yoram Adler, Dale Blue and Shmuel Ur

There are two main difficulties in measuring coverage for large systems. The first is that system test, being a very complex environment, poses many obstacles, most technical but some organizational, to the collection of coverage data. The second is that there is a huge amount of data, and it is difficult to present it at the right granularity so that the information reaches the right people and has the desired impact on the testing process. In previous papers we have shown how to present system-test coverage data in a very succinct way. In this paper we show how we adapted the process, and modified the coverage tooling, to make it easier for the organization to make use of the coverage data. Once coverage is in place, that data is useful for additional practices such as regression testing and test selection.
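
The flavor of multi-comparative coverage can be shown with a small sketch over invented workloads: per-workload coverage sets are compared to expose events that only one workload hits, or that none does:

# Compare per-workload coverage sets against each other and the model.
universe = {'open', 'read', 'write', 'seek', 'close', 'error_path'}
coverage = {
    'workload_A': {'open', 'read', 'close'},
    'workload_B': {'open', 'write', 'seek', 'close'},
}

all_hit = set.union(*coverage.values())
print('covered by no workload:', universe - all_hit)      # {'error_path'}
for name, hits in coverage.items():
    rest = set.union(*(h for n, h in coverage.items() if n != name))
    print(name, 'uniquely covers:', hits - rest)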


Synthesizing Solutions to the Leader Election Problem using Model Checking and Genetic Programming

Gal Katz and Doron Peled

In a series of recent papers [13-15], we demonstrated a methodology for developing correct-by-design programs from temporal logic specifications using genetic programming. Model checking the temporal specification is used to calculate the fitness function for candidate solutions, which directs the search from initial, randomly generated programs towards correct solutions. This method was successfully demonstrated by constructing solutions for the mutual exclusion problem; later, we also imposed some realistic constraints on access to variables. While the results were encouraging for the genetic synthesis method, the mutual exclusion example has limitations that fit well with the constraints of model checking: the goal was finding a fixed finite-state program, and its state space was moderately small. Here, in a more realistic setting, we tackle the synthesis of a solution for the well-known "leader election" problem, in which a circular, unidirectional network with message passing seeks the identity of a process holding a maximal value. This identity, once found, can be used for synchronization, symmetry breaking, and other network applications. The problem is challenging because it is parametric, and the state space of the solutions grows exponentially with the number of processes. The constructed solutions also have to be correct for every ordering of the process IDs. This paper provides a short description of our model-checking-based genetic synthesis method and our experimental system, describes the problems tackled in solving the current synthesis challenge along with the solutions that were used, and gives details of the experimental results.
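
The overall loop can be sketched as follows, with stand-ins throughout: candidate programs are bit vectors and the fitness stub merely counts matching positions, whereas the real system scores candidates by model checking temporal-logic properties:

# Skeleton of a model-checking-guided genetic programming loop (stand-ins).
import random
random.seed(0)

TARGET = [1, 0, 1, 1]                  # stand-in for a correct program

def fitness(candidate):                # real system: model-checking score
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate):
    c = list(candidate)
    c[random.randrange(len(c))] ^= 1   # flip one "instruction"
    return c

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for gen in range(100):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break                          # a fully correct candidate was found
    population = population[:10] + [mutate(random.choice(population[:10]))
                                    for _ in range(10)]
print('generation', gen, 'best', population[0])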


Reasoning about Finite-State Switched Systems

Dana Fisman and Orna Kupferman

A switched system is composed of components. The components do not interact with one another. Rather, they all interact with the same environment, which switches one of them on at each moment in time. In standard concurrency, a component restricts the environment of the other components, thus the concurrent system has fewer behaviors than its components. On the other hand, in a switched system, a component suggests an alternative to the other components, thus the switched system has richer behaviors than its components.
We study finite-state switched systems, where each of the underlying components is a finite-state transducer. While the main challenge, namely compositionality, is similar in standard concurrent systems and in switched systems, the problems and solutions are different. On the verification front, we suggest and study an assume-guarantee paradigm for switched systems, and study formalisms in which satisfaction of a specification in all components implies its satisfaction in the switched system. On the synthesis front, we show that while compositional synthesis and design are undecidable, the problem of synthesizing a switching rule with which a given switched system satisfies an LTL specification is decidable.
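
A toy simulation conveys the switching semantics (invented transducer, not from the paper): at each moment the environment turns exactly one component on and feeds it the next input, while the other component's state stays frozen:

# Two components share one environment, which switches one of them on per step.
def step(state, inp):
    out = (state + inp) % 2        # output: parity of state and input
    return inp, out                # next state: remember the last input

states   = [0, 0]                  # local state of component 0 and component 1
schedule = [0, 0, 1, 0, 1, 1]      # which component the environment switches on
inputs   = [1, 0, 1, 1, 0, 1]

for who, inp in zip(schedule, inputs):
    states[who], out = step(states[who], inp)
    print(f"component {who} on: input={inp}, output={out}, states={states}")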


Bisimulation Minimisations for Boolean Equation Systems

Jeroen Keiren and Tim Willemse

Boolean equation systems (BESs) have been used to encode several complex verification problems, including model checking and equivalence checking. We introduce the concepts of strong bisimulation and oblivious bisimulation for BESs, and we prove that these can be used for minimising BESs prior to solving them. Our results show that large reductions of the BESs may be obtained efficiently. Minimisation is rewarding for BESs with non-trivial alternations: the time required for solving the original BES exceeds the time required for quotienting plus the time for solving the quotient. Furthermore, we provide a verification example demonstrating that bisimulation minimisation of a process prior to encoding the verification problem on that process as a BES can be arbitrarily less effective than minimising the BES that encodes the verification problem.
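
The minimisation can be pictured via standard signature-based partition refinement, sketched here on a three-equation toy BES (an illustrative encoding, not the authors' implementation): equations are grouped by fixpoint sign and boolean operator, then blocks are split until every equation's successors fall into the same blocks:

# Partition refinement on a toy BES:
#   nu X1 = X2 /\ X3;  nu X2 = X2 /\ X3;  mu X3 = X3 \/ X3
label = {'X1': ('nu', 'and'), 'X2': ('nu', 'and'), 'X3': ('mu', 'or')}
succ  = {'X1': {'X2', 'X3'}, 'X2': {'X2', 'X3'}, 'X3': {'X3'}}

def partition(assign):                 # the blocks induced by an assignment
    groups = {}
    for x, v in assign.items():
        groups.setdefault(v, set()).add(x)
    return {frozenset(g) for g in groups.values()}

block = dict(label)                    # initial partition: group by label
while True:
    sig = {x: (block[x], frozenset(block[s] for s in succ[x])) for x in label}
    if partition(sig) == partition(block):
        break                          # stable: no block splits any further
    block = sig

print(partition(block))                # X1 and X2 end up in the same block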


Diagnosability of Pushdown Systems

Christophe Morvan and Sophie Pinchinat

Partial observation of discrete-event systems features a setting where events split into observable and unobservable ones. In this context, the diagnosis of a discrete-event system consists in detecting defects from the (partial) observation of its executions. Diagnosability is the property that any defect is eventually detected. Not surprisingly, it is a major issue in practical applications. We investigate diagnosability for classes of pushdown systems: it is undecidable in general, but we exhibit reasonably large classes of visibly pushdown systems where the problem is decidable. For these classes, we furthermore prove the decidability of a stronger property: bounded latency, which guarantees the existence of a uniform bound on the response delay after the defect has occurred. We also explore a generalization of the approach to higher-order pushdown systems.
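
For intuition, here is the finite-state version of the question, checked by bounded exploration of a toy automaton (the paper's pushdown setting is much harder, and this bounded check is only an illustration): a fault is not diagnosable when faulty and fault-free runs keep producing identical observations:

# 'f' is the unobservable fault, 'u' an unobservable non-fault, 'a' observable.
trans = {0: [('f', 1), ('u', 2)], 1: [('a', 1)], 2: [('a', 2)]}

def runs(state, depth, faulty=False, obs=()):
    yield obs, faulty
    if depth:
        for ev, nxt in trans.get(state, []):
            seen = obs + ((ev,) if ev not in ('f', 'u') else ())
            yield from runs(nxt, depth - 1, faulty or ev == 'f', seen)

K = 6                                  # bounded look-ahead
faulty_obs  = {o for o, f in runs(0, K) if f}
healthy_obs = {o for o, f in runs(0, K) if not f}
# Non-empty for every K here: the fault can never be told apart, so this
# toy system is not diagnosable.
print('ambiguous observations:', sorted(faulty_obs & healthy_obs))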


An Interpolating Decision Procedure for Transitive Relations with Uninterpreted Functions

Daniel Kroening and Georg Weissenbacher

We present a proof-generating decision procedure for the quantifier-free fragment of first-order logic with the relations =, !=, >=, and > and argue that this logic, augmented with a set of theory-specific rewriting rules, is adequate for bit-level accurate verification. We describe our decision procedure from an algorithmic point of view and explain how it is possible to efficiently generate Craig interpolants for this logic. Furthermore, we discuss the relevance of the logical fragment in software model checking and evaluate its applicability using an interpolation-based program analyser.
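
A worked toy instance, using the classic difference-graph encoding of such atoms (one standard way to decide this fragment, not necessarily the authors' procedure): A: x <= y, y < z together with B: z <= x closes a cycle containing a strict edge, hence the conjunction is unsatisfiable, and summarising the A-chain over the shared symbols x and z yields the interpolant x < z:

# Decide a conjunction of <=/< atoms by negative-cycle detection.
A = [('x', '<=', 'y'), ('y', '<', 'z')]
B = [('z', '<=', 'x')]

verts = {'x', 'y', 'z'}
w = {}                                    # edge weight: 0 for <=, -1 for <
for a, op, b in A + B:
    wt = 0 if op == '<=' else -1
    w[(a, b)] = min(wt, w.get((a, b), wt))

for k in verts:                           # Floyd-Warshall transitive closure
    for i in verts:
        for j in verts:
            if (i, k) in w and (k, j) in w:
                d = w[(i, k)] + w[(k, j)]
                if d < w.get((i, j), d + 1):
                    w[(i, j)] = d

print('unsatisfiable:', any(w.get((v, v), 0) < 0 for v in verts))  # True
# The A-part of the offending cycle is the chain x <= y < z; its endpoints
# x and z are exactly the symbols shared with B, so the chain's summary
# x < z is a Craig interpolant for (A, B).
print('interpolant: x < z')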


Proceedings Publication

Springer Lecture Notes in Computer Science