
Verification Seminar 2002



Abstracts
Formal Specification Using Sugar 2.0 (Abstract)

Sugar is a language for the formal specification of hardware, and is used to describe properties that are required to hold of the design under verification. One important use of Sugar is for documentation, either in place of, or in conjunction with, an English specification. Another important use of Sugar is as input to verification tools. A Sugar specification can be used as input to a formal verification tool, such as a model checker or a theorem prover, which can automatically determine whether a design obeys its Sugar specification. A Sugar specification can also be used to automatically generate simulation checkers, which can then be used to "informally" check the design using simulation.
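As a flavor of the latter use, a generated simulation checker is essentially a trace monitor. The Python sketch below shows what a hand-rolled monitor for a simple Sugar-style property such as "always (req -> eventually! ack)" might look like; the signal names and trace format are invented for illustration, and no actual Sugar tooling is implied:

    def check_req_ack(trace):
        """Scan a finite simulation trace (a list of {signal: bool} dicts)
        and report requests that are never acknowledged."""
        pending = []                        # cycles with an outstanding req
        for cycle, signals in enumerate(trace):
            if signals.get("ack"):
                pending.clear()             # all outstanding reqs are served
            if signals.get("req"):
                pending.append(cycle)
        return [f"req at cycle {c} never acked" for c in pending]

    trace = [{"req": True}, {}, {"ack": True}, {"req": True}, {}]
    print(check_req_ack(trace))             # -> ['req at cycle 3 never acked']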


Distributed Symbolic Model Checking (Abstract)

We present a method for distributed symbolic model checking. This method copes with the well-known state-explosion problem by pooling the memories of a collection of workstations, so that the collection of machines effectively appears as a single machine with a very large memory.

To present our method, we focus on the subproblem of symbolic model checking, known as reachability analysis (RA). Given an FSM, the RA problem involves computing the set of FSM states that are reachable from the set of initial states. In the classical formulation, the RA problem is viewed as a graph problem, where the vertices represent states and the edges represent FSM transitions.

The sequential RA algorithm uses a variant of Breadth First Search (BFS) that represents the set of states/vertices using BDDs. Our work introduces a distributed BFS algorithm, likewise based on BDDs, in which no single process needs to hold the entire set of states/vertices.
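As background, the sequential loop can be sketched in a few lines of Python; here explicit sets of states stand in for the BDDs used in practice, and the toy FSM is invented for illustration:

    def reachable(init, transitions):
        """init: set of initial states; transitions: state -> set of successors."""
        reached = set(init)
        frontier = set(init)
        while frontier:                             # one BFS layer per iteration
            image = set()
            for s in frontier:
                image |= transitions.get(s, set())  # Img(frontier) under the FSM
            frontier = image - reached              # keep only newly found states
            reached |= frontier
        return reached

    fsm = {0: {1, 2}, 1: {3}, 2: {3}, 3: {3}, 4: {0}}  # state 4 is unreachable
    print(sorted(reachable({0}, fsm)))                 # -> [0, 1, 2, 3]

In the distributed setting described above, these sets are sliced across workstations, so that no single process ever holds them in full.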

Another part of our work presented in this talk is a system, called the Division System, which is intended to serve as a platform for applied research in distributed symbolic model checking. The purpose of the Division System is to provide an infrastructure for implementing new distributed symbolic model checking algorithms, so that new ideas can be explored with relative ease.

Finally, we conclude with the future work we envision in this area.


Full Cycle Functional Coverage: Coverage Success Stories (Abstract)

Functional coverage is a well-known technique for monitoring the verification process in order to ascertain that all the important aspects of the design functionality have been thoroughly tested. In recent years, the popularity of functional coverage has increased significantly, together with tool support for coverage model definition, measurement, and analysis. Still, using functional coverage the "right" way is more an art than a science.
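To make the idea concrete, a cross-product coverage model can be sketched in a few lines of Python; the attributes, values, and events below are invented for illustration and are not from the talk:

    from itertools import product

    opcodes = ["add", "load", "store", "branch"]
    stages  = ["decode", "execute", "writeback"]

    space = set(product(opcodes, stages))   # the coverage model: all legal pairs
    hit = set()

    def record(event):                      # called once per monitored event
        hit.add((event["opcode"], event["stage"]))

    for ev in [{"opcode": "add", "stage": "execute"},
               {"opcode": "load", "stage": "decode"}]:
        record(ev)

    holes = space - hit                     # uncovered tasks drive test refinement
    print(f"coverage: {len(hit)}/{len(space)}; {len(holes)} holes remain")

The list of holes is what feeds the measure-analyze-refine loop described next.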

In this presentation, we use coverage success stories to describe our functional coverage methodology and the benefits it can provide to the verification process. We follow the full life cycle of the coverage process, starting from the definition of the coverage models, through the continuous process of measurement, analysis, and refinement, until the coverage goals are met. The success stories we use in this presentation help illustrate some good and bad practices whose lessons can be used to build a successful methodology for the use of functional coverage.


Coverage Directed Test Generation for Functional Verification Using Bayesian Networks (Abstract)

Functional verification is the process that ensures conformance of a design to its specification, and is widely acknowledged as the bottleneck of the hardware design cycle.

Graphical models are a marriage between probability theory and graph theory. They provide a natural tool for dealing with two problems that occur throughout applied mathematics and engineering -- uncertainty and complexity. In particular, they play an increasingly important role in the design and analysis of machine learning algorithms. Many of the classical multivariate probabilistic systems studied in fields such as statistics, systems engineering, information theory, pattern recognition, and statistical mechanics are special cases of the general graphical model formalism. The graphical model framework provides a way to view all of these systems as instances of a common underlying formalism.

One particular type of graphical model is the Bayesian network, in which the underlying graph contains only directed edges. In this talk, I briefly review the basic elements of Bayesian networks, and describe how they can be used to address one of the main challenges of simulation-based verification: closing a feedback loop from the domain of covered events back to a generator that produces new stimuli for the tested design. This automated feedback from coverage analysis to test generation, termed Coverage Directed Test Generation (CDG), should increase the efficiency and quality of the verification process.
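As a toy illustration of the direction of inference involved, the sketch below inverts a two-node Bayesian network with Bayes' rule to ask which generator directive makes an uncovered event most likely; all names and probabilities are invented:

    # P(directive) and P(event hit | directive), as plain tables
    p_directive = {"short_loop": 0.5, "long_loop": 0.5}
    p_hit_given = {"short_loop": 0.1, "long_loop": 0.7}

    def posterior_directive_given_hit():
        """Bayes' rule: P(directive | event hit)."""
        joint = {d: p_directive[d] * p_hit_given[d] for d in p_directive}
        z = sum(joint.values())
        return {d: v / z for d, v in joint.items()}

    post = posterior_directive_given_hit()
    print(post)                         # long_loop dominates: 0.875
    print(max(post, key=post.get))      # the directive the generator should use

A network used in practice would of course relate many stimulus parameters and coverage attributes at once.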

The application of Bayesian networks to the CDG framework was tested on an abstract model of the pipeline of a PowerPC processor, and exhibited encouraging results.


Automatic Discovery of Mutual Exclusion Algorithms (Abstract)

We present a methodology for automatic generation of synchronization algorithms.
We built a tool and used it to automatically generate hundreds of new algorithms for the well-known problem of mutual exclusion. The methodology is rather simple, and the fact that it is computationally feasible is surprising. Our brute-force approach may require (even for very short algorithms) the mechanical verification of hundreds of millions of incorrect algorithms before a correct one is found. Although many interesting new algorithms have been found, we think the main contribution of this work is in demonstrating that the suggested approach to the automatic generation of (correct) synchronization algorithms is feasible.
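The inner loop of such a search is a mechanical check of each candidate. The Python sketch below shows one way such a check might look: it exhaustively explores the interleavings of two processes running a deliberately broken "test, then set flag" protocol (invented here for illustration) and detects the mutual exclusion violation:

    from collections import deque

    def step(pc, my, other):
        """One step of a process: returns (new pc, new value of its own flag)."""
        if pc == 0:                  # wait until the other flag is down
            return (1, my) if not other else (0, my)
        if pc == 1:                  # raise own flag (too late: race window)
            return (2, 1)
        if pc == 2:                  # critical section
            return (3, my)
        return (0, 0)                # pc == 3: lower flag and loop

    def violates_mutex():
        init = (0, 0, 0, 0)                      # (pc0, pc1, flag0, flag1)
        seen, queue = {init}, deque([init])
        while queue:
            pc0, pc1, f0, f1 = queue.popleft()
            if pc0 == 2 and pc1 == 2:
                return True                      # both in the critical section
            npc0, nf0 = step(pc0, f0, f1)        # process 0 moves, or
            npc1, nf1 = step(pc1, f1, f0)        # process 1 moves
            for state in [(npc0, pc1, nf0, f1), (pc0, npc1, f0, nf1)]:
                if state not in seen:
                    seen.add(state)
                    queue.append(state)
        return False

    print(violates_mutex())                      # -> True: candidate rejected

A generator would enumerate candidate programs from a small instruction vocabulary and keep only those that pass such checks, along with further properties such as deadlock freedom, which this sketch omits.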


Verifying the Discovery™ System Controllers Family - A Case Study (Abstract)

The Marvell® Discovery™ family of system controllers has started a new era in Marvell® SoC designs. Discovery™ devices integrate high-performance system peripherals and LAN/WAN communication ports with the industry's highest performance 64-bit RISC system controller. The first member of the Discovery™ family, the Discovery™ GT-64260, was fully functional in first silicon. This achievement was accomplished by a dedicated verification team that applied modern verification techniques. We present the technologies and efforts used in the pre-silicon verification process, and their results.


From Aerospace to C. elegans: Modeling and Analyzing the Behavior of Complex Reactive Systems (Abstract)

An interesting duality is exhibited between intra-object, state-based behavior (e.g., using statecharts) and inter-object, scenario-based behavior (using live sequence charts; LSCs). In the latter approach, the behavior is "played in" directly from the system's external or internal GUI, and can then be "played out" freely, much as if it had been modeled with statecharts or programmed in code. Play-out is thus a rather surprising way of working with a fully operational system, directly from its inter-object requirements. This approach is supported and illustrated by a tool we developed, called the play-engine. In the tool, we incorporated a "smart" play-out mode, whereby parts of the execution plan are computed using model checking. Thus, we employ formal verification techniques to drive the execution. The entire approach appears to be useful in many stages of the development of reactive systems, and can possibly pave the way to systems that are constructed directly from their requirements, without the need for intra-object or intra-component modeling or coding. If time permits, I will also describe a rather ambitious biological application.
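In the roughest possible terms, play-out executes the scenarios themselves. The Python toy below treats a "chart" as a trigger plus the system messages that must follow; all chart contents and event names are invented, and real LSCs are far richer, with partial orders, conditions, and anti-scenarios:

    CHARTS = [
        ("button_pressed", ["light.on", "timer.start"]),
        ("timeout",        ["light.off"]),
    ]

    def play_out(user_events, charts):
        """Drive the system straight from its scenarios: each user event
        activates the matching chart, whose messages are then emitted."""
        log = []
        for event in user_events:
            log.append(f"user: {event}")
            for trigger, responses in charts:
                if trigger == event:
                    log.extend(f"system: {m}" for m in responses)
        return log

    for line in play_out(["button_pressed", "timeout"], CHARTS):
        print(line)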


Improved Abstraction/Refinement Verification Algorithms with Backtracking and Layering (Abstract)

Localization reduction is an abstraction-refinement scheme for model checking, introduced by Bob Kurshan as a means of tackling state explosion. It is completely automatic, but despite the work that has been done on this scheme, it still suffers from high computational complexity. In this work, we present algorithmic improvements to localization reduction that enabled us to overcome some of these problems. Namely, we present a new symbolic algorithm for path reconstruction, including incremental refinement and backtracking. We implemented these improvements and compared them to previous work on a large number of our industrial examples. In some cases, the improvement was dramatic. Using these improvements, we were able to verify circuits that we were previously unable to address.
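For readers unfamiliar with the scheme, its control flow can be sketched as follows; the toy model, property, and refinement policy are all invented, and a real implementation works symbolically on circuit latches rather than enumerating states:

    from itertools import product

    DOMAIN = {"x": range(4), "y": range(4)}
    INIT = {"x": 0, "y": 0}

    def step(s):
        """Concrete transition: x saturates at 2; y counts only when x == 3."""
        return {"x": min(s["x"] + 1, 2),
                "y": s["y"] + 1 if s["x"] == 3 else s["y"]}

    def bad(s):
        return s["y"] == 3                       # property: y never reaches 3

    def proj(s, vs):
        return tuple(s[v] for v in vs)

    def abstract_cex(vs):
        """BFS over the abstraction that keeps only the variables in vs;
        hidden variables range freely. Returns a path to 'bad' or None."""
        states = [dict(zip(DOMAIN, v)) for v in product(*DOMAIN.values())]
        init_abs = proj(INIT, vs)
        paths, frontier = {init_abs: [init_abs]}, [init_abs]
        while frontier:
            nxt = []
            for a in frontier:
                for s in states:
                    if proj(s, vs) != a:
                        continue
                    if bad(s):
                        return paths[a]          # abstract counterexample
                    b = proj(step(s), vs)
                    if b not in paths:
                        paths[b] = paths[a] + [b]
                        nxt.append(b)
            frontier = nxt
        return None

    def spurious(path, vs):
        """Path reconstruction: try to replay the abstract path concretely."""
        states = [INIT]
        for a in path[1:]:
            states = [step(s) for s in states if proj(step(s), vs) == a]
        return not any(bad(s) for s in states)

    vs = ("y",)                                  # start from the property's support
    while True:
        cex = abstract_cex(vs)
        if cex is None:
            print(f"property holds under abstraction {vs}")
            break
        if not spurious(cex, vs):
            print(f"real counterexample: {cex}")
            break
        vs = tuple(DOMAIN)                       # refine: pull in more variables
        print(f"spurious counterexample, refining to {vs}")

The replay step (the spurious check here) is where the talk's symbolic path reconstruction, incremental refinement, and backtracking come into play.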


Information Flow Models for Shared Memory with an Application to the PowerPC Architecture (Abstract)

This paper introduces a generic framework for defining instructions and programs, and the semantics of their instantiation by operations in a multiprocessor environment. The framework captures the information flow between operations in a multiprocessor program by means of a reads-from mapping from read operations to write operations. Two fundamental relations are defined on the operations: a program order between operations that instantiate the program of some processor, and view orders, which are specific to each shared memory model. An operation cannot read from the "hidden" past or from the future; future and past causality can be examined either relative to the program order or relative to the view orders.
A shared memory model specifies, for a given program, the permissible transformations of resource states. The memory model should reflect the programmer's view by stating the guaranteed behavior of the multiprocessor at the interface visible to the programmer. The model should refrain from dictating the design practices to be followed by the implementation. Our framework allows an architect to reveal the programming view induced by a shared memory architecture; it serves programmers exploring the limits of the programming interface, and guides architecture-level verification. The framework is applicable to complex, commercial architectures, since it can capture subtle programming-interface details while exposing the underlying aggressive microarchitecture mechanisms. As an illustration, we use our framework to define the shared memory model supported by the PowerPC architecture.
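To give a feel for the formalism, the Python sketch below encodes a two-processor execution, a reads-from mapping, and a check of the "no read from the hidden past or from the future" condition relative to program order only; the operations and mapping are invented, and view orders are omitted entirely:

    OPS = [
        ("P0", 0, "write", "x", 1),
        ("P0", 1, "write", "x", 2),
        ("P0", 2, "read",  "x", None),
        ("P1", 0, "read",  "x", None),
    ]

    RF = {                            # reads-from: read op -> write op it observes
        ("P0", 2): ("P0", 0),         # reads x=1, skipping the later write of x=2
        ("P1", 0): ("P0", 1),
    }

    def violations(ops, rf):
        byid = {(p, i): (kind, loc) for p, i, kind, loc, _ in ops}
        errs = []
        for (rp, ri), (wp, wi) in rf.items():
            _, loc = byid[(rp, ri)]
            if wp == rp and wi > ri:
                errs.append(f"{(rp, ri)} reads from its own future {(wp, wi)}")
            elif wp == rp and any(wi < j < ri and byid[(rp, j)] == ("write", loc)
                                  for p, j, *_ in ops if p == rp):
                errs.append(f"{(rp, ri)} reads from the hidden past {(wp, wi)}")
        return errs

    print(violations(OPS, RF))
    # -> ["('P0', 2) reads from the hidden past ('P0', 0)"]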
Index terms: shared memory, multiprocessor systems, PowerPC architecture, models, specification, consistency, synchronization instructions, out-of-order execution. PowerPC is a trademark of International Business Machines Corporation.


What's Between Constraint Satisfaction and Random Test Program Generation (Abstract)

Automatic generation of test programs plays a major role in the verification of modern processors and hardware systems. In this paper, we formulate the generation of test programs as a Constraint Satisfaction Problem (CSP) and develop techniques for dealing with the challenges we face, most notably huge variable domains (e.g., on the order of 2^64) and the need to randomly generate "well distributed" samplings of the solution space. We describe several applications of our method, which include specific test generators targeted at various parts of a design or stages of the verification process.
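As a toy illustration of the problem shape, the sketch below generates a load address subject to alignment and range constraints, using rejection sampling as a stand-in for a real randomized CSP solver; the constraints and the (tiny) domains are invented:

    import random

    def generate(seed=None):
        """Sample (base, offset) so the effective address is 64-byte aligned
        and falls in a legal window; fresh random draws keep repeated
        solutions spread across the solution space."""
        rng = random.Random(seed)
        for _ in range(100_000):               # randomized search, not enumeration
            base = rng.randrange(0, 1 << 16)
            offset = rng.randrange(-128, 128)
            ea = base + offset                 # effective address
            if ea % 64 == 0 and 0x1000 <= ea < 0x2000:
                return {"base": base, "offset": offset, "ea": hex(ea)}
        raise RuntimeError("no solution found")

    print([generate(s) for s in range(3)])     # three well-spread solutions

A real solver facing domains on the order of 2^64 cannot afford blind resampling; it must prune with constraint propagation while still making randomized choices.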


Banias - Design and Verification of a Mobile-specific CPU (Abstract)

Banias is a CPU designed at Intel Israel specifically for the mobile market. The goal of the design was to provide maximum performance within a given power envelope. The talk describes the functional validation methodology used to validate Banias. It outlines the various methods used for functional validation, including dynamic validation methods, formal verification methods, performance validation, and power validation. We outline the reasoning behind each method and where/when each method can be applied. We focus on the advantages and disadvantages of each method and discuss what can be done to improve the validation of similar CPUs in the future.

