

HVC 2011
Haifa Verification Conference 2011

December 6-8, 2011
Tutorials: December 5, 2011
Organized by IBM R&D Labs in Israel



Abstracts

Estimating and Modeling VVT Cost, Time and Risks of Engineered Systems
Avner Engel

Systems testing is carried out, one way or another, in all development and manufacturing projects, but it is seldom done in a truly comprehensive and organized manner. Along the same lines, virtually no research on systems Verification, Validation, and Testing (VVT) is conducted in the academic world. This is especially odd, since some 50 percent of systems engineering cost is expended on either performing VVT activities or correcting system defects during the development process or during the product's lifetime.

Therefore, the objective of this tutorial is to learn and exercise:

  • The extent of quality costs in project development
  • Estimation of VVT resources in a group setting
  • Qualitative and quantitative quality cost and time models

Speaker Bio
Avner Engel holds a Ph.D. from the Industrial Engineering department of Tel-Aviv University, an M.Sc. in Computer Systems from the University of New York, and a B.Sc. in Electrical Engineering from the University of Maryland.

His 40-year professional career spans the areas of programming, systems and software engineering, and technical management with several large companies in the US and Israel. For the past twenty years, he has worked for Israel Aerospace Industries (IAI), where he has managed large software projects. He led an international, European Commission (EC) funded research project, SysTest, aimed at developing a methodology and a process model for systems Verification, Validation, and Testing. Consequently, he authored the book "Verification, Validation and Testing of Engineered Systems", published by Wiley in 2010.

Currently, he is leading the EC-supported project AMISA (Architecting Manufacturing Industries and Systems for Adaptability) at the Faculty of Engineering, Tel-Aviv University, Israel. In addition, he teaches at the Faculty of Technology Management, Holon Institute of Technology (HIT), Israel.


Regression Verification: Verifying the Equivalence of Similar Programs
Ofer Strichman

Given two similar programs (e.g., consecutive versions of the same program), the regression verification problem is to indicate those pairs of functions from the two programs that are equivalent under every context. This capability is useful for proving backward compatibility, analyzing the impact of changes, and more. Regression verification has the potential of being easier in practice than functional verification for several reasons: first, it circumvents the problem of specifying what the program should do; second, in many cases it is computationally easier, because it offers various opportunities for abstraction and decomposition that are only relevant in this context; finally, there is an automated method to derive loop and recursion invariants that are typically sufficient for the proof.

I will begin the tutorial by presenting alternative definitions of equivalence between programs, and then show methods for proving them. Specifically, I will focus on partial equivalence (input/output equivalence), mutual termination (the two programs terminate on exactly the same inputs), and equivalence of multithreaded programs.

I will demo two regression-verification prototype tools, RVT (Technion) and SymDiff (Microsoft). I will also survey some other projects around the world that focus on variants of this problem.
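To make the notion of partial equivalence concrete, here is a minimal illustrative Python sketch. It is not the RVT or SymDiff machinery (those tools prove equivalence symbolically for all inputs); it simply compares two syntactically different versions of the same function over a small finite input domain:

    # Illustration of partial (input/output) equivalence, checked naively
    # over a finite input domain rather than proved for all inputs.

    def sum_v1(n):
        # Original version: iterative accumulation.
        total = 0
        for i in range(1, n + 1):
            total += i
        return total

    def sum_v2(n):
        # "Refactored" version: closed-form expression.
        return n * (n + 1) // 2

    def partially_equivalent(f, g, inputs):
        # Partial equivalence: on every input where both terminate,
        # the two functions return the same output.
        return all(f(x) == g(x) for x in inputs)

    if __name__ == "__main__":
        print(partially_equivalent(sum_v1, sum_v2, range(100)))  # True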

Speaker Bio
Prof. Strichman graduated from the Weizmann Institute in 2001, where he completed his Ph.D. under the supervision of the late Amir Pnueli. He then spent two years as a post-doc at Carnegie Mellon before joining the Technion as a senior lecturer. He has published a book on decision procedures together with Daniel Kroening, and over 70 peer-reviewed articles in formal verification and operations research.


Combinatorial Test Design
Rachel Tzoref-Brill

Combinatorial test design (CTD) is a well-known, effective test planning technique that reveals faults resulting from feature interactions in a given system. The test space is manually modeled by a set of parameters, their respective values, and restrictions on the value combinations. A subset of the test space is then automatically constructed by a CTD algorithm so that it covers all valid value combinations of every t parameters, where t is a user input. The reasoning behind CTD is the observation that most bugs depend on the interaction between a small number of parameter values. This observation is also supported by statistical evidence.
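As an illustration of the idea, the following Python sketch greedily builds a pairwise (t = 2) test plan for a small, made-up model with three parameters and one restriction. The model and the greedy construction are only assumptions for illustration; production CTD tools use considerably more sophisticated algorithms:

    from itertools import combinations, product

    # Toy test-space model: parameters, values, and one restriction.
    parameters = {
        "os":      ["linux", "windows"],
        "browser": ["firefox", "chrome"],
        "db":      ["db2", "postgres", "none"],
    }

    def valid(test):
        # Example restriction: running without a database is only allowed on linux.
        return not (test["db"] == "none" and test["os"] != "linux")

    names = list(parameters)
    all_tests = [dict(zip(names, vals)) for vals in product(*parameters.values())]
    valid_tests = [t for t in all_tests if valid(t)]

    def pairs(test):
        # All (parameter, value) pairs of size t = 2 covered by one test.
        return {frozenset({(a, test[a]), (b, test[b])})
                for a, b in combinations(names, 2)}

    uncovered = set().union(*(pairs(t) for t in valid_tests))
    plan = []
    while uncovered:
        # Greedily pick the valid test that covers the most uncovered pairs.
        best = max(valid_tests, key=lambda t: len(pairs(t) & uncovered))
        plan.append(best)
        uncovered -= pairs(best)

    print(len(valid_tests), "valid tests reduced to", len(plan))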

A lot of research has been invested in constructing efficient CTD algorithms and tools that construct as small a test plan as possible. However, when applying CTD to real-life cases, many additional challenges arise, such as those related to the manual process of modeling the test space. In this tutorial we will go through the basic principles of CTD and the combinatorial algorithms that solve it, as well as challenges and solutions for real-life deployment of CTD.

Speaker Bio
Rachel Tzoref-Brill has been a research staff member at the IBM Haifa Research Lab since 2001. Her main areas of work have been research and development of technology for test planning, coverage analysis, testing and debugging of multi-threaded programs, and formal verification. In recent years she has focused on the application of combinatorial testing to real-life problems, including input space modeling, algorithmic issues, and combinatorial analysis of test suites. She holds an M.Sc. and a B.Sc. from the Technion, Israel Institute of Technology.


Invited Speaker: Pioneering the Future of Verification: A Spiral of Technological and Business Innovation
Kathryn Kranen, President and Chief Executive Officer, Jasper Design Automation

Changing the way the world verifies semiconductors and systems takes far more than algorithmic or methodological breakthroughs. Over the past two decades, there have been four or five great verification breakthroughs, while many other promising technologies have been relegated to the dust bin. Bringing a nascent EDA technology to mainstream use and commercial success requires alternating technological and business innovations to accelerate adoption.

In this session, you'll learn key concepts about bringing a disruptive technology to widespread adoption. Kathryn Kranen will share insights gained as a market pioneer of three technologies that have become the major pillars of today's mainstream system-on-chip verification: hardware emulation, constrained-random simulation, and formal property verification. You will also hear some of her visions of what the future of design and verification may hold.

Speaker Bio
Kathryn Kranen is responsible for leading Jasper's team in successfully bringing the company's pioneering technology to the mainstream design verification market. She has more than 20 years of EDA industry experience and a proven management track record. While serving as president and CEO of Verisity Design, Inc., the US headquarters of Verisity Ltd., Kathryn and the team she built created an entirely new market in design verification. (Verisity later became a public company, and was the top-performing IPO of 2001.) Prior to Verisity, Kathryn was vice president of North American sales at Quickturn Systems. She started her career as a design engineer at Rockwell International, and later joined Daisy Systems, an early EDA company. Kathryn graduated summa cum laude from Texas A&M University with a B.S. in Electrical Engineering. Kathryn is serving her fifth term on the EDA Consortium board of directors, and was elected its vice chairperson. In 2005, Kathryn was the recipient of the prestigious Marie R. Pistilli Women in Electronic Design Automation (EDA) Achievement Award. In 2009, EE Times listed Kathryn as one of the "Top 10 Women in Microelectronics".


Invited Speaker: Automated Detection and Repair of Concurrency Bugs
Ben Liblit, University of Wisconsin–Madison

Finding and fixing concurrency bugs is critical in modern software systems. This talk examines two recent efforts to automate both the detection and the repair of certain types of concurrency bugs using a mixture of static, dynamic, and statistical methods.

First, we present a low-overhead instrumentation framework to diagnose production-run failures caused by concurrency bugs. We track specific thread interleavings at run-time, using sparse random sampling to limit overhead. Methods drawn from work in statistical debugging let us identify strong failure predictors among the sampled concurrent behaviors. Our approach offers a spectrum of performance and diagnosis capabilities suitable for wide deployment to real user desktops.

Second, we describe a strategy for automatically fixing one of the most common types of concurrency bugs in real-world code. Starting with descriptions of bad interleavings, our tool automatically inserts synchronization operations to steer future executions away from danger. Static analyses help us maintain good performance while reducing the risk of deadlocks. Dynamic monitoring allows for run-time recovery from deadlocks that could not be statically avoided.
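For illustration only (this is not the speakers' tool or one of their benchmarks), the following Python sketch shows the kind of atomicity violation such bugs represent, and the sort of lock insertion a repair tool might apply to steer executions away from the bad interleaving:

    import threading

    balance = 0
    lock = threading.Lock()   # the kind of synchronization a repair tool might insert

    def deposit_buggy(amount):
        global balance
        tmp = balance          # read
        tmp += amount          # modify
        balance = tmp          # write -- another thread may have written in between

    def deposit_fixed(amount):
        global balance
        # Inserted synchronization makes the read-modify-write atomic,
        # ruling out the bad interleaving.
        with lock:
            balance += amount

    threads = [threading.Thread(target=deposit_fixed, args=(1,)) for _ in range(1000)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(balance)  # always 1000 with the fix; may be less with deposit_buggy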

Overall, our approach yields overheads too low to reliably measure; produces small, simple, understandable patches; and completely eliminates detected bugs in the targeted class across a variety of complex, real-world applications.

Speaker Bio
Ben Liblit is an associate professor in the Computer Sciences Department of the University of Wisconsin–Madison, with research interests in programming languages and software engineering. Professor Liblit worked as a professional software engineer for four years before beginning graduate study. His experience has inspired a research style that emphasizes practical, best-effort solutions that bring formal methods to bear against the ugly complexities of real-world software development.

Professor Liblit completed his Ph.D. in 2004 at UC Berkeley with advisor Alex Aiken. He earned the 2005 ACM Doctoral Dissertation Award for his work on post-deployment statistical debugging, and has received AFOSR Young Investigator and NSF CAREER awards in support of his research.


Invited Speaker: Verification Challenges of Workload Optimized Hardware Systems
Klaus-Dieter Schubert, IBM Deutschland Research and Development GmbH

Over the last couple of years it has become more and more obvious that improvements in chip technology are getting smaller with each generation. Processor frequency has been stagnating for some time now, and single-thread performance of general-purpose processor cores is increasing only very slowly from generation to generation, despite the fact that designers have more and more transistors they can utilize.

However, to stay competitive in the compute server business it is necessary to follow Moore's law and provide significant performance improvements to the customer every year. This raises the question of how this can be achieved when traditional approaches, such as cycle-time improvements and the use of more transistors, are not yielding the desired results. The answer has to be a combination of logic, system, and software design.

This talk will first describe why continuing with "business as usual" will fail going forward. It will then discuss a number of scenarios for workload-optimized systems to overcome these hurdles, before the focus shifts to the question: What challenges will that present to the area of hardware verification?

Speaker Bio
Klaus-Dieter Schubert received the Dipl.-Ing. degree in electrical engineering in 1990 from Stuttgart University (Germany). Subsequently, he joined IBM in Boeblingen and has been responsible for hardware verification of various IBM mainframe systems and their components. He was the technical lead for the hardware verification of the z900 2064 system before he moved to the field of hardware and software co-verification, where he established the concept of virtual power-on (VPO) for zSeries and pSeries systems. From 2006 to 2008, Mr. Schubert was on a work assignment in Austin, Texas, where he led the verification team for the POWER7 microprocessor. Today he is an IBM Distinguished Engineer and the technical leader for the hardware verification of future POWER processors. He has received two IBM Outstanding Achievement Awards for his contributions in the field of hardware verification.


Invited Speaker: Preprocessing and Inprocessing Techniques in SAT
Armin Biere, Johannes Kepler University, Linz

SAT solvers are used in many applications inside and outside of computer science. The success of SAT is based on the use of good decision heuristics, learning, restarts, and compact data structures with fast algorithms. Efficient and effective encoding, preprocessing, and inprocessing techniques are also important in practice. In this talk we give an overview of old and more recent preprocessing and inprocessing techniques, starting with ancient pure literal reasoning and failed literal probing. Hyper-binary resolution and variable elimination are more recent techniques of this century. We discuss blocked-clause elimination, which provides a nice connection to optimizing encodings, and conclude with our recent results on unhiding redundancy fast.
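As a concrete illustration of the "ancient" pure literal rule mentioned above, here is a minimal Python sketch (a toy written for this page, not code from the talk): a literal whose negation occurs in no clause can be set to true, and every clause containing it can be removed, possibly exposing further pure literals:

    # Clauses are frozensets of non-zero integers; a negative integer is a
    # negated variable (DIMACS-style encoding).

    def eliminate_pure_literals(clauses):
        clauses = set(clauses)
        assignment = {}
        while True:
            literals = {lit for clause in clauses for lit in clause}
            pure = {lit for lit in literals if -lit not in literals}
            if not pure:
                return clauses, assignment
            for lit in pure:
                assignment[abs(lit)] = lit > 0   # a pure literal can safely be set true
            # Every clause containing a pure literal is satisfied and removed.
            clauses = {c for c in clauses if not (c & pure)}

    if __name__ == "__main__":
        cnf = [frozenset(c) for c in ([1, 2], [1, -3], [-2, 3], [3, -4])]
        remaining, assignment = eliminate_pure_literals(cnf)
        print(remaining, assignment)   # set() {1: True, 4: False, 2: False, 3: True}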

Speaker Bio
Since 2004, Prof. Armin Biere has chaired the Institute for Formal Models and Verification at the Johannes Kepler University in Linz, Austria. Between 2000 and 2004 he held a position as Assistant Professor in the Department of Computer Science at ETH Zürich, Switzerland. In 1999 Biere worked for a start-up company in electronic design automation, after one year as a post-doc with Edmund Clarke at CMU, Pittsburgh, USA. In 1997 Biere received a Ph.D. in Computer Science from the University of Karlsruhe, Germany. His primary research interests are applied formal methods, more specifically formal verification of hardware and software using model checking, propositional, and related techniques. He is the author or co-author of more than 60 papers and has served on the program committees of more than 45 international workshops and conferences. His most influential work is his contribution to bounded model checking. Decision procedures for SAT, QBF, and SMT developed by him or under his guidance rank at the top of many international competitions. Besides organizing several workshops, Armin Biere was co-chair of SAT'06 and FMCAD'09. He is on the editorial board of the Journal on Satisfiability, Boolean Modeling and Computation (JSAT), and is one of the editors of the Handbook of Satisfiability. He also organizes the Hardware Model Checking Competition.


Proceedings Publication

Springer Lecture Notes in Computer Science



HVC 2011 Poster
