Software Testing and Verification Seminar 2003



Abstracts
Interleaving Review Technique (Abstract)
Shachar Fienblit, IBM Haifa Labs

Desk checking is known to be an effective reading technique for the early detection of errors in sequential programs. This paper discusses how to extend desk checking to concurrent and distributed programs, which have an unbounded number of possible schedules and execution states that span more than one process. The new concurrent and distributed desk checking technique supports the selection of the schedules and states to be reviewed; the cross-product technique assists in this selection. Schedule selection guidelines that facilitate early detection and coverage are introduced. Industrial pilots show that concurrent and distributed desk checking is an effective review technique for early error detection.
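To see why reviewers must select schedules rather than enumerate them all, consider how quickly interleavings multiply. The following sketch (our illustration, with hypothetical operation names) counts the interleavings of just two small threads:

```python
from itertools import combinations

def interleavings(a, b):
    """Enumerate all interleavings of two operation sequences,
    preserving each thread's internal order."""
    n, m = len(a), len(b)
    results = []
    # Choose which of the n+m schedule slots hold thread A's operations.
    for slots in combinations(range(n + m), n):
        schedule, ai, bi = [], 0, 0
        for pos in range(n + m):
            if pos in slots:
                schedule.append(a[ai]); ai += 1
            else:
                schedule.append(b[bi]); bi += 1
        results.append(schedule)
    return results

# Two threads, each doing a read then a write on a shared counter.
t1 = ["r1", "w1"]
t2 = ["r2", "w2"]
scheds = interleavings(t1, t2)
print(len(scheds))  # C(4,2) = 6 schedules for just two 2-step threads
```

The count grows as a binomial coefficient in the number of operations, which is why a review technique needs guidelines for picking a small, revealing subset of schedules.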


Proving Termination One Loop at a Time (Abstract)
Michael Codish, Ben Gurion University

Classic techniques for proving termination require the identification of a measure that maps program states to the elements of a well-founded domain. Termination is guaranteed if this measure is shown to decrease with each iteration of a loop in the program. This is a "global" termination condition: there exists a single measure that is shown to decrease over all of the loops in the program. In recent years, systems based on "local" termination conditions have been emerging. Here, termination is guaranteed if for every loop there exists a measure that decreases as execution follows through that loop.

We demonstrate that for a large class of termination problems the two approaches are equally powerful. We illustrate that, in practice, local termination conditions are simpler and easier to find automatically. In theory, however, the local approach requires a closure operation on loops, which may entail considering an exponential number of local conditions.
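The contrast between global and local measures can be made concrete. In the hypothetical loop below (our example, not from the talk), each branch has an obvious local measure (`x` in one, `y` in the other), while a global measure must combine them, here as a lexicographically decreasing pair:

```python
import random

random.seed(0)  # for reproducibility

def run(x, y, max_steps=10_000):
    """A loop that terminates: each iteration either decrements x
    (and may reset y arbitrarily) or decrements y."""
    steps = 0
    while x > 0 and y > 0:
        prev = (x, y)
        if random.random() < 0.5:
            x, y = x - 1, random.randint(0, 100)  # local measure: x decreases
        else:
            y = y - 1                              # local measure: y decreases
        # Global measure: the pair (x, y) decreases lexicographically
        # over the well-founded domain of non-negative integer pairs.
        assert (x, y) < prev
        steps += 1
        assert steps < max_steps  # sanity bound; never reached
    return steps

print(run(5, 5))
```

The per-branch measures are simpler to discover automatically; the price is reasoning about how branches compose, which is where the closure operation on loops comes in.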


A Case Study on Performance Improvement of Applications (Abstract)
Gad Haber, IBM Haifa Labs

We present a case study in which the software of a business application was modified in order to improve its performance.

Applying a set of source-level tuning modifications on the frequently executed portion of the code helped relieve performance bottlenecks in CPU, memory and I/O utilization.

The main tuning techniques discussed are memoization of functions and loops, field packing, and software caching.


GES - Generic Environment Simulator (Abstract)
Yossi Shachar and Ori Kovetz, Rafael

Generic Environment Simulator (GES) is a software tool that runs on a PC platform and simulates an application's natural environment (hardware, software, external logic) in order to test that application.

GES enables systems and software engineers to define the structure of the various messages that are sent within the system, build testing scenarios from them, and define criteria for successful testing. All of these parameters are saved in a database and can be used for repeat testing.

A collection of tests can be defined in GES as an STD, in which case documentation of the progress of the STD is produced automatically from the scenarios and the defined criteria.

In addition, GES supports a range of hardware cards that can interface with the application under test, enabling the definition of displays, graphs, recordings, replay, etc.

GES significantly improves the reliability of the application under test and enables integration and mechanical testing at a very high level. GES is currently used as a generic tool in all departments at the Rafael Armament Development Authority.


Static Code Analysis Procedures in the Development Cycle at Microsoft (Abstract)
Mooly Beeri, Microsoft

Developers are human; they make mistakes. Can we use software to help them write better software?

It is estimated that 80% of the development cycle is spent on fixing bugs. Bugs cost the least when fixed as close as possible to development time. Handling a bug found during a code review is cheaper than handling a bug found in the product itself, and much cheaper than handling a bug found in the product once it has shipped.

Tools are being developed that aim to point out defects in the product's code base. These tools should be applied as close as possible to development time, preferably on the developer's desktop. Some tools are lightweight, while others are heavier in performance terms. We need a way to make these tools part of the development cycle.

In this presentation, we will cover:
  • What are static code analysis tools?
  • What types of tools are used at Microsoft?
    • Lightweight tools
      • Architecture overview
      • Capabilities
      • What types of defects can they detect?
    • Heavyweight tools
      • Architecture overview
      • Capabilities
      • What types of defects can they detect?
  • The development cycle
  • Integrating the tools into the development cycle
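To give a flavor of what a lightweight tool does, here is a minimal sketch (our illustration, unrelated to Microsoft's actual tools) of a syntactic check over a parse tree. It flags the common Python defect of comparing to None with `==` instead of `is`:

```python
import ast

SOURCE = '''
def find(user):
    if user == None:        # defect: identity check written as equality
        return None
    return user.name
'''

class NoneComparisonChecker(ast.NodeVisitor):
    """A minimal 'lightweight' static check: flag `x == None` or
    `x != None`, which should use `is` / `is not`."""
    def __init__(self):
        self.findings = []

    def visit_Compare(self, node):
        for op, comp in zip(node.ops, node.comparators):
            if isinstance(op, (ast.Eq, ast.NotEq)) and \
               isinstance(comp, ast.Constant) and comp.value is None:
                self.findings.append(node.lineno)
        self.generic_visit(node)

checker = NoneComparisonChecker()
checker.visit(ast.parse(SOURCE))
print(checker.findings)  # source line numbers of the flagged comparisons
```

Checks like this run in a single pass over the syntax tree, which is why they are cheap enough for the developer's desktop; heavyweight tools add data-flow or inter-procedural analysis at a much higher cost.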


Fidgeting till the Point of no Return (Abstract)
Marina Biberstein, IBM Haifa Labs

In previous work, we presented the "alternative pasts" algorithm, which delays the assignment of a value to a variable until that variable is used. The algorithm then chooses among the variable values that are consistent with some possible execution. The alternative pasts algorithm can be seen as belonging to a class of algorithms that shadow the execution and may choose, at any point, to modify the values of some of the variables. A shadowing algorithm is sound if it can be shown that for each possible outcome of the program running under the algorithm, there exists an unmodified execution with the same output.

We build on this work and extend it in two directions. First, we show a more powerful shadowing algorithm that delays not only writes but also reads of variables, at most until a relevant control decision is taken, which is the longest possible delay for algorithms of this class. We prove that this algorithm inherits the ability of the alternative pasts algorithm to generate "significantly different" interleavings, which are guaranteed to execute differently. We show that it improves over the alternative pasts algorithm by minimizing dependence on the "base" schedule, i.e., the schedule produced by the runtime on the given system without external intervention.
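The core idea of delaying a write can be sketched in a few lines. The toy class below is our own simplification, not the authors' implementation: racing, unsynchronized writes are recorded as candidates, and a concrete value is fixed only when the variable is first read, at which point any candidate is consistent with some schedule:

```python
import random

class ShadowVar:
    """Toy sketch of delayed writes: unsynchronized writes are kept as
    candidates, and the 'past' is fixed only at the first read."""
    def __init__(self, initial):
        self.candidates = [initial]
        self.value = None          # None means: not yet observed

    def write(self, v):
        if self.value is None:     # still undecided: keep the option open
            self.candidates.append(v)
        else:                      # already observed: behave normally
            self.value = v

    def read(self):
        if self.value is None:
            # Any racing write could have been the last one in some schedule.
            self.value = random.choice(self.candidates)
            self.candidates = [self.value]
        return self.value

x = ShadowVar(0)
x.write(1)    # as if from thread A
x.write(2)    # as if from thread B, racing with A
v = x.read()  # the past is decided only now: 0, 1, or 2 are all consistent
print(v)
```

The real algorithms must, of course, respect synchronization and happens-before constraints when deciding which candidates remain consistent; this sketch only shows the fully unsynchronized case.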


The AGEDIS Productivity Tools and UML Execution Framework (Abstract)
Alan Hartman, IBM Haifa Labs

The AGEDIS project is a European project for the creation of a methodology and tools for automated test generation and execution for distributed systems. The methodology is model based and uses as its primary input a UML model of the application under test. The core tools provided by the AGEDIS consortium include a model compiler, a test generator, and a test execution engine. The additional tools, provided in order to improve the productivity of testers, include a model simulator, a test suite editor and browser, a model coverage analyzer, a defect analysis tool, a bug reporting tool, and a report generator. This talk will focus on the productivity tools, especially the model simulator, coverage analyzer, and defect analyzer. The coverage analyzer and defect analyzer are key elements of a feedback loop in the AGEDIS methodology, whereby the test suite and trace records are analyzed and further test cases are generated in order to increase the reliability of the application under test. The model simulator forms the basis of a UML execution framework in which a tester or developer can walk through execution scenarios, debug the model, or create test cases manually.


Regression-verification of C Programs (Abstract)
Ofer Strichman, Technion

In the last few years, several tools for the verification of industrial-size C programs have been developed. None of these tools is complete (i.e., there are programs whose correctness they cannot decide), but they all have a rather impressive record of proving programs of realistic size and complexity. In this talk, I will survey some of these tools and briefly describe their internal mechanisms. I will then describe a project we soon intend to begin at the Technion, in which we will attempt to construct a tool (based on one of the C verification tools) that formally proves the equivalence of two closely related C programs, under certain restrictions.

The hope is that this will enable developers to step up from regression testing to regression verification. That is, rather than checking that two versions of C code produce the same output for a given set of inputs, developers will attempt to prove this equivalence for all possible inputs. Such a capability will reduce the number of errors compared to regression testing, with less effort than required by formal property verification.
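The difference between the two activities can be illustrated with a pair of hypothetical program versions (our example, in Python rather than C for brevity): an old looping routine and a new closed-form replacement. Regression testing compares them on a few inputs; regression verification aims at all inputs, which we can approximate here by exhausting a bounded domain:

```python
def sum_v1(n):
    """Old version: sum 0..n with a loop."""
    total = 0
    for i in range(n + 1):
        total += i
    return total

def sum_v2(n):
    """New version: closed-form replacement, n*(n+1)/2."""
    return n * (n + 1) // 2

# Regression testing: a handful of spot checks.
for n in (0, 1, 7, 100):
    assert sum_v1(n) == sum_v2(n)

# Toward regression verification: exhaustive over a bounded domain.
# A real verification tool would prove this for *all* inputs symbolically.
assert all(sum_v1(n) == sum_v2(n) for n in range(1001))
print("equivalent on 0..1000")
```

The exhaustive check is still testing, just denser; the tools discussed in the talk replace the enumeration with a symbolic proof that covers the unbounded input space.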

