
IBM Research - Haifa

Software Performance and Quality Analysis

Software quality and reliability are essential for attaining a high degree of user satisfaction and acceptance of software products. As software systems and applications become increasingly complex, it becomes ever more challenging to maintain high quality using existing testing and performance analysis practices. The Software Performance and Quality Analysis (SPARQ) group aims to make significant contributions in the area of software quality by developing new technologies, practices, and tools, working in close collaboration with development, testing, and performance analysis groups.

Technology

  • IBM Functional Coverage Unified Solution (IBM FOCUS)

    IBM FOCUS is a general-purpose tool for improving the testing of an application. It helps ensure that both the design of the application and the test suite are complete and cover every aspect of what the application is meant to do. IBM FOCUS is independent of the application's domain, while providing and extending much of the functionality of existing domain-specific tools, such as test planning, advanced code coverage analysis, design and requirements review, and functional coverage.

    For more details, see the short overview on combinatorial test design (CTD) and its value. A slightly deeper introduction to CTD and the IBM FOCUS tool is also available.

  • Collaborative Code Review Tool (CCRT)

    The Collaborative Code Review Tool is a code review plug-in for Eclipse that supports the code review process, particularly the "Selective Homeworkless Review" methodology. The tool naturally integrates into the developer's working environment. It supports a distributed review environment and the various roles used in a review meeting. Reviewers can review the code at the same time, either through a virtual or a face-to-face meeting, or at different times. Review comments and author navigation through the code are visible to all reviewers. The tool also supports a quantitative feedback mechanism that reports the effectiveness of the ongoing review effort.

    For more details, click here.

  • Concurrent Testing Tool (ConTest)

    The Concurrent Testing tool (ConTest) is a comprehensive tool for testing concurrent Java programs, for both unit and system tests. By changing the relative timing of operations among threads, ConTest causes bugs to appear earlier in testing. ConTest works under the hood with the tests you write yourself and increases their power. It supports a host of other features: code coverage, concurrent coverage, deadlock detection, bug search, and more. ConTest is an open platform on which you can write your own tools or features for concurrent testing, debugging, and healing, and add them as plug-ins. A toy illustration of the timing-perturbation idea appears below.

    For more details, click here.
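
    ConTest itself works by instrumenting the program under test, but the core idea can be shown with a self-contained toy example. The sketch below does not use the ConTest API; the class and its names are invented for illustration. It injects random delays between a read and a write of shared state, so that a lost-update race that would otherwise surface rarely shows up within a few runs:

    import java.util.Random;

    // Illustration only: ConTest instruments the program under test
    // automatically; this hand-written sketch merely mimics the idea by
    // injecting random timing "noise" at a shared-memory access.
    public class RaceSketch {
        static int counter;                  // shared state, intentionally unsynchronized
        static final Random rnd = new Random();

        // Stand-in for the delays a tool like ConTest injects.
        static void maybeDelay() {
            if (rnd.nextBoolean()) {
                Thread.yield();
                try { Thread.sleep(rnd.nextInt(3)); } catch (InterruptedException ignored) {}
            }
        }

        public static void main(String[] args) throws InterruptedException {
            int lostUpdates = 0;
            for (int run = 0; run < 100; run++) {
                counter = 0;
                Runnable increment = () -> {
                    int tmp = counter;       // read shared state
                    maybeDelay();            // perturb timing between read and write
                    counter = tmp + 1;       // write back -- may clobber the other thread
                };
                Thread a = new Thread(increment);
                Thread b = new Thread(increment);
                a.start(); b.start();
                a.join(); b.join();
                if (counter != 2) lostUpdates++;    // an interleaving lost an update
            }
            System.out.println("lost updates in " + lostUpdates + " of 100 runs");
        }
    }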

Methodology

  • Test Improvement Methodologies

    The SPARQ group teaches and deploys methodologies and tools that improve the testing of applications via coverage-based techniques.

    Coverage-based techniques can be very helpful in increasing the quality of testing. They can be applied in advance, in the form of test planning, or in retrospect, in the form of coverage analysis. Test planning is the selection of what to test from a possibly enormous space of scenarios, configurations, or conditions, in a way that eliminates redundancy and reduces, as much as possible, the risk of bugs escaping to the field.

    One well-known test planning technique is combinatorial test design (also known as pairwise testing). It is based on the observation that in most cases the appearance of a bug depends on the combination of only a small number of parameters (usually two or three) of the system under test. In combinatorial test design, the space to be tested is therefore modeled by a set of parameters and their respective values, and a subset of the space is selected so that it covers all possible combinations of the values of every two (or more) parameters. The tester can increase or decrease this interaction level according to the available testing resources and the system's quality demands. Once the space to be tested is modeled, creating alternative test plans is very easy.

    To make combinatorial test design effective in the field, the technology must handle various real-life issues, such as scalability, complexity of the space to be tested (e.g., complex constraints on the combinations of parameters), tests that were already executed, and constraints on the distribution of values in the subset of tests to be implemented and executed. Our tool, IBM FOCUS, supports combinatorial test design and handles many of these issues. An excellent article (in Hebrew) about combinatorial test design can be found here. A toy sketch of pairwise test design appears at the end of this item.

    Coverage analysis measures the execution of tests against a model of the program and reports what the tests cover and what they miss. In code coverage, the model is simply the source code of the program. In functional coverage, the model is designated aspects of the system's functionality, such as its inputs and outputs, a snapshot of its state, or possible scenarios. It is important to choose coverage models for areas that the tester or developer considers risky or error-prone.

    Once coverage data is gathered, various analyses help in understanding what is covered and what is missing from the tests. For code coverage, there are standard views, such as source views and hierarchical drill-down views. However, it is sometimes hard to conclude from them which functionality is missing from the tests. We have developed a more advanced analysis called substring hole analysis, in which information about the missing functionality is inferred by extracting common substrings from the names of the non-covered functions. For functional coverage, hole analysis finds the non-covered areas of the model, which point exactly to the missing functionality. The IBM FOCUS tool supports both code coverage and functional coverage analysis. A rough illustration of substring hole analysis appears at the end of this item.

    Another technique that increases the quality of testing is test selection. It is applied once actual tests have been created (and possibly executed), to reduce the testing resources needed and to find bugs earlier in the test-suite run. The most common test selection technique, which is also supported by IBM FOCUS, is based on code coverage: it selects a subset of the tests that has the same code coverage as the original test suite. More advanced techniques are based on clustering the tests according to various parameters and involve static analysis and statistical methods. A minimal sketch of coverage-preserving selection is also given below.
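
    As a concrete toy illustration of the pairwise idea, the sketch below greedily picks tests from the full Cartesian product of three invented parameters (operating system, database, protocol) until every value pair of every two parameters is covered. This is a didactic simplification, not the algorithm used by IBM FOCUS, and it ignores real-life issues such as constraints:

    import java.util.*;

    // Toy pairwise (2-way) test design: greedily pick rows from the full
    // Cartesian product until every value pair of every two parameters is
    // covered. The parameters and values are made up for the example.
    public class PairwiseSketch {
        public static void main(String[] args) {
            String[][] params = {
                {"Linux", "Windows", "zOS"},     // operating system
                {"DB2", "Oracle"},               // database
                {"IPv4", "IPv6"}                 // protocol
            };

            // Enumerate the full Cartesian product (12 tests here).
            List<int[]> all = new ArrayList<>();
            cartesian(params, 0, new int[params.length], all);

            // Every value pair of every two parameters that still needs covering.
            Set<String> uncovered = new HashSet<>();
            for (int i = 0; i < params.length; i++)
                for (int j = i + 1; j < params.length; j++)
                    for (int vi = 0; vi < params[i].length; vi++)
                        for (int vj = 0; vj < params[j].length; vj++)
                            uncovered.add(i + "=" + vi + "," + j + "=" + vj);

            // Greedy: repeatedly take the test covering the most uncovered pairs.
            List<int[]> plan = new ArrayList<>();
            while (!uncovered.isEmpty()) {
                int[] best = null; int bestGain = -1;
                for (int[] t : all) {
                    int gain = 0;
                    for (String p : pairsOf(t)) if (uncovered.contains(p)) gain++;
                    if (gain > bestGain) { bestGain = gain; best = t; }
                }
                uncovered.removeAll(pairsOf(best));
                plan.add(best);
            }

            for (int[] t : plan) {
                StringBuilder sb = new StringBuilder();
                for (int i = 0; i < t.length; i++)
                    sb.append(params[i][t[i]]).append(i < t.length - 1 ? " / " : "");
                System.out.println(sb);
            }
            System.out.println(plan.size() + " of " + all.size() + " tests cover all pairs");
        }

        static void cartesian(String[][] p, int i, int[] cur, List<int[]> out) {
            if (i == p.length) { out.add(cur.clone()); return; }
            for (int v = 0; v < p[i].length; v++) { cur[i] = v; cartesian(p, i + 1, cur, out); }
        }

        static List<String> pairsOf(int[] t) {
            List<String> pairs = new ArrayList<>();
            for (int i = 0; i < t.length; i++)
                for (int j = i + 1; j < t.length; j++)
                    pairs.add(i + "=" + t[i] + "," + j + "=" + t[j]);
            return pairs;
        }
    }

    On this toy model the greedy pass keeps 6 of the 12 possible tests while still covering every pair.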
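
    Substring hole analysis can be illustrated roughly as follows: tokenize the names of the non-covered functions and report name fragments shared by several of them, since such fragments hint at an untested functional area. The function names below are invented, and the tokenization is deliberately naive compared to the real analysis:

    import java.util.*;

    // Rough illustration of substring hole analysis: cluster the names of
    // uncovered functions by shared camelCase tokens to hint at untested
    // functionality. Function names are invented for the example.
    public class SubstringHoleSketch {
        public static void main(String[] args) {
            List<String> uncovered = List.of(
                "parseIPv6Header", "parseIPv6Options", "validateIPv6Checksum",
                "parseHttpTrailer", "logShutdown");

            // Count how many uncovered functions contain each camelCase token
            // (the set dedupes tokens that repeat within one name).
            Map<String, Integer> tokenCount = new TreeMap<>();
            for (String fn : uncovered)
                for (String tok : new LinkedHashSet<>(Arrays.asList(fn.split("(?<=[a-z0-9])(?=[A-Z])"))))
                    tokenCount.merge(tok, 1, Integer::sum);

            // Tokens shared by several uncovered functions point at a "hole".
            tokenCount.forEach((tok, n) -> {
                if (n >= 2)
                    System.out.println("possible hole: \"" + tok + "\" appears in " + n + " uncovered functions");
            });
        }
    }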
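
    Finally, a minimal sketch of coverage-based test selection, assuming made-up per-test coverage data: a greedy pass keeps a subset of tests whose combined code coverage equals that of the full suite:

    import java.util.*;

    // Toy coverage-based test selection (greedy set cover). Test names and
    // covered-line sets are invented; real tools obtain this data from a
    // coverage profiler.
    public class TestSelectionSketch {
        public static void main(String[] args) {
            Map<String, Set<Integer>> coverage = new LinkedHashMap<>();
            coverage.put("testLogin",    Set.of(1, 2, 3, 4));
            coverage.put("testLogout",   Set.of(3, 4, 5));
            coverage.put("testTimeout",  Set.of(1, 2));      // subsumed by testLogin
            coverage.put("testRecovery", Set.of(5, 6, 7));

            // Everything the full suite covers.
            Set<Integer> goal = new HashSet<>();
            coverage.values().forEach(goal::addAll);

            // Greedy: repeatedly keep the test adding the most uncovered lines.
            Set<Integer> covered = new HashSet<>();
            List<String> selected = new ArrayList<>();
            while (!covered.containsAll(goal)) {
                String best = null; int bestGain = 0;
                for (Map.Entry<String, Set<Integer>> e : coverage.entrySet()) {
                    Set<Integer> gain = new HashSet<>(e.getValue());
                    gain.removeAll(covered);
                    if (gain.size() > bestGain) { bestGain = gain.size(); best = e.getKey(); }
                }
                covered.addAll(coverage.get(best));
                selected.add(best);
            }
            System.out.println("selected " + selected + " out of " + coverage.keySet());
        }
    }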

  • Review Moderator Workshop

    Reviews are the most effective technique known today for detecting software problems early. Review effectiveness is skill-sensitive: diverse technical and non-technical skills are required to conduct effective reviews. The review moderator workshop teaches how to be an effective moderator. In a nutshell, the moderator's role is to ensure the effectiveness of the review process. The moderator leads the inspection team, assesses the readiness of the artifact for review, chooses the review participants, and determines whether another review is required. The moderator also ensures the continued selection of the best artifact to review and the best review technique for maximizing the effectiveness of the review process.

    The review moderator workshop covers the different review techniques and their selection process, the moderator role, conducting review meetings, and more. Because there is typically far more code than time available to review it, special attention is given to conducting effective code reviews under time constraints. A running example from an ongoing project is chosen before the workshop begins; the participants spend a limited amount of time (roughly two hours) prior to the workshop reviewing parts of it. The workshop includes a hands-on session in which the participants lead reviews of the running example.

    The workshop usually lasts three days: two days for going over the workshop material and one day for the hands-on session.


Manager (acting)

Sharon Keidar-Barner, Manager, Software Performance and Quality Analysis, IBM Research - Haifa
