It really works! Applying Machine Learning to Simulation-based Verification
IBM Haifa Labs News Center
Now in its third year at the IBM Haifa Research Lab, the Coverage-Directed test Generation project, known as CDG, is being successfully used for simulation-based verification. Approximately 80-90% of today's verification efforts use simulation-based verification, as opposed to formal verification. CDG provides test generators with automated feedback, which can greatly reduce the manual work in the verification process and increase its efficiency.
Simulation-based verification usually follows a circular path. It begins when the engineers first decide on a verification plan. Next, they create directives that guide a random test generator, which produces tests that can be run on the design. Most verification environments today have built-in random generation capabilities, while large and complex designs rely on external test generators. Genesys and X-Gen, two random test generators (RTGs) developed at the IBM Haifa Lab, are currently the primary means of verifying processors across IBM, chiefly at the processor and system levels. CDG itself is used both at the unit level, where test generation is part of the verification environment, and at the processor and system levels together with Genesys and X-Gen.
The generated tests are then used as input to the simulator along with the design being tested. During simulation, monitors and checkers determine whether each behavior passes or fails. Coverage tools such as Meteor, also developed at the IBM Haifa Lab, collect information and provide reports that analyze the design's behavior and verify that all the required types of behavior were tested, or 'covered'. Tools such as Meteor include sophisticated analytical abilities along with full data mining and reporting features. These coverage reports and analyses are fed back into the random test generators so that a new set of more 'directed' tests can be formulated. This is where CDG machine learning comes into play.
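The loop described above can be sketched in a few lines of code. The toy "design", the directive weights, and all function names below are illustrative assumptions, not the behavior of Genesys, X-Gen, or Meteor; the point is only the shape of the directives-generate-simulate-measure cycle.

```python
import random

# Behaviors the (hypothetical) verification plan requires to be covered.
COVERAGE_GOALS = {"A", "B", "C"}

def generate_tests(directives, n=20):
    """Toy random test generator: directive weights bias which opcodes appear."""
    opcodes = ["add", "mul", "branch", "load"]
    weights = [directives.get(op, 1.0) for op in opcodes]
    return [random.choices(opcodes, weights=weights, k=5) for _ in range(n)]

def simulate(test):
    """Toy simulator plus coverage monitor: maps observed behavior to events."""
    covered = set()
    if "branch" in test:
        covered.add("A")
    if test.count("mul") >= 2:
        covered.add("B")
    if "load" in test and "add" in test:
        covered.add("C")
    return covered

def run_loop(directives, iterations=3):
    """One pass around the verification loop: generate, simulate, report the gap."""
    covered_total = set()
    for _ in range(iterations):
        for test in generate_tests(directives):
            covered_total |= simulate(test)
        # In practice a coverage tool reports this gap, and an engineer
        # (or CDG) refines the directives accordingly.
        print("still uncovered:", COVERAGE_GOALS - covered_total)
    return covered_total

random.seed(0)
run_loop({"mul": 3.0, "branch": 2.0})
```

In a real flow, the interesting step is the one this sketch leaves manual: turning the "still uncovered" report back into better directives.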
The Haifa CDG system is unique in that it learns from observing test directives on one hand and coverage data on the other. From these observations, CDG learns the relationships between directives and coverage data, and can then provide directives that improve the probability of hitting specific coverage events, whether already covered or not yet covered.
Say, for example, a test plan requires behaviors A, B, and C to be checked on a processor. CDG looks at what was input to the RTG and what appeared in the coverage reports, and then gives the RTG new directives so it can generate improved test suites. This feedback loop is driven by machine learning: by analyzing the relationship between directives and coverage, CDG automatically provides the test generator with directives that help reach difficult cases, namely non-covered or rarely covered tasks. CDG also enables the same behaviors to be tested using different tests, thereby making the verification process more robust and increasing the chance of uncovering hidden bugs.
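The core idea, learning which directive settings make which coverage events likely and then proposing directives for the uncovered ones, can be illustrated with a simple frequency-count model. The published CDG work models this relationship with Bayesian networks; the estimator below is only a simplified stand-in, and all directive names and observations are invented for illustration.

```python
from collections import defaultdict

def learn_model(observations):
    """Estimate P(event | directive) from (directive, events_hit) pairs."""
    hits = defaultdict(lambda: defaultdict(int))
    totals = defaultdict(int)
    for directive, events in observations:
        totals[directive] += 1
        for event in events:
            hits[event][directive] += 1
    # Probability estimated as hit-count / trial-count per directive setting.
    return {
        event: {d: hits[event][d] / totals[d] for d in totals}
        for event in hits
    }

def propose_directive(model, target_event):
    """Pick the directive setting most likely to hit an uncovered event."""
    probs = model.get(target_event, {})
    return max(probs, key=probs.get) if probs else None

# Invented observations: each pair records which directive setting was
# used and which coverage events that run hit.
observations = [
    ("mul_heavy", {"B"}),
    ("mul_heavy", {"A", "B"}),
    ("branch_heavy", {"A"}),
    ("branch_heavy", {"A"}),
    ("mixed", {"C"}),
    ("mixed", set()),
]
model = learn_model(observations)
print(propose_directive(model, "C"))  # prints "mixed": the only setting that hit C
```

A frequency table like this cannot generalize to directive settings it has never seen; that is exactly the gap a Bayesian network closes, by encoding dependencies between directives and events so that probabilities can be inferred for untried combinations.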
Although coverage-directed test generation is not new, the integration of machine learning has turned this approach from a complex, inefficient one into one that provides successful design verification. Previous attempts at coverage-directed test generation used techniques such as reverse engineering of the design under test and genetic algorithms to create heuristics that would solve the problem.
Using their machine learning approach, Shai Fine and Avi Ziv at the IBM Haifa Research Lab have led the first successful use of coverage-directed generation for simulation-based verification. Their first paper on this approach, "Coverage directed test generation for functional verification using Bayesian networks", published at DAC 2003, has already been cited in three textbooks and is taught as a basic technique in the field of verification.
Courses in simulation-based verification are still difficult to find in Israel, where formal verification is far more popular as an academic field of study. This year, Avi Ziv will be giving a course at the Technion on "Advanced Topics in Simulation-Based Functional Verification" as part of the Computer Science department's curriculum. Shai Fine and Avi Ziv hope this is part of a growing trend in which simulation-based verification, currently driven primarily by industry initiatives, will slowly make its way into the academic environment.
For more information on machine learning for simulation-based verification, see: http://www.haifa.il.ibm.com/projects/verification/ml/cdg.html.