Post Silicon Validation Technologies and Verification Analytics

Functional verification has been, and continues to be, one of the most challenging and time-consuming activities in the modern microprocessor design process. The Post Silicon Validation Technologies and Verification Analytics group addresses two main areas in verification: post-silicon validation, and the management and analysis of a large-scale verification process.

Post-Silicon Validation Technologies

The size and complexity of modern hardware systems have turned their functional verification into a mammoth task. Verifying such systems consumes tens or hundreds of person-years and requires the computing power of thousands of workstations. Even with all this effort, it is virtually impossible to eliminate all bugs in a design before tape-out; in fact, a recent survey shows that only about 30% of designs achieve first-silicon success. Moreover, in many cases, project plans call for several planned tape-outs at intermediate stages of the project before the final release of the system. As a result, an implementation of the system on silicon, running at real-time speed, becomes available before the design is final. This silicon is used, among other things, as an intermediate and final vehicle for functional validation of the system, in a process known as post-silicon validation.

In recent years, it has become increasingly evident that neither pre-silicon verification nor post-silicon validation can achieve its goals on its own. This creates a growing need to bridge the gap between the two domains by sharing methodologies and technologies, allowing easier integration between them.

The focus of this activity is the stimuli generation aspect of a unified pre- and post-silicon methodology. We carry over many of the concepts and technologies that make constrained-random test generators successful in pre-silicon verification to the post-silicon domain. Specifically, we are developing Threadmill, a user-directable tool that uses declarative test templates for post-silicon validation, similar to those used in Genesys-Pro. In addition, we have developed FANE, a tool for analyzing failures detected by software running on silicon, acceleration, and emulation platforms.
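To make the idea of a declarative test template concrete, here is a deliberately simplified sketch in Python. The template format, opcode names, and constraints below are invented for illustration; Threadmill's and Genesys-Pro's actual template languages are far richer. The sketch only shows the general shape of the technique: a declarative description of a family of tests, expanded by constrained-random choices into concrete stimuli.

```python
import random

# Hypothetical, minimal test template: declares *what* a legal test looks
# like (opcodes, registers, address constraints), not the concrete test.
TEMPLATE = {
    "count": 5,                          # instructions per generated test
    "opcodes": ["LOAD", "STORE", "ADD"],
    "registers": [0, 1, 2, 3],           # r0..r3
    "address_align": 8,                  # constraint: 8-byte-aligned addresses
    "address_range": (0x1000, 0x2000),
}

def generate(template, seed=None):
    """Expand a declarative template into one concrete random test."""
    rng = random.Random(seed)
    low, high = template["address_range"]
    align = template["address_align"]
    test = []
    for _ in range(template["count"]):
        op = rng.choice(template["opcodes"])
        reg = rng.choice(template["registers"])
        if op in ("LOAD", "STORE"):
            # Satisfy the alignment constraint by sampling only aligned slots.
            addr = rng.randrange(low, high, align)
            test.append(f"{op} r{reg}, 0x{addr:x}")
        else:
            src = rng.choice(template["registers"])
            test.append(f"{op} r{reg}, r{src}")
    return test

for line in generate(TEMPLATE, seed=42):
    print(line)
```

Because the template is data rather than code, the same description can drive generation in different environments; different random seeds then yield many distinct tests that all satisfy the declared constraints.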

Verification Analytics

Our main project in this domain is the Verification Cockpit. The goal of this project is to create a central planning, tracking, analysis, and control platform for large-scale hardware verification projects. Based on the Rational lifecycle management and reporting tool set (including Rational Team Concert and Rational Insight), the project integrates existing standalone verification tools and data sources, such as test plans, coverage, and test execution environments, into a consolidated verification platform. The project also harnesses advanced analytics methods to enhance the verification process.

Performance Verification

The vast resources invested in producing next-generation processors and systems are intended primarily to improve their run-time performance and, to a lesser extent, their power consumption. In fact, even when new functionality is introduced to the processor, it is done only to achieve better performance and power consumption on the intended tasks. Increasingly sophisticated micro-architectural mechanisms lie at the heart of these generation-to-generation improvements. They include deep and wide execution pipelines, shared cache hierarchies synchronized between processor cores, advanced branch predictors that allow speculative execution before conditional decisions are resolved, and hardware structures that allow fast access to critical resources such as complex address translation tables.

These tremendous design and implementation efforts are wasted if the desired performance gain is not reached. Performance, however, is ultimately determined only by the runtime of the large, relevant industrial workloads the system is intended to run. Such test runs cannot realistically be performed while the hardware is still being designed, because no simulation of the design is capable of running even a minuscule fraction of the workload intended for the real hardware.

Our performance verification research efforts are intended to bridge this apparent contradiction between the crucial business need to verify the performance of the new processor design on its intended workloads, and the inability to run those workloads before the design is cast in silicon and the full system is built. We do this using specially crafted software models of the processor, careful design of tests intended to check its non-functional behavior (e.g., the number of cycles it takes to perform a given low-level task, and the implications this may have for high-level programs), and big-data analysis of a very large pool of simulation results. All of this is coupled with a deep understanding of modern micro-architectural mechanisms.
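The notion of a non-functional test can be illustrated with a small sketch: rather than checking *what* a task computes, the test checks *how long* it takes against a budget. Real performance verification of this kind runs against cycle-accurate software models of the processor; the sketch below substitutes wall-clock time on the host, and the task, budget, and helper names are invented for the example.

```python
import time

def low_level_task(n):
    """Stand-in for a low-level task whose latency we want to bound."""
    total = 0
    for i in range(n):
        total += i * i
    return total

def measure_seconds(fn, *args, repeats=5):
    """Return the best-of-N runtime of fn(*args), reducing timer noise."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - start)
    return best

BUDGET_SECONDS = 1.0  # invented performance budget for this sketch
elapsed = measure_seconds(low_level_task, 100_000)
print(f"elapsed: {elapsed:.6f}s (budget: {BUDGET_SECONDS}s)")
# The "verification" step: fail the test if the budget is exceeded.
assert elapsed < BUDGET_SECONDS, "performance regression detected"
```

In an industrial flow, the same pattern applies with cycle counts from the processor model in place of wall-clock time, and with budgets derived from the performance targets of the design; large pools of such measurements then feed the big-data analysis described above.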


Ronny Morad, Manager, Post Silicon Validation Technologies and Verification Analytics, IBM Research - Haifa