EE Times-India

Speed up processor verification with testbench infrastructure reuse

Posted: 01 Sep 2011

Keywords: verification, IP, simulation, testbench

If you thought that processor design was dying out, think again. The architecture wars rage on between the major vendors, and plenty of smaller companies still find it worthwhile to develop proprietary architectures (or enhance existing ones) for niche markets. Although fundamental concepts in processor design evolve slowly, the supporting technologies are advancing rapidly.

The growth of these supporting technologies is due, at least in part, to the sheer complexity and dynamics of industrial projects. Powerful algorithms and procedures that deal with all of the common obstacles are tried, tested and generally available. Even so, verification crises such as delayed signoff, or even dead silicon, remain common:
• 61% of new processor designs require a re-spin [IC Insights, 2009]
• 48% of total processor development costs are verification related [IC Economics, 2007]
• 55% of all processor designs are delivered late [IC Insights, 2009]

This situation means that processor verification is still a major activity in the semiconductor industry, and that reliable and predictable processor verification outcomes remain important, if elusive, goals. To reach these goals, the industry must make the inevitable shift towards embracing IP. Without prejudice to existing verification infrastructure, specialised processor verification IP can free engineers from historical development and maintenance commitments. This liberated time and energy can then allow a renewed focus on verification quality and turnaround times.

Figure 1: The processor verification testbench.

Testbench infrastructure
Simulation-based processor verification essentially consists of comparing a reference model (built around an Instruction Set Simulator, or ISS) against the processor RTL using a common set of tests (figure 1). The quality of these tests – their coverage of all interesting behaviour in the minimum amount of time – is essential to project success. In addition, multiple types of coverage are measured, including RTL code coverage and functional coverage of both the architecture and the implementation.
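The comparison described above can be sketched in miniature. The following Python fragment is purely illustrative: it uses a hypothetical three-instruction mini-ISA, and the "DUT" is simply a second model instance standing in for the RTL simulation that a real flow would drive through a simulator interface.

```python
# Lockstep ISS-vs-DUT comparison sketch (hypothetical mini-ISA, not a real flow).

class MiniISS:
    """Golden reference: executes instructions and exposes architectural state."""
    def __init__(self):
        self.regs = [0] * 4  # four general-purpose registers

    def step(self, instr):
        op, rd, a, b = instr
        if op == "addi":           # rd = regs[a] + immediate b
            self.regs[rd] = (self.regs[a] + b) & 0xFFFFFFFF
        elif op == "add":          # rd = regs[a] + regs[b]
            self.regs[rd] = (self.regs[a] + self.regs[b]) & 0xFFFFFFFF
        elif op == "xor":          # rd = regs[a] ^ regs[b]
            self.regs[rd] = self.regs[a] ^ self.regs[b]
        else:
            raise ValueError(f"unknown opcode {op}")

def run_lockstep(reference, dut, program):
    """Run the same test on both models, comparing state after every step."""
    mismatches = []
    for pc, instr in enumerate(program):
        reference.step(instr)
        dut.step(instr)
        if reference.regs != dut.regs:
            mismatches.append((pc, instr, reference.regs[:], dut.regs[:]))
    return mismatches

program = [("addi", 1, 0, 5), ("addi", 2, 1, 7), ("add", 3, 1, 2), ("xor", 0, 3, 1)]
# The "DUT" here is a second ISS instance, standing in for the RTL model.
print(run_lockstep(MiniISS(), MiniISS(), program))  # → [] (states agree)
```

The essential point is the per-instruction state comparison: any divergence is reported with the program counter and instruction that exposed it, which is what makes lockstep checking so effective at localising bugs.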

In spite of the structural simplicity of the processor testbench, its complexity often exceeds that of the design under test. The two major contributors to this are the ISS and the random test generators. The complexity of the ISS is a direct reflection of the complexity of the processor architecture. A random test generator, on the other hand, must scale not only with the processor architecture, but also with its implementation.
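To see why a random test generator must scale with both architecture and implementation, consider a minimal constrained-random sketch. Everything here is hypothetical: the opcode table stands in for the ISA (it grows as the architecture grows), while the bias weights stand in for implementation knowledge (e.g. stressing a newly added forwarding path or cache).

```python
import random

# Constrained-random instruction generator sketch (hypothetical mini-ISA).
# The opcode table scales with the architecture; the weights encode
# implementation-specific biases toward behaviours of interest.
OPCODES = {
    "add":   0.3,
    "addi":  0.3,
    "xor":   0.2,
    "load":  0.1,
    "store": 0.1,
}

def gen_test(length, num_regs=4, seed=None):
    """Generate a reproducible random program of the given length."""
    rng = random.Random(seed)           # seeded for reproducible failures
    ops = list(OPCODES)
    weights = list(OPCODES.values())
    program = []
    for _ in range(length):
        op = rng.choices(ops, weights)[0]
        rd, ra, rb = (rng.randrange(num_regs) for _ in range(3))
        program.append((op, rd, ra, rb))
    return program

test = gen_test(8, seed=42)
print(len(test))  # → 8
```

A production generator adds far more: constraints to keep programs legal, coverage feedback to steer generation, and sequences that target specific micro-architectural corner cases, which is exactly why its complexity tracks the implementation and not just the ISA.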

Maximum infrastructure reuse
Processor projects are not done in isolation, and a major challenge for verification teams is to reuse as much infrastructure as possible from one project to the next. The big question is: as the verification space grows and changes, how can we achieve maximum infrastructure reuse?

To address this question, let us examine the idea of the verification space. Figure 2 depicts the verification space as the product of two main dimensions: the complexity of the architecture and the complexity of the implementation.

Figure 2: A depiction of the verification space.

Architectural complexity is a function of the instructions, exceptions and state defined in the Instruction Set Architecture (ISA), including special behaviours and modes. For general-purpose processor architectures, such as ARM or x86, the ISA expands continually over time as the architecture evolves to cover every possible usage while maintaining backwards compatibility.

Implementation complexity grows with the introduction of new design features. Even a small increase in implementation complexity can cause huge problems for the verification flow. Examples include adding multiple cores, enhanced pipelines, and memory system optimisations.

When changes to a design are made, the verification space grows in two ways. In general-purpose processor families, where backward code compatibility is an important goal, the project-to-project verification space increases in implementation complexity as new micro-architectural features are added (the Y axis of figure 2).

For example, new caching and branch prediction systems might be added to improve performance, or multi-processor operations may have to be supported. This situation is depicted in figure 3a, where the new verification space for the test generator to cover is shown in red.

