
How to speed up memory characterisation

Posted: 03 May 2013

Keywords: memory characterisation, effective current source model, SPICE simulations

The ever-increasing presence of microprocessors in system-on-chip (SoC) designs has driven a remarkable proliferation of embedded memories. A single chip can contain more than 100 embedded memory instances, including ROMs, RAMs and register files, which together can consume up to half the die area. Some of the most timing-critical paths may start, end, or pass through these memory instances, so their models must accurately account for process, voltage, timing and power variability to enable trustworthy chip verification. Memory characterisation is the process of abstracting a memory design into an accurate timing and power model, most commonly consumed by downstream implementation and signoff flows. Ad-hoc approaches to memory characterisation fail to accurately model the data required for faithful SoC signoff, delaying tape-out and increasing the total cost of the design.

Memory characterisation often requires hundreds of SPICE simulations. The number of memory instances per chip and the need to support a wide range of process, voltage and temperature corners (PVTs) make these simulations a daunting task. Also, the growing size of memory instances and sensitivity to process variation add more dimensions to an already challenging undertaking. Further, the need to create library variants for high-speed, low-power and high-density processes makes it imperative to automate the memory characterisation flow.
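To make the scale of the PVT problem concrete, the following sketch enumerates the corner set a characterisation run must cover. The corner names and values are illustrative assumptions, not taken from any particular PDK:

```python
from itertools import product

# Hypothetical corner lists; real values come from the foundry PDK.
processes = ["ss", "tt", "ff"]      # slow, typical, fast process corners
voltages = [0.9, 1.0, 1.1]          # supply voltages in volts
temperatures = [-40, 25, 125]       # junction temperatures in Celsius

# Every combination must be characterised for every memory instance.
corners = [
    f"{p}_{v:.1f}V_{t}C"
    for p, v, t in product(processes, voltages, temperatures)
]

print(len(corners))   # 27 corners per instance
print(corners[0])     # ss_0.9V_-40C
```

With hundreds of instances per chip, even this modest 3x3x3 corner set multiplies into tens of thousands of characterisation runs, which is why automation and distribution are essential.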

Overview of memory characterisation methodologies
Broadly speaking, there are two main methodologies for memory characterisation: using memory compiler-generated models, or characterising individual memory instances. Within instance-based characterisation there is an assortment of approaches, including dynamic simulation, transistor-level static timing analysis and ad-hoc divide-and-conquer.

Memory compilers construct memory instances by abutted placement of pre-designed leaf cells (bit-columns, word- and bit-line drivers, column decoders, multiplexers, sense amplifiers and so on) and routing cells where direct connection is not feasible. The compiler also generates a power ring, defines power-pin locations and creates the various electrical views, netlists and additional files required for downstream verification and integration.

Memory compilers do not explicitly characterise the generated instances; instead, they create models by fitting timing data to polynomial equations whose coefficients are derived from characterising a small sample of memory instances. This approach lets a compiler generate hundreds or thousands of unique memory instances, differing in address size, data width, column/row density and performance. However, the accuracy of the resulting models is poor.
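The fitting step can be sketched as follows. This is a minimal illustration, not any vendor's actual model: it assumes hypothetical sampled delays and a simple delay-versus-log2(words) form, whereas real compilers fit multi-variable polynomials:

```python
import math

# Hypothetical characterised samples: (number of words, access delay in ns).
samples = [(64, 0.42), (256, 0.55), (1024, 0.81)]

# Least-squares fit of delay = a + b * log2(words).
xs = [math.log2(w) for w, _ in samples]
ys = [d for _, d in samples]
n = len(samples)
mx = sum(xs) / n
my = sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = my - b * mx

def predicted_delay(words):
    """Interpolated delay for an instance the compiler never simulated."""
    return a + b * math.log2(words)
```

The appeal is obvious: three SPICE-characterised samples yield a model covering any instance size. The weakness is equally obvious: an instance such as `predicted_delay(512)` is pure interpolation, with no simulation behind it, which is where the accuracy loss and the compensating margins come from.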

To safeguard against chip failure due to inaccurate models, the memory compiler adds margins. However, these margins can lead to more timing-closure iterations, increased power and larger chip area. In addition, the fitting approach does not work well for the advanced current-based models: effective current source model (ECSM) and composite current source (CCS), which are commonly used for timing, power and noise at 40 nm and below.

To overcome the inaccuracies of compiler-generated models, design teams resort to instance-specific characterisation over a range of PVT corners. This process is far more time-consuming but yields more accurate results. Often, however, limitations in the characterisation approach and the available resources mean the accuracy improvement is smaller than it could be, while the cost remains high.

Approaches for memory characterisation
One method for instance-based memory characterisation is to treat the entire memory as a single black box and characterise the whole instance using a FastSPICE simulator. The advantage of this method is that it enables the creation of accurate power and leakage models that truly represent the activity of the entire memory block, and the runs can be distributed across a number of machines to speed up simulation. This approach is not without disadvantages, however: a FastSPICE simulator trades off accuracy for performance. Further, the black-box approach still requires users to identify probe points for characterising timing constraints. For a custom memory, the characterisation engineer can get this information from the memory designer, but it is not available from memory compilers. Finally, this method does not work well for generating some of the newer model formats, such as noise models, and cannot be scaled to generate the process-variation models needed for statistical static timing analysis (SSTA).
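Characterising a timing constraint at such a probe point is typically done by a bisection search that brackets the pass/fail boundary with repeated simulations. The sketch below illustrates the search loop only; `simulation_passes` is a hypothetical placeholder for launching a real SPICE/FastSPICE transient run:

```python
# Hedged sketch of a setup-time bisection search. The 0.137 ns threshold is
# an assumed "true" requirement standing in for real simulation results.

def simulation_passes(setup_ns):
    # Placeholder: in practice this runs a transient simulation and checks
    # that the addressed bitcell captures the correct data.
    return setup_ns >= 0.137

def find_setup_time(lo=0.0, hi=1.0, tol=0.001):
    """Bisect between a failing (lo) and a passing (hi) setup value."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if simulation_passes(mid):
            hi = mid   # still passes: the constraint may be tighter
        else:
            lo = mid   # fails: more setup time is needed
    return hi

setup = find_setup_time()   # converges near the 0.137 ns threshold
```

Each bisection step costs one full simulation, and a search like this is needed for every constraint arc at every PVT corner, which is why the black-box method remains expensive even when distributed.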

A second approach is to use transistor-level static timing analysis (STA) techniques, which rely on delay calculators to estimate the delay of sub-circuits within the memory block and identify the slowest paths. The advantages of this method are fast turnaround time and the fact that it requires no vectors to perform timing analysis. However, STA techniques can report false timing violations, which then require further analysis with SPICE/FastSPICE simulators to determine whether they are of real concern.



