
Making Full Memory IP Robust During Design
by Daniel Payne on 08-28-2020 at 10:00 am

A typical SoC design today contains a massive amount of memory IP: RAM, ROM and register files. Keeping memory close to the CPU makes sense for the lowest latency and highest performance, but what about process variations affecting memory operation? At the recent DAC conference, held online, I was able to get some answers by attending a Designer Track session entitled Ensuring Design Robustness on Full Memory IP using Sigma Amplification, presented by Ashish Kumar of STMicroelectronics.

Ashish showed that a 64Mb SRAM contains over 50,000 MOS devices, 100K nodes and 400K extracted RC parasitic elements, and such a large netlist requires a large number of Monte Carlo simulation runs to ensure robustness. The blocks comprising this SRAM include bit cells, local I/O, row decoders, local control, global I/O and global control, as shown below:

[Figure: 64Mb SRAM example]

There are 64M bit cells, and they require a 6.3 sigma analysis; with classical Monte Carlo that means 6.4B simulations to achieve 99% yield, something that cannot be accomplished in a reasonable run time, so we are clearly in trouble when it comes to achieving a robust design. The other blocks inside the memory have lower instance counts and correspondingly lower sigma goals.
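To see where the 6.3 sigma and 6.4B figures come from, here is a quick back-of-the-envelope check in Python, assuming independent bit cell failures and a one-sided Gaussian tail (my assumptions, not part of the presentation):

```python
from scipy.stats import norm

n_cells = 64e6        # 64M bit cells in the SRAM
target_yield = 0.99   # 99% of memory instances must have no failing cell

# Yield = (1 - p_fail)^n_cells  =>  p_fail = 1 - yield^(1/n_cells)
p_fail = 1.0 - target_yield ** (1.0 / n_cells)

# Equivalent one-sided Gaussian sigma for that per-cell failure probability
sigma = norm.isf(p_fail)

# Classical Monte Carlo needs on the order of 1/p_fail runs just to
# observe a single failing sample
mc_runs = 1.0 / p_fail

print(f"per-cell failure probability: {p_fail:.1e}")   # ~1.6e-10
print(f"required sigma:               {sigma:.1f}")    # ~6.3
print(f"Monte Carlo runs needed:      {mc_runs:.1e}")  # ~6.4e9
```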

The actual bit cell schematic uses seven MOS transistors. During a read operation the signal SAEN pulses high, then the Bit Line (BL) and Bit Line Bar (BLB) nodes start to diverge, creating a differential voltage Vdiff. The larger the value of Vdiff, the more stable and robust the memory cell design.

[Figure: Vdiff]

Using a traditional SPICE circuit simulator like HSPICE from Synopsys you could get 100 Monte Carlo runs in a reasonable amount of time, or maybe 1,000 runs with the high-capacity CustomSim tool, but both fall far short of the 6.4B runs required to meet our robustness goals. Process variations also no longer follow Gaussian distributions (see the areas circled in brown in the plot below), so to get a robust design we need a method that statistically fits the actual distributions.

[Figure: non-Gaussian distributions]

The value of Vdiff can be measured while running Monte Carlo simulations with a varying number of runs, and we find that the more runs we do, the lower the minimum value of Vdiff becomes. With 10,000 runs the minimum Vdiff falls to 260mV, but it takes about 50 hours of circuit simulation to get there.

[Figure: Vdiff vs. Monte Carlo runs]
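The trend is easy to reproduce with a toy experiment: the more samples you draw, the further into the tail the worst observed sample sits. The sketch below uses a purely illustrative Gaussian model of Vdiff (the mean and spread are made-up numbers, not ST's data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative Vdiff distribution only; mean/sigma are invented for the sketch
mean_vdiff_mv = 320.0
sigma_vdiff_mv = 18.0

for n_runs in (100, 1_000, 10_000, 100_000):
    samples = rng.normal(mean_vdiff_mv, sigma_vdiff_mv, size=n_runs)
    # More runs -> deeper into the tail -> lower worst-case Vdiff observed
    print(f"{n_runs:>7} runs: min Vdiff = {samples.min():.0f} mV")
```

The worst case improves only slowly with sample size, which is why brute-force Monte Carlo runs out of steam long before a 6+ sigma target.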

A normal Gaussian distribution is shown in blue, but what we really care about is the tail of the distribution, so if the statistical sample is instead taken in the tail region, as shown by the red curve, then far fewer simulations are required. This technique is called Sigma Amplification.

[Figure: Sigma Amplification]
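The talk does not spell out Synopsys's exact formulation, but one common way to realize the idea is importance sampling with an inflated (amplified) sigma: draw samples from a wider distribution so tail events show up often, then reweight each sample by the ratio of the true density to the sampling density. A minimal sketch, assuming a standard normal performance metric:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def tail_prob_amplified(target_sigma, amp, n_samples, rng):
    """Estimate P(X > target_sigma) for X ~ N(0,1) by sampling from a
    wider N(0, amp^2) distribution and reweighting each sample.
    A generic tail-sampling sketch, not Synopsys's exact algorithm."""
    x = rng.normal(0.0, amp, size=n_samples)
    weights = norm.pdf(x) / norm.pdf(x, scale=amp)   # likelihood ratio
    return float(np.mean(weights * (x > target_sigma)))

target = 5.0                    # sigma level of interest
exact = norm.sf(target)         # true one-sided tail probability
estimate = tail_prob_amplified(target, amp=2.0, n_samples=1_000, rng=rng)

print(f"exact P(X > {target} sigma):     {exact:.1e}")
print(f"1,000-sample amplified estimate: {estimate:.1e}")
```

With only 1,000 amplified samples the estimate lands in the right order of magnitude for a 5 sigma tail, whereas plain Monte Carlo would need millions of runs to see even one event that far out.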

Within the Synopsys circuit simulators you can choose a Sigma Amplification value, and a value of 2 or smaller (shown in grey) is recommended in order to reach an affordable sample size.

[Figure: Sigma vs. sample size]

In the HSPICE GUI a designer can reach 5.8 sigma for a memory block, but instead of taking 109,522,300 Monte Carlo runs it takes only 1,000 runs when using a Sigma Amplification of 1.7:

[Figure: HSPICE GUI]
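To get a feel for why a modest amplification factor buys such a large reduction in run count, here is a crude scaled-sampling estimate (my own back-of-the-envelope, not the tool's sample-size formula): an event at the target sigma in the true distribution sits at only target/amplification sigma in the amplified sampling distribution, so tail samples become far more frequent.

```python
from scipy.stats import norm

target_sigma = 5.8   # sigma goal from the HSPICE example above

print(f"estimated reduction in runs to reach {target_sigma} sigma:")
for amp in (1.2, 1.5, 1.7, 2.0):
    # Fraction of amplified samples landing beyond the target, relative
    # to the fraction under plain Monte Carlo sampling
    reduction = norm.sf(target_sigma / amp) / norm.sf(target_sigma)
    print(f"  amplification {amp:.1f}: ~{reduction:,.0f}x fewer runs")
```

At an amplification of 1.7 this rough estimate gives roughly a 10^5x reduction, in the same ballpark as the 109,522,300 classical runs collapsing to about 1,000 in the screenshot above.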

The minimum value of Vdiff using traditional Monte Carlo with 1,000 runs is 270mV, while Vdiff is calculated to be 140mV when using a Sigma Amplification value of 1.67, showing that we are finding worst-case samples that do not follow a normal distribution. In just 5 hours, Sigma Amplification tells us that Vdiff reaches a much lower value than previously thought, something that was not possible with traditional Monte Carlo simulation.

[Figure: Vdiff vs. Sigma Amplification]

Comparing the Vdiff results across the different distribution assumptions, consider three approaches:

  • Real distribution – traditional Monte Carlo with Gaussian distribution
  • Extrapolated with Gaussian distribution
  • Sigma Amplification

Using the real distribution with 10,000 samples we can only achieve +/- 4 sigma results. If we extrapolate with a Gaussian distribution, then Vdiff can reach +/- 5 sigma with 1,000 samples, but the Vdiff values become inaccurate, giving faster but incorrect answers. With the Sigma Amplification approach we reach +/- 5 sigma results using only 1,000 samples while maintaining acceptable accuracy.

[Figure: comparison table]

Another way to visualize these three approaches is to plot Vdiff as a function of sigma. The real distribution is shown in blue; the most accurate match to it is Sigma Amplification, in grey, while the least accurate approach is extrapolation with a Gaussian distribution, in purple.

[Figure: correlation of Vdiff vs. sigma]

Summary

Memory IP is central to modern SoCs, and robust IP blocks are essential for reliable operation over the product lifetime, but traditional Monte Carlo approaches with Gaussian distributions simply take too much time during the design phase. Fortunately, there are approaches that focus on the tail regions to uncover the worst-case conditions using a statistical distribution fit. Synopsys calls their approach Sigma Amplification, and the theory is borne out on real memory IP, so designers can now achieve a much higher sigma goal with far fewer Monte Carlo runs.

Also Read:

ARC Processor Virtual Summit!

Synopsys Webinar: A Comprehensive Overview of High-Speed Data Center Communications

Accelerating High-Performance Computing SoC Designs with Synopsys IP
