Webinar: Fast and Accurate High-Sigma Analysis with Worst-Case Points

by Daniel Payne on 11-02-2023 at 10:00 am

IC designers are tasked with meeting specifications like robustness in SRAM bit cells, where the probability of a violation must be lower than 1 part-per-billion (1 ppb). Another example of robustness is a flip-flop register that must have a probability of specification violation lower than 1 part-per-million (1 ppm). Using Monte Carlo simulation at the SPICE level for normally distributed performance, reaching 1 ppm with a small sample size requires 4.75-sigma analysis, while reaching 1 ppb increases that to 6.0-sigma analysis. The problem is that for non-normally distributed performance the standard Monte Carlo approach requires a sample size that is simply too large to simulate, so a more efficient approach is required, and that’s where high-sigma analysis and worst-case points come into use.
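The sigma levels quoted here follow directly from the one-sided tail of a standard normal distribution. As a quick sanity check, here is a minimal sketch using only the Python standard library (the function name is mine, for illustration):

```python
import math

def normal_tail_prob(sigma: float) -> float:
    """One-sided tail probability P(X > sigma) for a standard normal."""
    return 0.5 * math.erfc(sigma / math.sqrt(2.0))

# 4.75 sigma corresponds to roughly 1 failure per million samples
print(f"4.75 sigma -> {normal_tail_prob(4.75):.2e} (about 1 ppm)")
# 6.0 sigma corresponds to roughly 1 failure per billion samples
print(f"6.00 sigma -> {normal_tail_prob(6.00):.2e} (about 1 ppb)")
```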

Register for this MunEDA webinar scheduled for November 14th at 9AM PST, and be prepared to have your questions answered by the experts.

MunEDA is an EDA vendor with much experience in this area of high-sigma analysis methods, and they will be presenting a webinar on the topic in November. I’ll describe some of the benefits of attending this webinar for engineers who need to design for robustness.

In the non-normal distribution case, proving that the failure rate is below a required limit of 1 ppm, or 4.75 sigma, requires 3 million simulations. Estimating the failure rate to 95% accuracy as being between 0.5 ppm and 1.5 ppm requires a much larger 15.4 million simulations. Achieving 6.0 sigma with this same math then requires billions of simulations, something impractical to even consider.
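Those sample sizes can be reproduced with simple binomial statistics. A sketch of the arithmetic (the helper names are my own, not from the webinar): expecting to see about 3 failures at rate p takes 3/p runs, and a 95% confidence half-width of w takes roughly 1.96² · p(1−p)/w² runs.

```python
def samples_to_observe(p: float, expected_failures: int = 3) -> float:
    """Samples needed to expect a handful of failures at failure rate p."""
    return expected_failures / p

def samples_for_ci(p: float, half_width: float, z: float = 1.96) -> float:
    """Samples so the 95% binomial confidence half-width equals half_width."""
    return z**2 * p * (1.0 - p) / half_width**2

p = 1e-6                            # 1 ppm target, about 4.75 sigma
print(samples_to_observe(p))        # 3 million runs just to see ~3 failures
print(samples_for_ci(p, 0.5e-6))    # ~15.4 million runs for +/- 0.5 ppm
```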

The webinar goes into detail on how parameter variation and yield are handled by Monte Carlo techniques, comparing brute-force random sampling against using an optimizer to search the failure region for the highest density of failing points. The worst-case point is the point in the failure region with the highest density of failing samples, which is the failing point closest to the mean of the passing values.
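To make the idea concrete, here is a toy illustration of locating a worst-case point — my own construction for an invented 2-D spec boundary, not MunEDA’s actual algorithm. In a normalized variation space the mean sits at the origin, and the worst-case point is the failing point closest to it:

```python
import math

# Toy spec: in a normalized 2-D variation space the design fails when
# g(x, y) > 0 (a slightly non-linear boundary, invented for illustration).
def fails(x: float, y: float) -> bool:
    return 0.2 * x * x + x + y - 4.0 > 0.0

def worst_case_point(n_dirs: int = 3600, r_max: float = 10.0):
    """Scan directions from the mean (origin) and bisect each ray for the
    failure boundary; return the closest failing point found."""
    best = None
    for k in range(n_dirs):
        th = 2.0 * math.pi * k / n_dirs
        cx, cy = math.cos(th), math.sin(th)
        if not fails(r_max * cx, r_max * cy):
            continue                      # this ray never reaches failure
        lo, hi = 0.0, r_max               # origin passes, r_max fails
        for _ in range(60):               # bisect for the crossing radius
            mid = 0.5 * (lo + hi)
            if fails(mid * cx, mid * cy):
                hi = mid
            else:
                lo = mid
        if best is None or hi < best[0]:
            best = (hi, hi * cx, hi * cy)
    return best                           # (distance beta, x, y)

beta, wx, wy = worst_case_point()
print(f"worst-case distance = {beta:.3f} sigma at ({wx:.3f}, {wy:.3f})")
```

For this slightly non-linear boundary the search converges to a worst-case distance of about 2.33 sigma; a production tool would of course use far fewer simulations than a direction scan, but the geometric picture is the same.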

[Figure: Worst-case Point]

Just knowing where this worst-case point is located helps guide where SPICE simulations should be run, and even helps during analog yield optimization. Failure rates can be estimated from worst-case distances. Different sampling methods at the worst-case point are introduced and compared. The First Order Reliability Model (FORM) is a straight line drawn through the worst-case point, serving as the boundary between passing and failing regions.
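Given the worst-case distance, the FORM estimate is just the normal tail beyond that distance, and for a linear boundary it is exact — which a cheap Monte Carlo run at a low sigma level can confirm. A sketch under that linear-boundary assumption:

```python
import math
import random

def form_failure_prob(beta: float) -> float:
    """FORM estimate: P_fail ~ Phi(-beta), the normal tail beyond beta."""
    return 0.5 * math.erfc(beta / math.sqrt(2.0))

# Cross-check at a low sigma level, where brute-force Monte Carlo is cheap:
# a linear failure boundary at distance beta from the mean.
random.seed(0)
beta = 2.0
n = 200_000
mc = sum(1 for _ in range(n) if random.gauss(0.0, 1.0) > beta) / n
print(f"FORM: {form_failure_prob(beta):.5f}  Monte Carlo: {mc:.5f}")
```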

[Figure: First Order Reliability Model (FORM)]

The error from using the FORM approximation is shown to be small. The algorithms for finding the worst-case point are presented, showing how few simulation runs are required to find 6-sigma values with small errors.

The performance functions of the SRAM bit cell are shown to be continuous and only slightly non-linear, so using the FORM approach results in small errors. MunEDA has applied these high-sigma Worst-Case Analysis (WCA) algorithms in its EDA tools, enabling them to scale to high-sigma levels like 5, 6 or even 7 sigma using only a small number of simulation runs. The typical runtime for a 6.5-sigma SRAM bit cell analysis is under 2 minutes, using just one CPU.

The MunEDA high-sigma methods actually build models that are then used by Machine Learning (ML), and they scale nicely to handle large circuits, up to 110,000 mismatch parameters in a memory read-path analysis.

Cases where you should still run brute-force Monte Carlo analysis were also presented: non-linearity, the number of variables, the complexity of the test bench, and low-sigma targets. Results from customer examples were shared, all of which used high-sigma analysis.

Summary

If you’ve ever wondered how an EDA vendor like MunEDA achieves its results for high-sigma analysis, then this webinar is a must-see. It covers the history of various analysis methods and how MunEDA chose its worst-case point method. Real numbers are shared, so you know just how fast their tools operate.

