Noise, The Need for Speed, and Machine Learning

by Riko Radojcic on 05-08-2017 at 7:00 am

Technology trends are making electronic noise a primary constraint for many mainstream products, driving the need for “Design-for-Noise” practices. Scaling, with its associated reduction in device operating voltage and current, magnifies the relative importance of non-scalable phenomena such as noise. What used to be a second-order variable, important only for specialty applications, has become a primary concern affecting most mainstream designs. Flicker Noise and Random Telegraph Noise, for example, are major components of the overall variability budget in modern CMOS devices, and ultimately have a direct impact on SRAM bit cell yield, DAC/ADC least-significant-bit resolution, clock jitter, and so on. The background of the noise phenomena and the relevant technology trends is outlined in a recent Design-for-Noise paper.

In the past, the industry has tended to evolve sets of methodologies and practices to manage new phenomena in technology, such as the Design-for-Manufacturability (DfM) practices for sub-wavelength lithography, or the Design-for-Reliability (DfR) practices for managing various intrinsic failure mechanisms (electromigration, hot carriers, TDDB, NBTI, etc.). Hence “Design-for-Noise” (DfN) is the term used to describe the new practices necessary to manage the effect of electronic noise in advanced technologies.
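The non-scaling nature of flicker noise can be seen from its textbook 1/f power spectral density. The coefficient below is purely hypothetical (not a foundry value); the point is that the integrated noise power is the same in every frequency decade, so it does not shrink away as voltages and geometries do.

```python
import numpy as np

# Textbook 1/f (flicker) noise model: S(f) = K / f, with K a hypothetical
# device-dependent coefficient. Integrating S(f) over any frequency decade
# gives the same power, which is why flicker noise survives scaling while
# signal levels keep dropping.
K = 1e-12  # hypothetical flicker-noise coefficient (V^2)

def band_power(f_lo, f_hi, k=K):
    """Integrated 1/f noise power between f_lo and f_hi (analytic integral of k/f)."""
    return k * np.log(f_hi / f_lo)

decade_low = band_power(1.0, 10.0)       # 1 Hz .. 10 Hz
decade_high = band_power(100.0, 1000.0)  # 100 Hz .. 1 kHz
print(decade_low, decade_high)           # equal power per decade
```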
The DfN methodology focuses on implementing new solutions in the test and characterization arena, which can then be used to ensure that the design is robust and the manufacturing process is stable with respect to noise phenomena. This approach is analogous to DfM, which culminated in the implementation of OPC and RET practices at the process end of the product realization flow, rather than in direct changes to the design methodologies per se. That is, rather than developing radical new designs and associated methodologies, a practical approach for managing noise in advanced ICs is to enhance the well-established SPICE-based simulation methodologies with better noise characterization, better noise models, and better noise process control practices.

Thus, instrumentation and methodologies that enable direct (rather than inferred) measurement of noise are essential for realistic and practical DfN. The measurement technology must be accurate enough to resolve minute noise signals (on the order of fA), fast enough to generate the statistically valid data needed for corner and/or statistical noise models (on the order of seconds per bias point), and simple enough to be incorporated in standard WAT tests; all within the usual lab and test floor throughput and cost constraints. Such measurement technology then enables accurate SPICE simulation of noise and ensures process control and consistency, both of which are necessary to define and manage suitable margins for noise phenomena, all within reasonable economic constraints.
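As a rough sketch of what direct noise characterization involves (with arbitrary, hypothetical numbers, and not PDA's method), a noise power spectral density is commonly estimated by averaging periodograms over many record segments; this is exactly where measurement time and spectral accuracy trade off against each other.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical measurement record: white noise sampled at fs. In a real
# setup this would be the digitized device noise current or voltage.
fs = 1000.0                      # sample rate, Hz (made-up value)
sigma = 1.0                      # noise rms, arbitrary units
x = sigma * rng.standard_normal(2**16)

def averaged_psd(x, fs, nseg=1024):
    """One-sided PSD estimate by averaging periodograms of nseg-long segments."""
    segs = x[: len(x) // nseg * nseg].reshape(-1, nseg)
    spec = np.abs(np.fft.rfft(segs, axis=1)) ** 2
    psd = spec.mean(axis=0) * 2.0 / (fs * nseg)   # one-sided density scaling
    freqs = np.fft.rfftfreq(nseg, d=1.0 / fs)
    return freqs, psd

freqs, psd = averaged_psd(x, fs)
# For white noise, the one-sided PSD should sit near 2*sigma^2/fs; more
# segments averaged means less scatter, but a longer measurement.
```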

These seemingly conflicting constraints – accurate AND fast, economical AND addressing complex noise characterization – can be met using state-of-the-art measurement hardware integrated with advanced Artificial Intelligence software.

Test hardware capable of resolving fA-level noise signals normally requires long measurement times, necessary to ensure that all transients have settled and to enable averaging of the signal over many thousands of data points. Hence noise characterization requiring minutes per bias point per DUT is not unusual. Such test times have traditionally restricted direct noise characterization to the engineering device characterization lab, performed only occasionally, early in the technology's life, as part of extracting and calibrating device models. Note that a corollary of this characterization approach, dictated by prudent engineering practices, is to bias the noise models toward the very conservative side, to ensure adequate margin for any process variability.
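The time cost of averaging follows directly from statistics: the standard error of an averaged reading falls only as 1/sqrt(N), so resolving a fA-scale signal buried in much larger instrument noise takes enormous sample counts. A minimal sketch with made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical numbers: a 5 fA signal buried in 500 fA-rms instrument noise.
signal_fa, noise_rms_fa = 5.0, 500.0

def averaged_estimate(n_samples):
    """Average n_samples noisy readings of the tiny signal."""
    samples = signal_fa + noise_rms_fa * rng.standard_normal(n_samples)
    return samples.mean()

# The standard error of the mean is noise_rms / sqrt(N): resolving 5 fA to
# ~1 fA needs N ~ (500/1)^2 = 250,000 samples -- hence minutes per bias point.
for n in (100, 10_000, 1_000_000):
    print(n, averaged_estimate(n))
```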

However, the use of advanced machine learning algorithms, with suitable training procedures, in conjunction with state-of-the-art hardware, enables a drastic acceleration of data acquisition without compromising accuracy. This can result in up to an order of magnitude reduction in the overall noise test time. Such acceleration enables not only noise characterization over the statistically valid sample sizes necessary for extraction of corner models, but also the implementation of direct noise measurement in a production process control environment. Note that since noise is not correlated with any of the standard process control metrics (IDsat, IDlin, Vth…), direct measurement is the only way of tracking the impact of process variability and process optimization on actual device and circuit noise.
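One way such acceleration can work, shown here only as a toy illustration with synthetic data (not PDA's proprietary algorithm), is to train a regressor to predict the fully settled measurement from just the first fraction of the transient, so the instrument does not have to wait for full settling on every device.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model of a measurement transient: exponential settling toward the
# true value plus additive noise. All constants are invented for this sketch.
def transient(settled, n=1000):
    t = np.arange(n)
    return settled * (1.0 - np.exp(-t / 200.0)) + 0.02 * rng.standard_normal(n)

# Training set: simulated devices with known settled values, but keeping
# only the first 10% of each record as the input features.
levels = rng.uniform(0.5, 2.0, size=200)
X = np.array([transient(v)[:100] for v in levels])
y = levels
w, *_ = np.linalg.lstsq(X, y, rcond=None)   # simple linear regressor

# Predict a new device's settled value from its short record alone,
# cutting the effective acquisition time ~10x in this toy setup.
true_level = 1.3
pred = transient(true_level)[:100] @ w
print(pred)
```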

Thus, the use of machine learning drastically accelerates noise test time, making noise characterization a practical, direct process control metric, even within typical volume manufacturing throughput and economic constraints. This, in turn, enables the development of statistically valid noise models that allow designers to optimize the noise margin with confidence, and process control practices that ensure consistent IC product yield and performance throughout the technology ramp. Design-for-Noise at its best!
Platform-DA, a Process-Design Integration Infrastructure company with deep knowledge of device characterization and modeling, has applied its proprietary Artificial Intelligence algorithms to develop an advanced noise characterization solution. It provides a complete and integrated noise solution, combining state-of-the-art measurement hardware, proprietary data acquisition and management algorithms, and device and noise modeling software.

The hardware includes not only the SMUs but all the necessary cabling, jigs, and even a probing solution, enabling easy integration with any test environment. The software encompasses not only the machine-learning-based data acquisition control, but also user-friendly data management and visualization tools. And the model extraction environment includes not only model extraction itself but a complete set of in-situ QA tools, resulting in accurate device models compatible with all the standard SPICE simulators.

You can read more about PDA on SemiWiki HERE. PDA will be participating in DAC 2017 in Austin, where it will be demonstrating these capabilities.