The Latest in Static Timing Analysis with Variation Modeling
by Tom Dillinger on 03-30-2016 at 12:00 pm

In many ways, static timing analysis (STA) is more of an art than a science. Methodologists are faced with addressing complex phenomena that impact circuit delay — e.g., signal crosstalk, dynamic I*R supply voltage drop, temperature inversion, device aging effects, and especially (correlated and uncorrelated) process variation between logic cells in a performance-critical path. The uncertainty in clock and data signal arrivals at a storage element at both fast and slow PVT corners necessitates judicious allocation of timing margins, for verification of both setup and hold constraints.

With the progression of process technology, the impact of (global and local) process variation has increased, requiring a more sophisticated solution than a simple margining approach. The STA methodologists needed to address how to reflect statistical variation in the arrival time propagation calculations, and how to determine a “confidence level” for arrival-to-setup/hold checks. (Smaller timing margin values would still be applicable to the other phenomena besides process variation.)

As illustrated in the figure below, the definition of a PVT corner for timing analysis was expanded to include a local, intra-die delay variation component. An on-die PVT “global mean” is defined, with a local distribution around that reference. Note that this global mean is somewhat artificial, as it represents a value around which measured local variation is added to align with the total measured process variation data.

Designing to a global “n-sigma” target at the far extremes of the process distribution would be too pessimistic, making it increasingly difficult for designers to close timing. An overall global mean + local n-sigma method is used instead. (Note in the figure that the author recommends a very high sigma still be applied for hold time checks at the fast PVT corner, due to the unforgiving nature of a hold time failure.)
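
To make the arithmetic concrete, here is a minimal sketch (in Python, with entirely hypothetical delay and sigma values, not from any foundry model) of a global mean + local n-sigma check, applying a higher sigma target to hold than to setup:

```python
# Illustrative only -- delays, sigmas, and n-sigma targets are hypothetical.

def derated_delay(global_mean_ps, local_sigma_ps, n_sigma, direction):
    """Late (max) delays get +n*sigma; early (min) delays get -n*sigma."""
    if direction == "late":
        return global_mean_ps + n_sigma * local_sigma_ps
    return global_mean_ps - n_sigma * local_sigma_ps

# Setup check at the slow corner: data path late, capture clock early.
data_late = derated_delay(500.0, 12.0, n_sigma=3.0, direction="late")
clock_early = derated_delay(100.0, 4.0, n_sigma=3.0, direction="early")

# Hold check at the fast corner: a higher sigma target, since a hold
# failure is unforgiving (per the figure above).
data_early = derated_delay(150.0, 6.0, n_sigma=6.0, direction="early")
clock_late = derated_delay(80.0, 3.0, n_sigma=6.0, direction="late")

print(f"setup: data arrives {data_late:.1f} ps vs. early clock {clock_early:.1f} ps")
print(f"hold:  data arrives {data_early:.1f} ps vs. late clock {clock_late:.1f} ps")
```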

Recently, I had the distinct pleasure of chatting with Igor Keller, Distinguished Engineer in the Silicon Signoff and Verification Group at Cadence. He and his colleagues presented a paper at this year’s Tau Workshop, which caught my eye, entitled “Importance of Modeling Non-Gaussianities in Static Timing Analysis in sub-16nm Technologies”. The Tau Workshop is the premier venue for STA methodologists and EDA tool developers to discuss how current challenges in the field are being addressed — it is definitely worth attending/tracking (link).

Igor reviewed some of the recent history of STA development, then highlighted a critical area that his team has been addressing.

First, a brief recap…

Full “statistical” STA (SSTA) was proposed over a decade ago, yet the implementation proved to be extremely complex. The delay and output slew characterization of cells as a function of loading and input signal slew — the backbone of STA — was costly. The propagation of full statistical arrival probability distributions was intricate. It required mathematical interpretation of the probability distribution of arrivals and slews at cell pins and the addition of probability distributions for cell delays, as timing analysis progressed through the network timing graph. In addition to timing signoff, physical implementation tools also need to integrate the timing engine as part of their iterative design optimizations. The adverse performance impact of full SSTA made utilization during physical design cumbersome.

An alternative method emerged as more practical, and still sufficient — Advanced On-Chip Variation (AOCV) analysis. AOCV utilizes the concept of stage depth in STA calculations, using the levelization of gates in a logic path to determine the depth number. A derate delay multiplier based upon logic path depth is applied to the local delay distribution to reflect the correlated variation of on-die circuits — the greater the number of gates in the path, the higher the assumed correlation. The derate multiplier decreases with the stage depth number. (Some AOCV approaches also include location-based derate tables, to further reflect local correlation factors when the physical extent of the path is bounded.) This methodology has gained acceptance, with STA tools providing the functionality and the foundries supplying process variation data in the form of a global mean and local derate tables.
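
A minimal sketch of the depth-based lookup is below (Python; the derate table and stage delays are invented for illustration, as real AOCV tables come from the foundry):

```python
# Hypothetical AOCV late-derate table: multiplier vs. stage depth.
# The multiplier shrinks with depth, reflecting the higher assumed
# correlation along deeper paths.
AOCV_LATE_DERATE = {1: 1.12, 2: 1.09, 4: 1.06, 8: 1.04, 16: 1.03}

def derate_for_depth(depth):
    """Use the entry for the largest tabulated depth <= the path depth."""
    keys = [k for k in sorted(AOCV_LATE_DERATE) if k <= depth]
    return AOCV_LATE_DERATE[keys[-1]] if keys else AOCV_LATE_DERATE[1]

def aocv_path_delay(stage_delays_ps):
    return derate_for_depth(len(stage_delays_ps)) * sum(stage_delays_ps)

print(aocv_path_delay([40.0] * 3))   # shallow path: larger derate applied
print(aocv_path_delay([40.0] * 12))  # deep path: smaller derate applied
```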

An enhancement to existing OCV methods has been promoted by the Liberty Technical Advisory Board (TAB), a consortium of company representatives working on standards for circuit modeling (link).

The Liberty Variation Format (LVF) introduces a local standard deviation (sigma) into the cell characterization library data, with a table format for sigma as a function of input pin slew and output load. This characterization approach gives the STA methodologist a general method to close setup/hold timing to an independently chosen “n-sigma” yield target, generating the corresponding derates.
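
Conceptually, the per-arc sigma is then read from a two-dimensional table, just like the nominal delay. Here is a sketch of that lookup with bilinear interpolation (the table values below are invented for illustration; real values come from library characterization):

```python
import bisect

# Hypothetical LVF-style sigma table (ps), indexed by input slew (ps)
# and output load (fF).
SLEWS = [10.0, 50.0, 200.0]
LOADS = [1.0, 10.0, 50.0]
SIGMA = [
    [0.8, 1.5, 3.2],   # slew = 10 ps
    [1.1, 2.0, 4.1],   # slew = 50 ps
    [1.9, 3.0, 6.5],   # slew = 200 ps
]

def sigma_lookup(slew, load):
    """Bilinear interpolation inside the table (no extrapolation)."""
    i = min(max(bisect.bisect_right(SLEWS, slew) - 1, 0), len(SLEWS) - 2)
    j = min(max(bisect.bisect_right(LOADS, load) - 1, 0), len(LOADS) - 2)
    ts = (slew - SLEWS[i]) / (SLEWS[i + 1] - SLEWS[i])
    tl = (load - LOADS[j]) / (LOADS[j + 1] - LOADS[j])
    top = SIGMA[i][j] * (1 - tl) + SIGMA[i][j + 1] * tl
    bot = SIGMA[i + 1][j] * (1 - tl) + SIGMA[i + 1][j + 1] * tl
    return top * (1 - ts) + bot * ts

print(sigma_lookup(30.0, 5.0))  # sigma for a mid-table slew/load point
```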

(Note that there is certainly process variation impacting the setup and hold constraints at the clock/data inputs of a storage element. This variation is typically incorporated with the other timing margin factors.)

Igor highlighted that the AOCV and LVF n-sigma approaches used to date have assumed a Gaussian, or normal, variation distribution, as depicted above. In advanced process nodes, the variations are distinctly non-Gaussian. Additionally, the trend to operate logic circuits at reduced VDD supply voltage for low-power applications also results in non-Gaussian delay distributions. This necessitates a new approach to representing the statistical “tail” of the arrival time distribution at a test point in the timing graph.

The Cadence team’s presentation at Tau highlighted how non-Gaussian cell distributions can be accurately and efficiently represented, and how the subsequent calculations of (non-Gaussian) delay, slew, and arrival time variations are propagated through the network graph.

The foundation for their approach begins with the same generation technique used for library cell characterization of delays and slews. Monte Carlo Spice simulations of cells (using advanced parameter sampling techniques) provide the discrete data. From this dataset, the following statistical parameters are calculated:

  • overall mean (based upon the global process mean above)
  • “shifted” mean (of the non-Gaussian data)
  • variance (aka, the statistical 2nd moment; the square of the standard deviation)
  • skewness (the statistical 3rd moment)

The calculation is extendible — the 4th moment, or kurtosis, could also be derived for the data distribution. Further, to accelerate the adoption of this approach, these values can be represented in a similar table format to the current LVF data.
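
As a sketch of how those parameters fall out of the Monte Carlo data (synthetic samples here; real characterization uses the Spice results):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for Monte Carlo Spice delays: a right-skewed
# (lognormal) sample, loosely mimicking low-VDD cell delay behavior.
delays_ps = 100.0 * rng.lognormal(mean=0.0, sigma=0.25, size=10_000)

mean = delays_ps.mean()                 # "shifted" mean of the data
var = delays_ps.var(ddof=1)             # 2nd moment; square of the std deviation
std = var ** 0.5
skew = np.mean(((delays_ps - mean) / std) ** 3)  # 3rd standardized moment
kurt = np.mean(((delays_ps - mean) / std) ** 4)  # 4th moment (the extension)

print(f"mean={mean:.2f} ps  sigma={std:.2f} ps  skew={skew:.3f}  kurtosis={kurt:.3f}")
```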

Timing graph analysis now proceeds with delay/slew calculation and the propagation of arrival times. (Although our discussion focused on forward propagation of arrival times, Igor indicated the same technique applies to backward propagation slack calculation, as well.)

The main STA network timing methods are graph-based analysis (GBA) and path-based analysis (PBA, which should always be “bounded” by a GBA calculation). These methods require algorithms for min/max/sum calculations for cell pin arrival and pin-to-pin delay arcs. The Tau paper goes into detail on these calculations, using the best representation for the non-Gaussian distribution of the shifted mean, variance, and skewness values — e.g., a log-normal or a Cauchy distribution. The key is that these calculations do not adversely impact runtime performance.

The tail of the arrival data distribution at a test point provides a statistical probability of the timing yield, represented as a “quantile” for non-normal distributions. (Three sigma for a normal distribution corresponds to the 0.99865 quantile.)
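
A quick synthetic experiment shows why the distinction matters: for a skewed delay distribution, the empirical 0.99865 quantile can sit well beyond the Gaussian mean + 3-sigma estimate (the lognormal sample below is illustrative, not characterization data):

```python
import numpy as np

rng = np.random.default_rng(1)
delays_ps = 100.0 * rng.lognormal(mean=0.0, sigma=0.35, size=200_000)

gauss_3sigma = delays_ps.mean() + 3.0 * delays_ps.std(ddof=1)
quantile = np.quantile(delays_ps, 0.99865)  # the 3-sigma-equivalent yield point

print(f"Gaussian mean + 3 sigma: {gauss_3sigma:.1f} ps")
print(f"0.99865 quantile:        {quantile:.1f} ps")  # larger, for a skewed tail
```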

Igor provided examples of the distinctly non-Gaussian cell delay values, including circuits operating at low VDD at advanced nodes. The figures below highlight the fact that the “0.99865 quantile delay” is far from the (Gaussian mean + 3 sigma) calculation, especially at low VDD.

Example of the delay distribution for a high Vt inverter cell @ VDD=0.6V. Note the difference between the (Gaussian) 3-sigma delay and the non-Gaussian 0.99865 quantile delay, which reflects the same timing yield.

Delay distributions for standard Vt inverter cells. The second example uses 7nm device models, operating at an extremely low VDD. Again, note the difference between Gaussian and 0.99865 quantile delays.

The Tau paper provided comparisons between reference Monte Carlo Spice simulations of full paths and the predictions from the Gaussian and non-Gaussian distribution cell library LVF models — a few examples are excised from that paper in the figure below. The benefits of the improved non-Gaussian delay model are clear.

Cadence has integrated the non-Gaussian LVF extension support into their Tempus STA signoff tool, and into the integrated timing engine in their Innovus implementation platform. They are working with the Liberty consortium to extend the current LVF definition as a standard.

STA is evolving to provide methodologies that support accurate timing yield signoff, in the face of increasing variation, while maintaining efficiency of library generation and delay calculation/propagation. That said, there are plenty of challenges ahead. Igor provided additional insights,

“We are working on several facets of STA — improved modeling of crosstalk, better support for multiple-input switching effects, better inclusion of aging models.”

Look for compelling advances in timing yield analysis in the future. For more information on Cadence Tempus, please follow this link.

-chipguy


Reflections on a Trade Show, and a Turning Point for Silicon
by Alex Lidow on 03-30-2016 at 7:00 am

This past week over 5,000 people converged on the Applied Power Electronics Conference (APEC) in Long Beach, California to understand the state of the art and the future of the electronics that powers things such as servers, electric cars, white goods, factories, medical implants, and drones. The conference, which is the premier event in applied power electronics, had technical papers as well as a conference hall full of exhibits related to power electronics.

I have been to every APEC show since its inception in 1986. This one was different.

On Display at the Conference
For 60 years, well before the APEC conference was conceived, electronics trade shows have demonstrated the latest and greatest advances in silicon devices as well as the systems and products built upon this excellent semiconductor. At APEC 2016 there was a groundswell of products, papers, demonstrations, and an obvious general enthusiasm for devices based on a relatively new semiconductor – gallium nitride (GaN). GaN devices were exhibited by Panasonic, Infineon, Texas Instruments, GaN Systems, Transphorm, and Efficient Power Conversion (EPC).

There were drones from Solace Power that can be recharged in mid-air, drones with LiDAR systems mapping the conference hall in real time, a dozen wireless charging systems from companies such as Semtech, Neosen, Gill Electronics, WiTricity, and Solace Power. A satellite from Planetary Resources landed at the EPC booth – satellite designers love GaN transistors and integrated circuits because they are tiny, efficient, and very resistant to the radiation that can damage silicon devices in space.


Figure 1: Phoenix Aerial Systems had a drone on display that had a working LiDAR system mapping the conference room in real time. LiDAR systems use GaN transistors because they are more than 10 times faster than silicon, thus giving greater image resolution.


Figure 2: GaN devices are extremely small and are used in many medical devices such as implantable pain scintillators, implantable heart pumps, and prosthetics.


Figure 3: Gill Electronics had this automotive center console on display. Embedded in the console is an AirFuel wireless charging system that can charge multiple devices placed in the recessed section on top.


Figure 4: Planetary Resources had a model of their Arkyd 200 satellite that is designed to recover valuable minerals from near-earth asteroids. In addition to being very resistant to radiation, GaN devices are used to reduce size and weight as well as to improve the efficiency of the solar panels.

More Stuff at the Conference
There was the Little Box Challenge winner at the GaN Systems booth (the winners, a Belgian company named CE+T, took home a $1,000,000 prize from Google). Panasonic showed a very tiny GaN-based 45 W AC adapter. Texas Instruments had a DC-DC converter that converts 48 V to 1 V with astonishing efficiency thanks to GaN (this is the single-stage, energy-saving power conversion solution the server industry has been demanding for years!).


Figure 5: Semtech’s chip set, as well as EPC’s GaN FETs, was used in Neosen’s tri-mode wireless charger. This device can charge devices using any of the three popular standards: Qi, PMA, or AirFuel.


Figure 6: WiTricity displayed their notebook computer charging pad that uses EPC’s GaN FETs. Soon entire desktops will be wireless charging platforms.

Envelope tracking systems for efficient 4G/LTE and 5G wireless base stations were present, as were X-ray machines that fit into an ingestible pill (think colonoscopy), thanks to the miniaturization possible with GaN technology.


Figure 7: Check Cap has developed an X-ray machine that fits into an ingestible pill. This incredible device can do a colonoscopy without pre-purging or an invasive medical procedure. As the pill passes through the patient’s system, a 3-D image of the patient’s colon is sent to a wireless receiver worn as a patch during the test. Approval in Europe is expected this year.

Technical Papers and Discussions
GaN was not only in evidence on the conference room floor, it was also in many of the technical papers. There were 106 technical papers and presentations that referenced GaN in one way or another. GaN was the talk of the show by far.

EPC has been touting GaN for six years, since its start of production in early 2010. At that time GaN FETs offered 5-10 times the performance of the best silicon transistors, and EPC claimed that by 2015 GaN would not only continue to increase in performance, it would also be less expensive to produce than silicon devices that can handle the same amount of power. That timetable was met, so this year at APEC power systems designers were confronted, for the first time, with a new material that could outperform silicon at a lower unit cost and is available off the shelf. Also, after six years in production, EPC has demonstrated field reliability as good as that of silicon. These two facts contributed significantly to a change in mood among the power design engineers compared with past years!

Moore’s Law – Passing the Baton to Achieve the Promise
GaN is the logical successor to silicon for power conversion and analog devices, and possibly for digital components as well. GaN is opening new markets, as shown by the multitude of products on display at APEC. GaN technology enables applications such as wireless charging, higher resolution MRI imaging, micro satellites, high resolution and low cost LiDAR, and higher bandwidth wireless communications.

The recent sluggishness of the end markets for semiconductors is partly a by-product of the end of Moore’s Law: silicon cannot keep pace with the need to double performance while lowering cost. But don’t fret; GaN is on track to re-establish that amazing “go-go period” when consumers could count on marvelous new products and applications that, year after year, delivered higher performance at a constantly falling price. Moore’s Law is not dead; it has a new beginning with a new technology, GaN, taking up the baton.


Who will provide the data center SoC of the future, Intel or Qualcomm?
by Eric Esteve on 03-29-2016 at 4:00 pm

Intel has been incredibly successful designing high performance server SoCs to address the data center market segment, and the chance of seeing the company lose large market share is pretty low, at least in the short term. Now, if we look at the really long term, 2030 or even 2040, as the Semiconductor Industry Association (SIA) did in a recent report (“Rebooting the IT Revolution: A Call to Action”) launched in September 2015, we realize that the current way of designing chips will have to change drastically. Designing SoCs for performance only, even on the most advanced technology nodes, even by moving down a node whenever possible, will simply not be sustainable.

If you don’t trust me, just take a look at the diagram below: the total energy of computing (Benchmark curve) would pass the world’s energy production by 2037 if the ways we design computing systems don’t change.

First, it should be said that the authors did not limit their investigation to different SoC design approaches; they also evaluated yet-to-come non-silicon devices, the impact of 3-D design, and near-threshold operation, just to name a few. As far as I am concerned, I propose to investigate within a field that I know: silicon devices, SoC design techniques, and Si fabrication technology.

During the last 15 years, we have seen two types of chip makers, both very successful at developing SoCs for two completely different markets. One group, led by Intel and Cisco, develops SoCs targeting data centers or networking, always chasing higher performance (computing power or bandwidth capacity) whatever the power consumption, as long as that power doesn’t prevent the chip from running normally.

The other group, led by Qualcomm and Apple, develops application processor SoCs for battery-powered mobile systems. This group has learned how to provide the highest CPU, GPU, or DSP performance while keeping power consumption as low as possible, using design techniques like clock gating and power islands at the chip level, and power management units (PMUs) at the system level. We should not forget their technology partners: the foundries (TSMC, Samsung, GlobalFoundries), which have systematically developed low-power technology options, and the IP vendors providing low-power versions of the foundation IP.

It’s interesting to notice that Intel’s tentative to penetrate the mobile segment have been frequent but never successful. Is it due to a kind of “company culture” focused on pure performance, preventing to support the right technology option (low power), or to the designers themselves, reluctant to adopt design techniques radically different from what has been used for decades to create successful CPU SoC?

Whatever the reasons (probably a mix of company culture and short-term marketing: Intel holds 99% of the data center segment according to Bloomberg, so why change now?), data manipulation, computing, and networking are growing exponentially, in every category. The diagram below, extracted from the SIA report (and very similar to forecasts published by Cisco that you can easily find on the web), clearly shows that data growth is exponential. If you look at the top three contributors, Multimedia, Consumer IoT, and Industrial IoT, the industry consensus is that growth will continue; in fact, for IoT and IIoT we are just seeing the beginning of a much larger deployment! If you consider that a large part of the world is not yet involved, but strongly desires to participate in the data feast, this will only reinforce the exponential growth trend. If no action is taken in the mid term, the computing industry will face a real issue by 2035-2040…

As of today, a data center is a building full of server racks, which need to be cooled by an expensive air conditioning system. The electricity bill is high, and more than 50% of that electricity is used by the cooling system itself. Now, if you look at the server chips, they need an efficient package with respect to power dissipation, plus an additional heat sink. In other words, at every step you pay a price penalty due to the high power dissipated by the chip.

If we want the companies managing data centers (Google, Amazon, etc.) to change radically to a power-conscious architecture, don’t expect them to make this change out of altruism; the proposed solution should provide a lower Cost of Ownership (CoO). This means that the overall cost should be lower at the end of the year. Could we define a server architecture providing equivalent performance (MIPS, latency, bandwidth) but with much lower power dissipation, leading to a drastically lower electricity bill? I don’t know, but this could be a research track to be explored immediately; I mean searching for a solution that could be implemented in the next 3 to 5 years, instead of waiting for the emergence of a magic material to replace silicon (which may yet arrive). If you look forward, 2037 is not so far away from now. It’s as close as 1995…

Eric Esteve from IPNEST


Andy Grove’s Less Remembered Intel

Andy Grove’s Less Remembered Intel
by Sumit Sharma on 03-29-2016 at 12:00 pm

The following paragraphs present another one of those articles that I wrote for a Cyber Media publication, probably in the year 2000. It’s been almost fifteen years since then. When I read Sunit Rikhi’s glowing tribute to Andy Grove, a few grey cells stirred in my brain and I recalled that I had written something about the Intel that Andy Grove had shaped. Luckily I managed to recover it from my archives. Intel is known as a tech giant. This article, which I had christened “No Sympathies for the Underdogs,” presents a different facet of Intel that not many have talked about. Enjoy for sheer nostalgia…

Ever wondered why people support the underdogs? After all, shouldn’t the guys who have made it to the top after a lot of hard and smart work always be the favorites? Why is it that people often tend to develop a soft corner for the closest rivals of such giants? Why do so many people hope in their heart of hearts that an AMD would displace an Intel or put it in its place? Or that someone would give Microsoft the shivers!

The examples of Intel and AMD make interesting case studies. The manner in which Intel has gone about monopolizing the processor market (almost) is commendable. It has not allowed itself to be restricted as a pure technology company. In fact it has paid as much attention to marketing and promotion as it has to technology. It kept its ears to the ground and was quick in responding to market feelers. It might have made mistakes from time to time, and had its share of problems, but the speed with which it has responded as well as proactively defined the market is amazing.

Not too long back, a Cyrix and a K6 seemed to be vying with each other to put Intel’s offerings in their place. People welcomed these moves and hoped that finally they would have a fair choice. The more enthusiastic ones predicted that Intel’s days were over. When one of Intel’s key guys switched over to the competition, ‘pundits’ proclaimed that Intel’s fate was sealed.

Despite such sentiments for the underdogs, competition has not been able to break Intel’s stranglehold on the market. And here the reasons may be related more to marketing than to technology.

For one, Intel was quick to bring out a cheaper product itself in the form of Celeron. When Celeron didn’t quite sweep people off their feet, it was quick to make amends and improve the product. Additionally, its marketing arm burnt the midnight oil to make sure that AMD and Cyrix got compared with Celeron and not with Pentium. It thus created a niche for Pentium, which remained unrivaled.

From an Indian context, Intel’s presence in this market also played a key role. The competing organizations seemed to have missed the importance of the Indian market. And that made a significant difference. Nor were these organizations imaginative enough to do a number on Intel without being present in India.

In stark contrast, Intel masterminded a strategy to make deep inroads into the fastest growing ‘assembler’ segment. By starting what it termed the GID movement (Genuine Intel Dealer), it sought to bring respectability to a hitherto ostracized lot. The results were amazing.

Since then, Intel has never looked back. And its competition has never seemed sure of itself. To put the last nail in its rivals’ coffins (pardon the phrase, for in this business, you never know when the dead will rise again), Intel did something that a component maker is hardly known to do. It addressed the end user market with such vehemence that computers almost got synonymous with Pentium.

Many scoffed at Intel’s initial attempt to woo end users. But Intel persisted with its campaigns. In addition to advertising in the print media, it also made its presence felt on TV and featured in popular programs. It found novel ways to attract families to fairs organized by it. (It had recognized the pulse of the Indian parents who would do everything in their capacity for their children’s education.) And to make sure that it made an early impact on the young minds, it had something going on for students.

So complete and devastating was its marketing act that competition seemed nowhere in sight. Even its worst detractors, I am sure, could not have helped admiring the manner in which Intel systematically demolished competition in the price-conscious Indian market.

Being good is not enough. You must also be known to be good! Not only did Intel have a good product, it made sure the whole world knew. (Or at least in the Indian context, all of India knew.)

At the end of the day, people care only for what they get. And even if they harbor a soft corner for the underdogs, all that they will give them is their sympathies.


Automotive Artificial Intelligence (AI) Insights from Patents
by Alex G. Lee on 03-29-2016 at 7:00 am

US9254824 illustrates an adaptive anti-collision system for providing timely alert information by analyzing the driving pattern of the driver using a neural network. A neural network utilizes massively connected artificial neurons to mimic the capability of a biological neural network, so as to acquire information from the external environment. In essence, a neural network is an attempt to simulate the human brain.

The adaptive anti-collision system uses the neural network to determine the driving pattern, corresponding to a vehicle speed, a safe distance, and a braking distance, based on the vehicle speed or acceleration, road condition, and the driver’s driving behavior. Then, the adaptive anti-collision system dynamically adjusts the control parameters of the vehicle control unit (e.g., safe distance) according to the driving pattern, to issue an alert or activate a braking action.
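
As a toy illustration of the idea only (not the patent’s actual network, features, or trained weights), a small feedforward net could map driving inputs to a safe-distance parameter:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network: inputs are (speed, road condition, braking style),
# output is a safe-distance scale factor. The weights here are random
# stand-ins; the patented system would learn them from observed driving.
W1, b1 = rng.normal(size=(3, 8)) * 0.5, np.zeros(8)
W2, b2 = rng.normal(size=8) * 0.5, 0.0

def safe_distance_factor(x):
    h = np.tanh(x @ W1 + b1)       # hidden layer
    return 1.0 + abs(h @ W2 + b2)  # keep the scale factor positive

x = np.array([0.8, 0.3, 0.6])  # normalized speed, wet road, moderate braking
print(safe_distance_factor(x))
```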

US20140108307 illustrates a system for providing personalized, context-based suggestions to a driver using machine learning. For instance, the system obtains contextual information indicating that a contact of the driver is only a few minutes in driving time from the driver’s route. Further, based on social communications and social media information stored in the driver profile, the system also determines that the contact is a friend of the driver. Based on that determination, the system informs the driver: “Your friend Peter is a few minutes away from your route; would you like to meet?”

US20150302718 illustrates a system for correlating physiological signals associated with a driver with vehicle-related events using machine learning. Physiological signals include sensed and monitored data regarding heart rate, oxygen use, eye motion, galvanic skin response, blood flow, pupil dilation, and facial expression. Vehicle-related events include traffic, weather, visibility, road conditions, accidents, traffic alerts, and distance from other vehicles. Vehicle-related events can be determined and communicated through external sources (e.g., cloud data, inter-vehicle communication) as well as the vehicle’s controller-area network.

Based on the vehicle event data and physiological data, a driver state is then determined by correlating the vehicle event data with the physiological data associated with the driver using machine learning. The driver state includes a level of driver stress, a level of driver drowsiness, a level of fear of the driver, a state correlated to the event of overtaking another driver, and a state correlated to the event of being overtaken by another driver. The system provides a suggested action based on the state of the driver. For example, the system determines that the driver becomes angry or nervous when in heavy traffic. The system then suggests to the driver that the channel on the audio system be changed to provide relatively soothing music.

US8190319 illustrates an adaptive real-time driver advisory control system for a hybrid electric vehicle to achieve fuel economy improvement using fuzzy logic. A fuzzy logic-based adaptive algorithm with a learning capability can estimate a driver’s long-term driving preferences. The advisory control system uses a set of rules with fuzzy predicates and an approximate reasoning method to summarize a strategy that accounts for instantaneous fuel consumption, vehicle speed, vehicle acceleration, and the driver’s torque request, in order to determine the upper bound of the torque request that accounts for maximum fuel efficiency and drivability. The advisory control system then provides feedback to the driver such that the fuel economy of the vehicle can be improved in a real-world driving environment.
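
To give a minimal flavor of the fuzzy-rule style of reasoning (triangular memberships and invented rule breakpoints, purely illustrative of the technique, not the patent’s rule base):

```python
def tri(x, a, b, c):
    """Triangular membership: rises from a, peaks at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def torque_upper_bound(speed_kph, accel_ms2):
    # Fuzzy predicates (all breakpoints invented for illustration).
    gentle = tri(accel_ms2, -1.0, 0.5, 2.0)
    hard = tri(accel_ms2, 1.0, 3.0, 5.0)
    cruising = tri(speed_kph, 60, 100, 140)

    # Rules: gentle driving while cruising -> tighter (economy) bound;
    # hard acceleration -> relax the bound for drivability.
    w_eco = min(gentle, cruising)
    w_drv = hard

    # Weighted average of the rule consequents (% of maximum torque).
    total = (w_eco + w_drv) or 1.0
    return (w_eco * 60.0 + w_drv * 95.0) / total

print(torque_upper_bound(speed_kph=110, accel_ms2=0.8))  # economy-leaning bound
```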


Yelling fire in a crowded chip factory
by Don Dingee on 03-28-2016 at 4:00 pm

Semiconductor market forecasts for 2016 are all over the place. Jim Handy and Tom Starnes floated a report in January looking for 10% growth. Jim Feldhan at Semico turned outright negative at -0.3% just a couple weeks ago. Tossing out the high and low scores, analysts tracked by GSA range from 0.3% to 7.0% in March updates. What’s going on here?

I’m not an analyst, I’m a product marketer. My job for the last 25 years has been to watch trends, gather facts and opinions, and sort out where and how to place the bets that best utilize engineering, manufacturing, and sales resources. Whenever someone comes up with a forecast number, I always ask how they got there.

Reading through the latest reports and opinions, a few things jump out. Semico’s opinion is based on their Inflection Point Indicator, a leading model with four quarters of advance visibility. It’s hard to say what is in Feldhan’s recipe exactly, although he gives hints – in his view, GDP growth in major economies other than India is slowing, DRAM is weakening in both demand and ASP, and the big application segments of PC and mobile are in non-regenerative braking modes.

Bill Jewell of Semiconductor Intelligence has already shared his latest opinion on SemiWiki, and he’s settling in at around 3%. He points to an electronics slowdown in China (from incredible 14% levels to a more reasonable 10%) while showing a jump in the US to 6.5%. Jewell cites near-term reduced global revenue guidance from most semiconductor firms, but says something very interesting and potentially messy in his text:

“We are assuming a decline of 5% in 1Q 2016, healthy quarter-to-quarter growth in 2Q and 3Q 2016, and a mild seasonal decline in 4Q 2016.”

Meanwhile, the low end comes from the SIA itself, dialing down its number to 0.3%. Their formula is pretty simple – the $45B DRAM segment sheds 7.9%, while most other segments grow, including sensors at 3.6%, microprocessors and MCUs at 3.6%, and logic including ASICs and FPGAs at 3.5%.

Then there is the Objective Analysis high end. They use a cap ex analysis as a major component of their model. They cite the IoT as (still) being 5 years away, and an overreliance on China for growth. However, their model factors in DRAM and NAND flash capacity, and they suggest two things are happening. First, there is a switchover from DRAM to NAND (something we pointed out in our Samsung chapter in “Mobile Unleashed” – their fab capacity is largely interchangeable). Second, while ASPs are flattening, bit capacity is growing, 20% for DRAM and 35% for NAND flash, which translates to an estimated 14% revenue growth in the memory segment.
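
The arithmetic behind that last step is multiplicative: revenue growth is (1 + bit growth) × (1 + per-bit ASP change) − 1. A quick sketch, where the bit-growth figures come from the article but the per-bit ASP declines are my own hypothetical inputs, chosen only to show how a ~14% number can emerge:

```python
# revenue_growth = (1 + bit_growth) * (1 + asp_per_bit_change) - 1
def revenue_growth(bit_growth, asp_change):
    return (1 + bit_growth) * (1 + asp_change) - 1

# Bit growth per the article; per-bit ASP declines are hypothetical.
dram = revenue_growth(0.20, -0.05)   # 1.20 * 0.95 - 1 = +14.0%
nand = revenue_growth(0.35, -0.15)   # 1.35 * 0.85 - 1, about +14.7%

print(f"DRAM revenue growth: {dram:+.1%}")
print(f"NAND revenue growth: {nand:+.1%}")
```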

My conclusion from all that is, unlike the PC days when a microprocessor carried everything else in mass quantities along for the ride, there is no such thing as the semiconductor market anymore. Component categories are not moving in lockstep, and there is no clear trend in geographic markets. One really has to break this down by application segment to understand what could be happening. That’s one of the problems with IoT forecasting – it isn’t really a segment, but rather a collection of technologies and a hodgepodge of use cases that make it hard to say with certainty what happens five years out.

Of course, all the analysts claim to be accurate within a statistical margin. I’d add one factor I didn’t see anyone talk about – a pronounced shift from merchant business to custom or semicustom business, which is a lot harder for analysts to get their arms around. Cap ex may also be a bit misleading outside of the memory segment for this next phase, because much of the IoT activity is going to be on mature processes already in place.

OK, so I’ll put my money where my mouth is. I don’t have any sophisticated model here that would produce a decimal place of accuracy. My best guess at the semiconductor “market” would be 2% growth for 2016, however if one were to remove DRAM, that number would be more like 4%. The mere fact DRAM is dragging the market not due to overcapacity issues says a lot.

Which pieces of these methodologies pass the sniff test for you? Should we stop calling this a market and do what the SIA is suggesting, analyzing growth rates by component segment? If it isn’t the IoT, what will trigger semiconductors to outperform GDP growth rates again – or is that not going to happen anytime soon? Thoughts welcome.

References for this post:
2016 Forecasts – Global Semiconductor Alliance
2016 Semiconductor Sales Go Negative – Semico Research
2015 semiconductor market flat, 2016 looking somewhat better – Semiconductor Intelligence
Semi Market Breakdown and 2016 Forecasts – EETimes
2015 Reflections and 2016 Outlook – Objective Analysis


Bridging Design Environments for Advanced Multi-Die Package Verification
by Tom Dillinger on 03-28-2016 at 12:00 pm

This year is shaping up to be an inflection point, when multi-die packaging technology will experience tremendous market growth. Advanced 2.5D/3D package offerings have been available for several years, utilizing a variety of technologies for the package substrate, for the interposer material embedding die micro-bump fan-out redistribution and interconnect metals, and (for 3D stacks) for fabricating vertical vias through intermediate package/die strata. Some recent examples include the Xilinx UltraScale product family (TSMC’s CoWoS technology) and AMD’s Radeon R9 integration of a GPU with stacked High Bandwidth Memory (HBM) die.

This year, the market growth will come from packaging technology enhancements directed at more cost-sensitive (read: mobile) applications. Wafer-level chip-scale packaging (fan-in WLCSP) has been extended to fan-out packages, and soon, fan-out multi-die solutions, as exemplified by TSMC’s recent InFO-PoP announcement.

Yet, the design environments for die and package implementation remain separate — i.e., distinct tools for chip vs. package physical design, distinct rulesets for DFM, distinct project databases and manufacturing data formats (e.g., GDS-II, Gerber). A unique technology is required to bridge these different domains, and provide an integrated design verification solution.

Recently, I had the opportunity to chat with John Park and John Ferguson at Mentor Graphics, about their approach to advanced packaging design enablement, and specifically, their participation with TSMC as a constituent of the “reference flow” for InFO-PoP. It was a most enlightening discussion.

John P. emphasized the complexity of dealing with the chip and package implementation domains. He said, “For a designer coming from the chip world, the biggest technology difference for advanced packaging is the routing environment for fan-out and signal interconnects. These traces utilize all-angle geometries, circular vias, and unique teardrop and taper contours.”

He highlighted another complexity, stating, “For the aggressive fan-out technologies like InFO, there are intricate manufacturability rules for copper meshing and voids, to provide suitable mechanical stress relief to minimize warpage, and to alleviate copper pour outgassing issues.”

John P. went into additional detail on how Mentor has extended their leading Calibre product capabilities to support advanced packaging technologies. “The key is the geometric data processing engines integrated into Calibre 3DSTACK, which were required by the WLP technology design kit from TSMC. Their design rules make extensive use of the equation-based DRC support in Calibre — similar to the complex rules in photonics technology design kits. And, Calibre supports multiple designs in a single project, a requirement for these packages.”

He continued, “These features enabled TSMC to use GDS-II as the InFO data representation, and to keep the familiar Calibre sign-off flow for manufacturing release already used by customers. We also enhanced the GDS-II rendering support in our Xpedition product.”

John F. added, “There’s a subtlety that we have to manage, as well — as there are separate sources for die and package data, there may be overloaded uses of manufacturing layer info. The flow ensures that there are no conflicts in layer references.”

The flow for advanced multi-die package verification is appended below.

The initial step is to utilize the features of Xpedition Package Integrator (XPI in the diagram), which focuses on constructing the multi-die project connectivity model from the various, EDA-neutral, data formats. (An earlier SemiWiki article described some of the features of XPI here.)

John F. added, “The Calibre 3DSTACK capabilities for multi-die package verification are definitely not limited to Xpedition users; other environments are certainly supported (e.g., Cadence Allegro Package Designer, Zuken). For Xpedition users, there is the added benefit of an available WLP design kit utilizing Hyperlynx DRC, which enables designers to remain in the (Windows O/S) tool environment, to iterate more quickly.” (as depicted in the lower right-hand corner of the flow diagram)

“Also, debug results from the (Linux-based) Calibre sign-off flow are directly integrated in Xpedition, with cross-probing between Calibre result and Xpedition geometry.” (illustrated in the figure below)

Our discussion concluded with the all-important reminder that these advanced packaging solutions require detailed thermal/mechanical stress analysis, another area where Mentor’s support excels.

The rapid pace of development for (low-cost, small form-factor) multi-die packaging solutions has necessitated a focus on providing reference flows for verification that support interoperability between chip and package design environments. Mentor has addressed this requirement through extensions to their Xpedition product family, and through the introduction of Calibre 3DSTACK (which does not require a new license, by the way). Design kits and reference flows are available.

It will be exciting to see how end products released later this year will leverage this advanced packaging technology.

For more info on Calibre 3DSTACK, please follow this link.

-chipguy


IC Design Optimization for Radiation Hardening
by Daniel Payne on 03-28-2016 at 7:00 am

I was born in 1957, the same year that the Soviets launched the first satellite into Earth orbit, officially starting the Space Race between two global superpowers. Today there are many countries engaged in space research, and I just read about how engineers at IEAv (Institute for Advanced Studies) in Brazil did their IC design optimization for radiation hardening. The CITAR project has multiple institutions collaborating to create ICs for satellites used in the Brazilian space program:

  • Design ICs – Centro de Tecnologia da Informacao Renato Archer
  • Radiation Tests – IEAv, USP, FEI
  • End User – INPE

In space there are cosmic rays that create trapped particles like protons, electrons, and heavy ions. These particles affect ICs in orbit in a variety of ways:

  • Vth of the P and N channel MOS transistors will shift up or down
  • The sub-threshold slope increases
  • Leakage currents increase
  • Mobility is decreased

Circuit designers need to know how these particle induced effects in satellites will change the performance of an IC over time. The Total Ionizing Dose (TID) defines the extent of radiation effects. Fortunately the researchers can create radiation models here on Earth by running radiation experiments. On this IC project the chip engineers optimized their circuits for use in a rad-hard environment by using an optimization tool called WiCkeD from EDA supplier MunEDA.

Their old design methodology was updated to include rad-hard optimization using WiCkeD as shown below:

A bandgap circuit from the XFAB reference kit was optimized in this new design flow using the XH018 process. The specifications for this circuit are:

The goal is to load both the standard model and rad-hard model into WiCkeD, then optimize the circuit to pass all corner cases.

Step one is to simulate the circuit with initial device width and length values at a nominal corner and see how the circuit performs against the specifications. They found that both minVBG and TC were violating the specification at this initial corner.

Step two is to run Deterministic Nominal Optimization (DNO) to improve the design so that it passes all specifications. After a few DNO iterations the circuit now passes the specifications:

In robustness verification they found good results over all operating conditions. The yield estimated by 200 samples of Monte-Carlo Analysis is 99.5%, and a Worst Case Analysis showed that all specifications could be achieved with a robustness of at least 2.58 sigma:

Yield Optimization was the next step and this is where they ran corners with the fresh models and corners with the rad-hard models to see what the mismatch effects were. Robustness verification of Yield Optimization (YO) showed that all specifications could be achieved with a robustness of at least 3.21 sigma:
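
For reference, those sigma figures map to parametric yield through the normal CDF; a quick single-sided check with standard-library Python:

```python
from statistics import NormalDist

def yield_from_sigma(n_sigma):
    """Single-sided parametric yield for a worst-case margin of n sigma."""
    return NormalDist().cdf(n_sigma)

for s in (2.58, 3.21):
    print(f"{s} sigma -> {yield_from_sigma(s):.4%}")
# 2.58 sigma -> ~99.51%, consistent with the 200-sample Monte Carlo estimate
# 3.21 sigma -> ~99.93%
```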

Here’s a quick comparison of device sizes after each optimization step:

The yield optimizer (YO) improved the robustness against random variation at worst-case corners, operating conditions, and radiation from 2.6 to 3.2 sigma without increasing the area, only by re-balancing the transistor geometries.

Summary

IC designers can optimize their circuits for rad-hard environments found in orbit by using EDA tools like WiCkeD from MunEDA. Engineers on this particular project took about two weeks elapsed time to optimize their circuits, taking about 18,000 simulations for the entire design flow.

IRPS Conference

On April 21st at the International Reliability Physics Symposium there’s an interesting paper from STMicroelectronics and MunEDA titled “BTI Induced Dispersion: Challenges and Opportunities for SRAM Bit Cell Optimization”. The paper presents at 1:30 PM, and here’s the abstract:

One major CMOS reliability concern for advanced nodes is the Bias Temperature Instability (BTI) mechanism. In addition to the native local process dispersion, the BTI induced dispersion is a field under intensive research. Important works [1, 2] focus on the distribution tail of the Vth shift and efforts are deployed to high-sigma accurate modeling (defect centric, Skellam). In most applications influenced by devices matching (ADC, SRAM…), it is important to understand how the initial Vth distribution evolves in time. In this paper some key results of spread induced by BTI are reviewed for 14FDSOI and 28FDSOI from STMicroelectronics. Analysis between initial Vth and aged Vth correlation is presented. Then, measurement of fresh and post HTOL memory VDDmin is presented for different conditions of temperature and process centering. Finally, an innovative algorithm of yield optimization is presented. It enables optimizing the centering and yield (through device sizing or process centering) including ageing, under constraint of footprint.

Related Blogs


IoT Workshop in Beautiful Monterey California!
by Daniel Nenni on 03-27-2016 at 8:00 pm

It is that time of year again, the EDPS Workshop at the Tides Hotel in Monterey. This year will start out with a keynote on IoT from Serge Leef, VP of New Ventures and GM of the System-level Engineering Division at Mentor Graphics. Serge started his career at Intel followed by Microchip and Silicon Graphics. He has been at Mentor for the last 26 years. So yes, this is going to be an interesting session because new ventures and system-level engineering equals IoT, absolutely. And remember this is a workshop so you get to interact with industry experts at a much more personal level:

Keynote: Convergence of silicon, Sensors, Mobility, and Cloud as Driving Forces in System Design Evolution

IoT is not an abstract concept whose existence needs to be debated. It is a reality in the evolving computational landscape. Embedded systems that once encapsulated a finite feature set in a fixed form factor (i.e., a box) are morphing into disaggregated solutions made up of loosely connected edge nodes talking to gateways, which are in turn linked to the cloud. Cloud APIs, exposed to web-based and mobile apps, unleash the creativity of huge communities of developers who can turn once-static devices into machines with open-ended functionality bounded only by imagination.

IoT is merely an implementation detail that is not “front and center” when the value of end-to-end vertical applications is pitched to the VCs these days. IoT works behind the scenes to enable seamless and readily observable value to be delivered to a consumer. Smart garage door openers and door locks, weather and moisture driven sprinklers, real time patient monitoring and diagnostics, indoor climate and humidity controls are all enabled by the advancements in several technologies that are maturing all at once. This is creating a gold rush as people with ingenious ideas leverage their domain expertise to create uniquely valuable devices and services enabled by rapid advances in sensors, mobility, connectivity, cloud, standard communication protocols and predictive data analytics.

For the engineers the new computational topology presents fascinating challenges. Where in this world do you place the algorithms? An accepted approach is to assign the most demanding processing that requires limited data movement to the cloud, where computational and storage resources are essentially unlimited. Additional processing can be directed at “fog computing” on the gateways which are typically very capable computers containing quad-core processors and can benefit from physical proximity to the edge nodes and low data communications costs. Lastly, some computation can be done on the edge nodes (“mist computing”) where sensors and actuators coexist with low-power CPUs and memories in small, battery powered devices.

The IoT world presents fascinating new opportunities in software. In addition to the much-discussed and increasingly well-understood topic of big data analytics, another area of deriving insight from newly available data is emerging: sensor fusion. As dozens of sensors deliver volumes of readings in real time, an opportunity exists to collect, organize, correlate, and process the incoming data, turning it into information which can then be translated into knowledge. Doing something with this knowledge and turning it into wisdom is a big domain-specific challenge and opportunity. Many consider autonomous driving to be the perfect application domain for experimenting with sensor fusion. Other obvious sensor fusion application areas are medical diagnostics, industrial controls, energy management, etc.
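
For a concrete taste of sensor fusion at the edge, a classic complementary filter fuses a gyroscope (accurate short-term, drifts long-term) with an accelerometer (noisy short-term, stable long-term) to estimate tilt. The coefficients and sample data below are illustrative only:

```python
import math

def complementary_filter(samples, dt=0.01, alpha=0.98):
    """Fuse gyro rate (deg/s) with accelerometer tilt (ax, az in g)."""
    angle = 0.0
    for gyro_rate, ax, az in samples:
        accel_angle = math.degrees(math.atan2(ax, az))  # long-term reference
        # Trust the integrated gyro short-term, the accelerometer long-term.
        angle = alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle
    return angle

# Illustrative stream: a slowly tilting device, 100 samples at 100 Hz.
samples = [(5.0, 0.05 * i, 1.0) for i in range(100)]
print(f"estimated tilt: {complementary_filter(samples):.1f} degrees")
```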

An aspect not to be overlooked in all this excitement is security. With forecasts of 20B – 50B connected devices by 2020, it’s easy to see that security as a concern will soon come to the foreground. A huge number of internet-connected devices with highly variable levels of security sophistication will create a massive attack surface for hackers. While the app world, the cloud, and the gateways contain built-in security layers and countermeasures, the edge nodes are the ultimate “soft targets”. Unfortunately, it will probably take a few highly publicized breaches to instill a security discipline throughout the entire IoT chain.

More Information on EDPS HERE… Early bird registration ends April 1st…


Growing Security Concerns Due To Internet of Things (IoT)
by Faisal Mushtaq on 03-27-2016 at 4:00 pm

It is believed that by 2020, there will be about 50 billion connected devices across the world, more than 7 times the present human population. The growth of digital devices is increasing exponentially because both users and technology are getting smarter every day, and the compatibility between the two is improving phenomenally with the unprecedented growth in internet, mobile, robotics, and IT technologies. Ubiquitous computing, ubiquitous use of IP, and ubiquitous connectivity are the three major propellers of the IoT, and that’s why it has been successful in bringing disruptive transformation to all spheres of human life. Today the IoT underpins almost every digital communication: from telephony to email, and machine-to-human (M2H) communication to vehicle telematics, all are deeply dependent on it.

How Is the IoT Surrounded by Incessant Threats?
An IoT network is essential today for data collection, closed-loop operation, and network resource preservation, and all of these functions are terribly prone to risks and threats. On the other hand, it is very clear that moving forward without the latest technology is like moving into a dark and directionless world; the success of businesses, and of nations, relies on adoption of the IoT. But are you aware that stalkers are always following you? They stealthily try to eavesdrop on your messages, and if you are not moving through a safe alley, there is every chance they will ambush your network and, from simple espionage to easy access to intellectual property, do everything to steal the data that is the be-all and end-all of your business.

If you love tracking the news, then you might have heard about the ONGC case. In the recent past, one of the Navratnas of India’s public sector, the Oil and Natural Gas Corporation Limited (ONGC), lost Rs. 197 crore when perpetrators gained access to the official email account of an employee. They duplicated the public sector firm’s official e-mail address and used it to convince an overseas client to make the payment on a Rs. 197 crore deal. It is just one case from the huge Pandora’s box of IoT threats.

How to Combat the IoT Threats?

We should always ensure that the technology we are using is safe and that we are operating in a protected environment, because the more we depend on the IoT, the more vulnerable we are to espionage, phishing, ransomware, and system hacking. Therefore, to protect data and other valuable information, businesses and government agencies must establish a stringent security system based on the four key components of the security framework: Authentication, Authorization, Network Enforced Policy, and Secure Analytics. The Defense Research and Development Organization (DRDO) of India has chosen a Multifactor Authentication solution for effective re-validation of the credentials and authenticity of users in the organization’s ERP. In addition to Multifactor Authentication, Data Leak Protection, Advanced Persistent Threat Protection, and IPS/firewall solutions can help greatly reduce the risks.
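
As one concrete example of the authentication component, here is a sketch of an RFC 6238 time-based one-time password (TOTP), the mechanism behind most multifactor token apps, using only the Python standard library; the shared secret shown is a hypothetical placeholder:

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password (HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // period           # 30-second time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Demo with a hypothetical shared secret -- never hard-code real secrets.
print(totp("JBSWY3DPEHPK3PXP"))
```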