
Silvaco Talks Atoms to Systems – Where to Next?
by admin on 07-17-2019 at 10:00 am

At the ES Design West event in San Francisco last week, Silvaco’s CTO and EVP of Products, Babak Taheri, gave a presentation titled “Next Generation SoC Design: From Atoms to Systems”. The time slot for the talk was only 30 minutes, which is simply not enough to cover all the technology Silvaco provides today. I had not looked closely at Silvaco in a while, and I came away with a strong feeling that they are like an iceberg – there is a lot beneath the surface, and their technology breadth is much greater than most people realize. If you are not using them already, you probably should be.

It was only three years ago that Silvaco acquired IPextreme, launching the company’s semiconductor design IP portfolio. Silvaco’s IP breadth now encompasses more than 100 production-proven IP cores and foundation IP libraries (e.g., I/O, standard cells, memories). Under their business model, they will also help commercialize captive design IP from semiconductor companies such as Samsung Foundry, applying their unique IP fingerprinting technology along the way.

Although all the technology areas are related to some extent, I view design IP as more of a “systems” product. What intrigued me in the talk was the focus on “atoms”. Let’s face it, in the EDA world, even when EDA was called CAD, we always talked about systems. I don’t remember much discussion of atoms in semiconductor design since my last course in electromagnetic theory in college. Slide 8 of the presentation was titled “Design Technology Co-optimization (DTCO)”. It showed a Silvaco flow running from Process Simulation to Device Simulation to Automatic Parameter Extraction and into Circuit Simulation. There were some other cyclical pieces after that, but the first two boxes are what drew my attention. I worked at Celestry before it was acquired by Cadence, and Celestry had a significant TCAD business related to transistor modeling. So, what I think we must conclude from Babak’s message is that something important is changing here. All the new technologies that are evolving rapidly (MRAM, RRAM, advances in flash, and other non-volatile memory technologies), taken together with feature sizes that can be measured in a few atoms, demand that we rethink the approach to design. Masks are expensive; production is costly and time-consuming. New tools will be needed to make sure the silicon will function as intended. The first two boxes in the slide were labeled Victory Process and Victory Device. A common platform of products for process simulation and device simulation at the atomic level?
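To make that flow a bit more concrete, here is a minimal conceptual sketch of a DTCO loop in plain Python. The function names are hypothetical stand-ins for the process-simulation, device-simulation, extraction, and circuit-simulation stages named on the slide; this is not Silvaco’s API, just an illustration of how results feed back from the circuit level into the process recipe.

```python
# Conceptual DTCO loop: hypothetical stand-ins for the process simulation,
# device simulation, extraction, and circuit simulation stages in the slide.
# None of these functions exist in a real tool; they only show the data flow.

def simulate_process(recipe):
    """Return a device structure (doping, geometry) for a process recipe."""
    return {"fin_width_nm": recipe["fin_width_nm"], "doping": recipe["doping"]}

def simulate_device(structure):
    """Return electrical characteristics (e.g., drive current) for the structure."""
    return {"idsat_uA_per_um": 600 - 5 * abs(structure["fin_width_nm"] - 7)}

def extract_compact_model(characteristics):
    """Fit a compact (SPICE-level) model to the simulated characteristics."""
    return {"idsat": characteristics["idsat_uA_per_um"]}

def simulate_circuit(model):
    """Return a circuit metric (e.g., ring-oscillator delay) from the model."""
    return 1000.0 / model["idsat"]   # arbitrary illustrative relationship

recipe = {"fin_width_nm": 11, "doping": 1e18}
for iteration in range(5):                      # DTCO: loop process <-> design
    structure = simulate_process(recipe)
    device = simulate_device(structure)
    model = extract_compact_model(device)
    delay = simulate_circuit(model)
    print(f"iter {iteration}: fin={recipe['fin_width_nm']}nm delay={delay:.2f}")
    if delay <= 1.70:                           # made-up design target
        break
    recipe["fin_width_nm"] -= 1                 # feed the result back into the process
```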

Apparently – Yes. Victory Process™ is a general purpose 1D, 2D and layout-driven 3D process simulator for applications including etching/deposition, implantation, annealing, and stress simulation. Victory Device™ enables device technology engineers to simulate the electrical, optical, chemical, and thermal behavior of semiconductor devices. Victory Device is physics-based and simulates in 2D and 3D using an advanced tetrahedral meshing engine for improved accuracy. Clearly, 3D simulation is a must for finFET technologies. In his talk, Babak mentioned the need for tools that contemplate a small number of atoms in some of the semiconductor structures. The base technology is all here at Silvaco, and if you are doing this type of work you should use it now. Still, what will be needed next? I am sure Silvaco is thinking about it.

About Dr. Babak Taheri
Babak Taheri is the CTO and EVP of Products at Silvaco, a leading EDA software company. He manages the TCAD, EDA, and IP product divisions at Silvaco. Previously, he was the CEO/president of IBT, working with investors, private equity firms, and startups on M&A, technology, and business diligence. He also held VP/GM roles at Cypress Semiconductor and InvenSense (now TDK), and key roles at SRI International and Apple. He received his Ph.D. in biomedical engineering from UC Davis with majors in EECS and neurosciences, has over 20 published articles, and holds 28 issued patents.


Semicon West was sluggish with hopes of 2020 recovery
by Robert Maire on 07-17-2019 at 6:00 am

Bouncing along a not too bad bottom
Given that we have followed the semiconductor industry through many down cycles, we can safely say that this one isn’t all that bad by comparison. Everyone, big and small, is still safely profitable and in relatively good shape. Though we are seeing the normal week-long holiday shutdowns typical of a downcycle, we are not seeing wholesale layoffs or cuts. It’s not all that painful compared to past down cycles. While not busy and upbeat, Semicon West was not the funeral we had experienced in past downturns.

The “new normal” will be different than the “old normal”
We think that when the industry does recover, it won’t be the rip-snorting, maniacal memory spending we saw in the last up cycle. It will likely be more evenly balanced between memory and logic, and we would not be surprised if logic/foundry led the way off the bottom rather than having a memory-driven recovery.

There are likely those who would say that you can’t have a “real” recovery without memory (and we might be one of them…), but we could have some sort of recovery.

The past cycle was an almost perfect storm of memory spend, driven by the conversion from rotating media to SSDs, which sucked up a tidal wave of NAND. At the same time, the industry was going through a massive conversion from 2D planar memory to 3D NAND, buying equipment in leaps and bounds. We are now well past the bulk of the SSD conversion as well as virtually 100% of the 3D NAND conversion; those waves have washed over the industry and subsided very quickly as the tide went out behind them.

Memory industry will be gun-shy for a while
Given the beating that memory pricing has taken, we doubt that memory makers will jump back into spending with prior vigor. Spending in memory will be much more measured and incremental. The need for huge spending to convert to 3D doesn’t exist anymore. We also already have enough capacity in the industry for current SSD consumption and then some, so we don’t need big spending for that either. In short, the need for big memory capex doesn’t exist as we are well past the SSD/3D hump, so don’t hold your breath for it.

5G could push logic/foundry to recover first
We have heard of some good expected ramping of 5G chips at TSMC and elsewhere over the next several months, with 5G devices not only for phones but also for infrastructure. While Qualcomm may be driving the first wave of 5G, others will soon add to the mix. While this is likely not enough to bump up TSMC’s capex significantly given the weakness in overall chip demand, we nonetheless think it positively impacts demand for leading edge production equipment.

More importantly, this wave of new 5G related demand will likely happen prior to a memory recovery which seems a year or more off at the very least. All this suggests that foundry/logic could recover first and stronger than memory…

The equipment mix will be different in this cycle
If we see a more evenly driven recovery between memory and logic, or logic recovers first as we suspect, the companies benefiting will be different than last cycle. In addition, the upcoming cycle will obviously have a more significant EUV component.

We will also likely see a shift away from US suppliers in the coming up cycle as China will find ways to avoid America like the plague. Multiple patterning and multiple layer 3D NAND will be a smaller percentage of spend.

The stocks
In general, we see little reason to buy the stocks now, as the recovery is still far off, well into 2020 or beyond. It’s not like the stocks are cheap or have sold off recently.

We would be more selective and look for down drafts or other news or event driven opportunities.  We don’t see much of a potential for upside surprise in the current reporting season either.  We think the potential for investor impatience is higher in the near term.


WEBINAR: Eliminating Hybrid Verification Barriers Through Test Suite Synthesis
by Randy Smith on 07-16-2019 at 10:00 am

 

I’ve been following the evolution of the verification space for a very long time, including several stints consulting to formal verification companies. It has always been interesting to me to see how so many diverse verification techniques have emerged and been used, but without much unification of the approaches. With the emergence and adoption of the Portable Stimulus Standard (PSS), we now have the chance to unify these approaches in a more meaningful way.

The combined use of simulation and emulation in a verification flow is now commonplace. Many engineering groups have begun augmenting this combination with virtual platforms to create a fully hybrid verification process. This allows for increased capacity and performance, as well as a shift-left approach to design architecture and test planning. One barrier remains to a seamless flow between these tools: each process component requires a different testbench format. Furthermore, improved test content could significantly accelerate each of these individual processes.

This webinar will be held in three different time zones, starting on August 13, 2019, in the United States, and will discuss Test Suite Synthesis, where verification scenario intent is described using the new PSS and then synthesized to the respective process implementations. The white-box intent description can be used to accelerate UVM block verification, Software Driven Verification (SDV) for SoCs, and prototyping and post-silicon validation, increasing quality while compressing schedules. In addition, this same test description can be used across all of these processes, providing a continuous, back-and-forth flow across the entire verification process. During the webinar you will see a demonstration of the practical application of test suite synthesis on real designs as they progress from architecture to block to SoC, leveraging hybrid verification techniques.
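Conceptually, test suite synthesis keeps one abstract description of scenario intent and generates platform-specific tests from it. The toy sketch below is plain Python rather than PSS syntax, and it is not Breker’s tool; it only illustrates the idea of retargeting a single scenario to both a UVM-style sequence and a bare-metal C test.

```python
# Toy illustration of "one scenario intent, many targets".
# The scenario is an ordered list of abstract actions; each backend renders
# the same intent for a different verification platform.

scenario = ["init_dma", "config_channel", "start_transfer", "check_result"]

def to_uvm_sequence(actions):
    """Render the scenario as a (pseudo) UVM sequence body for block-level sim."""
    lines = ["task body();"]
    lines += [f"  `uvm_do({a}_item)" for a in actions]
    lines.append("endtask")
    return "\n".join(lines)

def to_soc_c_test(actions):
    """Render the same scenario as a bare-metal C test for SDV / post-silicon."""
    lines = ["int main(void) {"]
    lines += [f"  {a}();" for a in actions]
    lines += ["  return 0;", "}"]
    return "\n".join(lines)

print(to_uvm_sequence(scenario))
print(to_soc_c_test(scenario))
```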

The main portion of the webinar will be presented by Aileen Honess. Aileen has more than 20 years of experience teaching, mentoring, and leading hardware verification projects across a variety of disciplines, companies, and continents. She is an expert in UVM and has recently been assisting those who are modernizing their verification methodology by adopting portable stimulus and portable specifications. After a long career at Synopsys as a lead application specialist in verification, she has assumed the role of Technical Specialist at Breker Verification Systems. She holds a BS in Electrical Engineering from UCLA.

Click here to register for this webinar. Select your time zone to determine which webinar session best fits your schedule. Once registered, you will also receive a few reminders.

About Breker Verification Systems
Breker Verification Systems is the leading provider of Portable Stimulus solutions, a standard means to specify verification intent and behaviors reusable across target platforms. It is the first company to introduce graph-based verification and the synthesis of powerful test sets from abstract scenario models. Its Portable Stimulus suite of tools is graph-based to make complex scenarios comprehensible, portable to eliminate test redundancy across the verification process, and shareable to foster team communication and reuse. Breker’s Intelligent Testbench suite of tools and apps allows the synthesis of high-coverage, powerful test cases for deployment into a variety of UVM to SoC verification environments. Breker is privately held and works with leading semiconductor companies worldwide. Visit www.brekersystems.com to learn more.

Also Read

Breker on PSS and UVM

Verification 3.0 Holds it First Innovation Summit

CEO Interview: Adnan Hamid of Breker Systems


Safety Methods Meet Enterprise SSDs
by Bernard Murphy on 07-16-2019 at 5:00 am

The use of safety-centric logic design techniques for automotive applications is now widely appreciated, but did you know that similar methods are gaining traction in the design of enterprise-level SSD controllers? In the never-ending optimization of datacenters, a lot of attention is being paid to smart storage, offloading storage-related computation from servers to the storage systems themselves. Together with the rapid growth of SSD-based storage at the expense of HD-based storage (at least in some applications), this inevitably creates lots of new and exciting challenges. One consequence is that the controllers for SSD systems are now becoming some of the biggest and baddest in the SoC complexity hierarchy.

Why are these devices so complex? Certainly they have to offer huge bandwidth at very low latency (performance is a big plus for SSDs) in architectures where a mother-chip controller may be managing multiple daughter controllers, each managing a bank of SSDs. Datacenters have high cooling costs, so they expect low power (another big appeal of SSDs). And they also expect high reliability for enterprise applications; no one would want to use a datacenter that loses or corrupts data. That last point is where the connection to safety-related design techniques comes in.

Then there’s lots of housekeeping: finding and isolating bad locations, sampling to predict and proactively swap out locations likely to fail, and setting aside un-erased blocks to be erased during quiet periods (since erasing is a slow process). All of this is known as garbage collection. On top of these functions, the controller can manage encryption, compression and lots of other offloadable features that don’t need to be handled by the server, for example SQL operations. At least that’s one viewpoint; there seem to be differing opinions on the pros and cons of offloading, see the link above.

But there’s no debate on the need for reliability. JEDEC defines a metric called Unrecoverable Bit Error Ratio (UBER), which is the number of data errors divided by the number of bits read. Consumer-class SSDs are allowed slightly lower reliability, where occasional re-reads may not be too intrusive. But enterprises expect ultra-high reliability, and therefore a lower UBER, so more must be done in controllers to ensure it. A lot of this is proprietary hardware and software design to manage system aspects of reliability, but some must also be basic functional reliability, demanding support in design methodologies and tools.
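As a quick illustration of the metric (the numbers below are made up for the example and the target is illustrative, not a JEDEC requirement), UBER is simply unrecoverable errors divided by bits read, so even a handful of errors over a year of heavy reading can miss an enterprise-class target:

```python
# Unrecoverable Bit Error Ratio (UBER) = unrecoverable data errors / bits read.
# The target below is illustrative only; class targets come from the JEDEC specs.

def uber(unrecoverable_errors, bits_read):
    return unrecoverable_errors / bits_read

bits_read = 4 * 8e12 * 365          # e.g., ~4 TB/day of reads for a year, in bits
errors = 10                         # hypothetical unrecoverable errors observed
target = 1e-16                      # illustrative enterprise-class target

measured = uber(errors, bits_read)
print(f"measured UBER = {measured:.2e}, target = {target:.0e}, "
      f"pass = {measured <= target}")
```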

The need for reliability comes from the same concerns that we see in vehicles – cosmic ray events, EMI events and similar problems. These devices are all built in advanced processes and are just as vulnerable to these classes of problems as automotive devices. That in turn means you want parity checking, ECC for memories, duplication (or even triplication) and lockstep operation: all the design tricks you use to mitigate potential functional safety problems for ISO 26262.

Curiously, when Arteris IP first built the Resilience Package for their FlexNoC interconnect IP, their first customers were enterprise SSD builders, originally small guys, now consolidated into companies like WD, Seagate, Samsung, Toshiba and Intel. Over time, Arteris IP started to get more uptake among companies building for automotive applications thanks to growing adoption of the ISO 26262 standard. But SSD continues to be a driver; they’re now starting to see adoption in China for this kind of application.

In a lot of SoC designs, safety mechanisms must be managed by the integrator, but in the interconnect it is reasonable to expect the IP generator to handle these reliability functions for you. This is what the FlexNoC Resilience package does. It also provides a safety controller to manage faults and a BIST module to continually test the data protection hardware during quiet periods. The Resilience package also natively supports the ECC and parity protection schemes used by the Cortex-R5 and R7 cores, unsurprisingly since these are the cores most commonly used in SSD controllers.

I should add that this support isn’t the only reason SSD controller designers use the Arteris IP interconnect solutions. Remember that these devices are some of the biggest, baddest SoCs around? That means the SoC-level connectivity at minimum has to be through NoC interconnect. Traditional crossbar fabrics would be simply too expensive in performance and area; only the NoC approach can guarantee the QoS demanded by these systems. Even large subsystems will depend on NoC fabrics for the same reason.

Kurt Shuler (VP Marketing at Arteris IP) tells me these approaches are now trickling down to consumer-grade SSD. I may only be a consumer, but I don’t like waiting for slow disk operations either. Can’t come too soon for me. You can learn more about this topic HERE.


SEMICON West 2019 – Day 1 – Imec
by Scotten Jones on 07-15-2019 at 10:00 am

On Monday, July 8th, Imec held a technology forum ahead of Semicon West. I saw the papers presented and interviewed three of the authors. The following is a summary of what I feel are the key points of their research.

Arnaud Furnemont
Arnaud Furnemont’s talk was titled “From Technology Scaling to System Optimization”. Simple 2D dimensional scaling has slowed. Design Technology Co-optimization (DTCO) has led to track height reduction, but as track heights shrink it leads to fin depopulation and requires process optimization to maintain performance. DTCO and scaling continue to be important, but we also need to look from the top down at scaling from a system perspective using System Technology Co-Optimization (STCO).

As dimensions scale down, a limit is eventually reached for each technology and a transition from 2D to 3D is required. We have already seen this happen with the 2D NAND transition to 3D NAND. For DRAM, he isn’t convinced capacitor scaling will continue below the D13/D14 nodes, so a 3D solution is needed. 3D XPoint memory will need to increase the number of layers, and logic will also have to transition to 3D.

Figure 1 summarizes half pitch limits for memory by technology.

Figure 1. Memory Scaling Limits.

For logic scaling there are opportunities to partition functions in an intelligent manner. Backside power delivery through thinned wafers with micro TSVs and separately fabricating SRAM and logic and then integrating them offer options for more highly optimized solutions, see figure 2.

Figure 2. Smart Partitioning.

I have previously written about backside power delivery here.

Naoto Horiguchi
Naoto Horiguchi gave a paper entitled “Vertical Device Options for CMOS Scaling”. The main point of the paper was that vertical devices could provide a shrink for SRAM arrays versus horizontal transistors, see figure 3.

Figure 3. SRAM Shrink from Vertical Transistors.

This work fits in with the previous paper because, by fabricating an SRAM-only array, the process can be simplified versus a full logic process; for example, SRAM only requires approximately 4 interconnect layers versus 12 or more for leading-edge logic.

Figures 4 and 5 illustrate the basic process for a 5nm-class vertical SRAM array. The process steps in blue are EUV layers (note that the Top Electrode is not in blue but is also an EUV layer).

Figure 4. Vertical SRAM Array Front End Of Line (FEOL) Process.

Figure 5. Vertical SRAM Array Back End Of Line (BEOL) Process.

This work was also published at the Symposium on VLSI Technology [1], and between figures 4 and 5 and the VLSI paper the process can be outlined in more detail (a tally of the EUV layers follows the list):

  1. An N-type Epi layer is deposited.
  2. 2 noncritical masks and implants are used to fabricate high doped N and P wells.
  3. A 70nm thick P-type channel Epi layer is grown.
  4. An EUV mask and etch is used to form 8nm diameter nanowire pillars. The etch is 100nm deep etching down into the high doped wells.
  5. An EUV mask and etch is used to create isolating trenches between sets of pillars.
  6. The trench is filled and then an oxide recess etch is performed; this exposes the upper areas of the pillars for gate formation.
  7. A chemical oxidation is performed to create an interface oxide; this is followed by ALD depositions of HfO2 and TiN.
  8. A Tungsten (W) fill is now deposited, CMP planarized and then a recess etch is performed.
  9. An EUV mask and etch is performed to form the gates and then an oxide fill is performed.
  10. An EUV mask, etch and W fill is performed to create the bottom gate contact.
  11. An EUV mask, etch and W fill is performed to create cross couples.
  12. A barrier layer is deposited, masked and etched and then a selective epi of Si:B is grown to form the top source/drain for the PMOS.
  13. A barrier layer is deposited, masked and etched and then a selective epi of Si:P is grown to form the top source/drain for the NMOS. An oxide is then deposited.
  14. An EUV mask, etch and W fill is performed to create the top electrode. An ILD oxide layer is deposited and planarized.
  15. An EUV mask, etch and W fill is performed to create the gate contact.
  16. An EUV mask, etch and W fill is performed to create the top electrode contact.
  17. An oxide is deposited and planarized, an EUV mask and etch is used to create a metal 1 trench that is then filled with damascene copper.
  18. An EUV mask, etch and W fill is used to create a super via.
  19. EUV masks and etches are used to create metal 2 and via 2 and then they are filled with dual damascene copper.

This flow is used to create vertical SRAM test devices. A full flow would include at least two more metal layers and likely some processing for ESD protection. This array could then be integrated with logic and backside power distribution as shown in figure 2.

Zsolt Tokei
Zsolt Tokei presented a paper entitled “3nm Interconnects and Beyond: A Toolbox to Extend Interconnect Scaling”. In order to continue to scale down interconnects, issues with resistance-capacitance (RC), cost, variability and mechanical stability need to be addressed. Figure 6 summarizes the path forward.

Figure 6. The Path Forward.

Conventional dual damascene and super vias for better routing give way to barrierless interconnects with air gaps, possibly fabricated by semi-damascene. There is also research into integrating thin film transistors into the BEOL for increased functionality.

The semi-damascene flow is as follows:

  1. A via opening is patterned and etched in a dielectric film.
  2. The via is filled with Ruthenium (Ru) and Ru deposition continues until a layer of Ru is formed over the dielectric.
  3. The Ru is then masked and etched into metal lines.
  4. Air gaps are formed between the metal lines.

The Ru has a titanium adhesion layer, but the via-to-metal-line interface is continuous Ru, reducing resistance, and the air gaps reduce the capacitance. Zsolt wouldn’t discuss the air gap formation process, but presumably some kind of conformal film is deposited and then pinched off with another deposition.
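To see why barrierless metal and air gaps matter, consider a back-of-the-envelope RC estimate. All the numbers below are illustrative assumptions, not data from the imec work; the point is simply that removing a liner returns cross-section to the conductor (lowering R) while an air gap lowers the effective permittivity between lines (lowering C):

```python
# Back-of-the-envelope RC for a narrow interconnect line.
# All numbers are illustrative assumptions, not data from the imec results.

RHO_RU = 7.8e-8          # ohm*m, bulk resistivity of Ru (thin films are higher)
EPS0 = 8.854e-12         # F/m

def line_resistance(rho, length, width, height, liner=0.0):
    """Resistance of a line whose conductor is shrunk by a liner on each side."""
    w = width - 2 * liner
    h = height - liner
    return rho * length / (w * h)

def line_capacitance(eps_r, length, height, spacing):
    """Crude parallel-plate estimate of line-to-line capacitance (one neighbor)."""
    return eps_r * EPS0 * length * height / spacing

L, W, H, S = 10e-6, 10.5e-9, 21e-9, 10.5e-9   # ~21 nm pitch: 10.5 nm line/space

r_liner      = line_resistance(RHO_RU, L, W, H, liner=2e-9)   # 2 nm liner assumed
r_barrierless = line_resistance(RHO_RU, L, W, H, liner=0.0)
c_lowk = line_capacitance(3.0, L, H, S)                        # low-k dielectric
c_air  = line_capacitance(1.0, L, H, S)                        # air gap

print(f"R with liner {r_liner/1e3:.1f} kOhm vs barrierless {r_barrierless/1e3:.1f} kOhm")
print(f"C low-k {c_lowk*1e15:.2f} fF vs air gap {c_air*1e15:.2f} fF")
print(f"RC improvement ~{(r_liner*c_lowk)/(r_barrierless*c_air):.1f}x")
```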

This technique was used to fabricate the world’s first 21nm pitch interconnects. Figure 7 summarizes the results.

Figure 7.  21nm Pitch Interconnect.

Figure 8 illustrates a thin film transistor (TFT) in the BEOL and describes some applications.

Figure 8. BEOL Thin Film Transistor.

Conclusion
Imec continues to produce cutting edge research to support continued scaling and improvements in semiconductor performance.

[1] M.-S. Kim, N. Harada, Y. Kikuchi, J. Boemmels, J. Mitard, T. Huynh-Bao, P. Matagne, Z. Tao, W. Li, K. Devriendt, L.-A. Ragnarsson, C. Lorant, F. Sebaai, C. Porret, E. Rosseel, A. Dangol, D. Batuk, G. Martinez-Alanis, J. Geypen, N. Jourdan, A. Sepulveda, H. Puliyalil, G. Jamieson, M. van der Veen, L. Teugels, Z. El-Mekki, E. Altamirano-Sanchez, Y. Li, H. Nakamura, D. Mocuta, F. Masuoka, “12-EUV Layer Surrounding Gate Transistor (SGT) for Vertical 6-T SRAM: 5-nm-class Technology for Ultra-Density Logic Devices,” Symposium on VLSI Technology (2019).


The Wilf Corrigan Fairchild P&L Review
by John East on 07-15-2019 at 6:00 am

The “20 Questions with John East” series continues

In 1973 plus or minus a year or so I was working as a supervising engineer in one of the bipolar digital product groups.  My boss was a man named Jerry Secrest.  He was a great boss – he taught me most of what I knew about ICs in my youth. Jerry had responsibility for a product line.  That meant that the Fab, the product and process engineers, and the test area all were under him.  It also meant that he had P&L responsibility.  That sounds like a good thing, doesn’t it?  It wasn’t!!!

P&L responsibility forces you to open your eyes to how life really is.  Sometimes reality isn’t fun.  It’s not hard to find someone today writing that profits are evil and the people who try to make them are even more evil.  But reality tells a different story.  If you’re a private company,  to get the company going and keep it going you have to raise money.  To do that, you have to tell the potential investors that once you’re up and running, you’re going to be making nice profits and that the profits will grow over time.  If you don’t,  they won’t invest.  Going back later and telling them, “I was only kidding” is ill-advised!!

If you’re a public company, the same argument holds true, but on steroids!  People buy your stock because they think it’s going to go up.  Why do they think that?  Because you led them to believe it.  In the long run, stocks go up because earnings go up, so it’s your job to make earnings go up.  There’s no way you could go back to the shareholders and say,  “Gee.  I changed my mind.  Profits are evil.  I’m not going to try to make any.  My plan is to make the stock go down.”  What if, instead of being the CEO, you owned some of that stock? You’d kick the CEO out in a nanosecond.  Then you’d sue him — and you’d win the suit!

Oh.  One other thing.  In both cases,  if you lose money long enough,  you eventually run out and go broke.  So   — it’s important to be profitable!!!  Yes.  You also have a responsibility to the people who work there and another to the extended community.  Trying to take care of all three simultaneously was the hardest thing I ever did!!!!  But  — when the dust settles  — — it’s important to be profitable!!!

Sometimes capitalism sucks  —  but  I’ve been in most of the formerly communist countries back when they were just coming out of communism.  They were absolute economic disasters.  So, capitalism may well suck,  but all the other systems that I’m aware of suck more.  Far more!!  By the way.  I am not affiliated with either political party.  They both aggravate me!

Back to the story:  Each operation at Fairchild had a regular P&L review with Wilf Corrigan, then a Vice President and later to be Fairchild’s CEO. Besides Wilf, his top financial guy and several people from the operation being reviewed attended. I worked in the Digital Integrated Circuits group (DIC) which was run by a guy named John Sussenburger (We called him Suss).  Jerry worked under Suss.  Suss worked under Wilf.  The financial reviews looked at Sussenberger’s P&L and its component P&Ls.  (Tom Longo,  Paul Reagan, and Dave Deardorf were also involved in the organization at various times in those days,  but I don’t recall any of them being in the meeting)

Fairchild had a thing they called their HiPot List. (High Potential List) You were put on the HiPot list if you were a high potential employee who seemed to have the ability to work your way into upper management.  To my great delight, in 1971 or 1972 they added me to that list. Wilf had put in a rule that at each review a different person from the HiPot list should come to get some seasoning.  One day Jerry told me that my turn in the barrel had come. It was my chance to go to the review.

I went to the conference room at least 20 minutes before the start time. There was a large potted plant at the back of the conference room. I figured out that one of the chairs in the back was partially hidden from view by that potted plant. Naturally I took that chair.  (I was scared to death of all the Fairchild high-level managers.) Then I waited. After a while the real attendees came filing in and the meeting began. Wilf Corrigan is astute! There is no tricking Wilf with the numbers. They would put up very complex foils absolutely full of numbers and Wilf would immediately zero in on the number that made a difference.

He was very direct, but very polite. No screaming, shouting or table pounding even though DIC was losing money. We all understood that Wilf Corrigan didn’t like losing money, so the review didn’t have a good feel to it  –  all the attendees were on pins and needles.  But – it didn’t seem to be a problem. Wilf was calm and cordial.  He very calmly went about getting an understanding of what was going on.   When all the data had been presented, Sussenberger asked Wilf if he had any questions.  I thought, “Wow. This isn’t so bad. We’re losing money, but Wilf understands. Nobody got beaten up or fired. What was I worried about?”

Wilf said, “Yes. A couple of questions:  How much money did you say you were losing?”

(I don’t remember the actual numbers so I’ll make some up.)

John: “Oh. We’re losing about a million dollars, Wilf.”

Wilf: “What does your average employee make, John?”

John: “Gee I don’t know exactly, but I’ll guess about $20,000.”

Wilf:  –“Hmmm. That comes to 50 people, doesn’t it?”

John: “Well, you could look at it that way, Wilf.”

Wilf: “I do look at it that way, John.”

Wilf: “OK. I don’t have any more questions, but John, I’m planning to do you a favor.”

John: “What’s that, Wilf?”

Wilf: “Tonight I’m going by the hardware store on my way home. I’m going to buy one of those clicker/counters….. you know the little mechanical things with a button. Each time you hit the button with your thumb it ups the count by one.”

John: “Great Wilf!  That’s great!!!  Sounds really good!!! …  I like that!!!       …………..  ……………………..   but … what are you going to do with that?”

Wilf: “I’m going to go into your building Monday morning. I’m going to stand in the main hall. Each time someone walks by me I’m going to ask him if he works for John Sussenburger.  If he says ‘Yes’, I’m going to say, ‘You’re fired’ and click the button. When the count gets to 50, you’ll be profitable!   —– John, you’re really going to enjoy running a profitable business!”

I stayed out of the hall that Monday morning!

Wilf Corrigan went on to become the founder and CEO of LSI Logic.

Next week:  The demise of Fairchild.

See the entire John East series HERE.

Pictured:  Wilf Corrigan.


AI Chip Landscape and Observations
by Shan Tang on 07-14-2019 at 8:00 am

It’s been more than two years since I started the AI chip list. We have seen a lot of news about AI chips from tech giants, IC/IP vendors and a huge number of startups. Now I have a new “AI Chip Landscape” infographic and dozens of AI chip related articles (in Chinese, sorry about that :p).

At this moment, I’d like to share some of my observations.

First and foremost, there is no need to argue about the necessity of dedicated AI hardware acceleration anymore. I believe that, in the future, basically all chips will have some AI acceleration designed in. The only difference is how much area you will devote to AI. That is why we can find almost all of the traditional IC/IP vendors in the list.

Non-traditional chip makers designing their own AI chips has become common practice, and they are showing their special strengths in more and more cases. Tesla’s FSD chip could be highly customized to their own algorithms, which evolve constantly with the “experience” of millions of cars on the road, reinforced by Tesla’s strong HW/SW system teams. How do others compete with them? Google, Amazon, Facebook, Apple and Microsoft are working similarly, with real-world requirements, the best understanding of the application scenarios, strong system engineering capabilities, and deep pockets. Their chips are of course more likely to succeed. How do traditional chip makers and chip start-ups compete with them? I think these will be the key questions that shape the future of the industry.

A huge number of AI chip startups have emerged, outnumbering those in any other segment or at any other time in the IC industry. Although it has slowed down a bit now, we still hear fund-raising news from time to time. The first wave of startups is now moving from showing architecture innovations to fighting in the real world to win customers with their first-generation chips and toolchains. For latecomers, “being different” is getting harder and harder. Companies using emerging technologies, such as in-memory computing/processing, analog computing, optical computing and neuromorphic approaches, find it easier to get attention, but they have a long way to go before productizing their concepts. In the list, you can see that even in these new areas there are already multiple players. Another type of differentiation is to provide vertical solutions instead of just the chip. But if the technical challenge of such a vertical application is small, then the differentiation advantage is also small; for more difficult applications, such as automated driving, the challenge is the need to mobilize a large number of resources for the development. Either way, the startups have to fight for their futures. But if we look at the potential use of AI almost everywhere, it is worth the bet.

“Hardware is easy, software is much more difficult” is something we all agree on now. The toolchain that comes with the chip is the biggest headache and carries big value as well. In many talks from AI chip vendors, more and more time is spent introducing their software solutions. Moreover, the optimization of software tools is basically endless. After you have a single-core toolchain, you need to start thinking about multi-core, multi-chip and multi-board solutions; after you have a compiler or library for neural networks, you have to think about how to optimize non-NN algorithms in a heterogeneous system for more complex applications. On the other hand, it is not just a software problem at all. You need to figure out the best tradeoff between software and hardware to get optimized results. Since last year, we have seen more and more people working on frameworks and compilers for optimization, especially the compiler part. My optimistic estimate is that compilers for just the NN part will be mature and stable within a year. But the other issues I mention above require continuous effort and much more time.

Among benchmarks for AI hardware, the most solid work we have seen is MLPerf (which just released its Inference suite), with most of the important players joining. The problem with MLPerf is that the deployment effort is not trivial; only a few large companies have submitted training results. I am looking forward to more results in the coming months. At the same time, AI-Benchmark from ETH Zurich has gotten attention. It is similar to traditional mobile phone benchmarks, using the results of running applications to score a chip’s AI ability. Although it is not fully fair and accurate, the deployment is simple and it already covers most of the Android phone chips.

The AI chip race is a major force driving the “golden age of architecture”. Many failed attempts from 20 or 30 years ago seem worth revisiting now. The most successful story is the systolic array, which is used in Google’s TPU. However, for many of them, we have to be cautious and ask: “Is the problem that caused its failure in the past gone today?”
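For readers who haven’t run into it, a systolic array streams operands through a grid of multiply-accumulate cells so that each operand is reused as it flows past. The sketch below is a minimal cycle-level model of an output-stationary array computing a matrix product; it illustrates the dataflow idea only and is not a description of the TPU’s implementation.

```python
import numpy as np

def systolic_matmul(A, B):
    """Cycle-level model of an output-stationary systolic array computing A @ B.

    Row i of A is fed in with a delay of i cycles and column j of B with a
    delay of j cycles, so a[i, k] and b[k, j] meet at PE (i, j) on cycle i + j + k.
    """
    M, K = A.shape
    K2, N = B.shape
    assert K == K2
    C = np.zeros((M, N))
    for t in range(M + N + K - 2):          # cycles until the last operands meet
        for i in range(M):
            for j in range(N):
                k = t - i - j               # operand index arriving this cycle
                if 0 <= k < K:
                    C[i, j] += A[i, k] * B[k, j]   # one MAC per PE per cycle
    return C

A = np.random.rand(3, 4)
B = np.random.rand(4, 2)
assert np.allclose(systolic_matmul(A, B), A @ B)
```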

As in other segments of the AI industry, the speed of adopting academic work into the business world is unprecedented. The good news is that new technologies can get into products faster, and researchers can get rewards more quickly, which could be a great boost to innovation. In most cases, academic research focuses on a breakthrough at one point. However, implementing a chip and its solutions requires a huge amount of engineering work, so the distance in between is actually very large. Nowadays, people prefer talking about innovation at one point but neglect (maybe intentionally) the “dirty work” required to implement the product. This could be dangerous for the investors and the innovators.

Last but not least, one interesting observation is that the AI chip boom is significantly driving the development of related areas, such as EDA/IP, design services, foundry and many others. We see that progress in areas like new types of memory, packaging (chiplets) and on-chip/off-chip networks is speeding up, which may eventually lead to the next round of exciting AI chip innovations.


Are the 100 Most Promising AI Start-ups Prototyping?
by Daniel Nenni on 07-12-2019 at 10:00 am

I came across a report on the 100 most promising AI start-ups. The report claimed that CBInsights had “selected the 100 most promising AI start-ups from a pool of 3K+ companies based on several factors …”  Wait, what … 3K+ companies!?!?  This was a stunning reminder of the sheer magnitude of what is shaping up to be a veritable tsunami of AI start-ups.  Combine this tsunami with the large number of inquiries from early-stage AI start-ups asking S2C for help with FPGA prototyping, and it becomes abundantly clear that the demand curve for modestly priced FPGA prototyping products and services will be shaped like a hockey stick, absolutely.

Many of these start-ups are at a complete loss as to what’s required to plan for, implement, and execute an FPGA prototype.  So, for what it’s worth, here are a few hindsight considerations from one experience I had with FPGA prototyping at a start-up developing a small but very complex SoC:

  1. Start FPGA prototype planning early and involve all SoC stakeholders … chip design verification, silicon bring-up, firmware development, etc…
  2. Write an SoC Product Requirements Document (PRD) that all stakeholders clearly understand, and agree to … and establish a revision control process that keeps it current as the requirements evolve through the project.
  3. Remember that the SoC, not the FPGA prototype, is the mission … so, set reasonable expectations for FPGA prototype project scope and facilitate the FPGA prototype project with sufficient resources, deep FPGA prototyping expertise, and aggressive but achievable milestones.

Of course, the project I reference did none of the above; that’s why I referred to them as “hindsight considerations”.  The FPGA prototype “vision” for pre-tapeout verification was to use the FPGA prototype in parallel with simulation-based verification for higher verification coverage, then make the prototype available to firmware developers for early firmware bring-up, and get it all done before tapeout.

This particular design verification project was challenged from the beginning.  As it turned out, not all simulation jockeys believed in FPGA prototypes, and there was some passive-aggressive behavior that impacted the success of the overall SoC verification effort.  The “golden netlist” (the one used for tapeout) for SoC simulation always needs to be modified for the FPGA implementation to accommodate the intrinsic FPGA clocking, embedded memory, embedded IP, and DFT physical constructs.  The burden of understanding the netlist differences, what impact they will have on the test results when the FPGA prototype is subjected to the same testing as the golden netlist, and which netlist differences to ignore versus which need special attention, should be a team responsibility.
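One pragmatic way to make that “team responsibility” concrete is to triage netlist deltas by category. The sketch below is hypothetical, not a real tool; the “expected” categories are simply the intrinsic FPGA substitutions listed above, and anything outside them gets flagged for review:

```python
# Toy triage of golden-netlist vs FPGA-netlist differences.
# "Expected" categories are the intrinsic FPGA substitutions listed above;
# anything else is flagged for the verification team to review.

EXPECTED_FPGA_CHANGES = {"clocking", "embedded_memory", "embedded_ip", "dft"}

def triage(diffs):
    """diffs: list of (instance_name, category) tuples describing netlist deltas."""
    ignore = [d for d in diffs if d[1] in EXPECTED_FPGA_CHANGES]
    review = [d for d in diffs if d[1] not in EXPECTED_FPGA_CHANGES]
    return ignore, review

diffs = [
    ("u_pll0", "clocking"),          # PLL replaced by FPGA clock manager
    ("u_sram_bank3", "embedded_memory"),
    ("u_scan_chain", "dft"),
    ("u_arbiter", "logic_change"),   # unexpected: needs investigation
]
ignored, needs_review = triage(diffs)
print(f"expected substitutions: {len(ignored)}, flagged for review: {needs_review}")
```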

This project came to appreciate the need for a robust netlist versioning process for modifying the netlist throughout the verification project, to ensure that the two verification platforms stay aligned as design bugs are fixed and PRD changes are introduced into the design during development.  The importance of a robust SoC bug reporting platform (Bugzilla, Jira, etc.) was also really appreciated late in the project, when the bug discovery rate slowed to a trickle, to keep one of the two verification teams from spending a lot of time isolating, fixing, and verifying an SoC bug that the other team had already fixed weeks earlier.

Then there was the point at which the FPGA prototype platform was handed over to the firmware team … this was a rude awakening.  Unlike the design verification team, the firmware team expects the FPGA platform to actually work according to the PRD!  The firmware team has no patience for a software validation platform that doesn’t work, and they have neither the skill set nor the inclination to try to understand why their software is not working on the hardware.  They simply “kick” the FPGA prototype back to the hardware guys and tell them to “fix it”.

The first attempts to run firmware on the FPGA prototype can be plagued by elusive problems that are as simple as how the hardware comes out of “reset”.  The FPGA prototyping team must plan to support the firmware team during the inevitable ramp-up of firmware validation on the FPGA prototype platform.  If firmware validation is on the critical path to tapeout, as it was for this project, days and weeks matter.

As a strong foundation to any FPGA prototyping project, high quality, reliable FPGA hardware is an absolute must.  The SoC verification project is challenging enough without having to worry about the underlying FPGA hardware.  S2C has been building and delivering FPGA prototyping hardware and software since 2005, and each generation has been an improvement on the previous generation.  S2C supports Intel and Xilinx FPGAs, and offers Single, Dual and Quad FPGA versions of its Prodigy Logic Systems.

To get a quick S2C quote click here.


The Coming Tsunami in Multi-chip Packaging
by Tom Dillinger on 07-12-2019 at 6:00 am

The pace of Moore’s Law scaling for monolithic integrated circuit density has abated, due to a combination of fundamental technical challenges and financial considerations.  Yet, from an architectural perspective, the diversity in end product requirements continues to grow.  New heterogeneous processing units are being employed to optimize data-centric applications.  The traditional processor-memory interface latencies are an impediment to the performance throughput needed for these application markets.  Regular Semiwiki readers are aware of the recent advances in advanced multi-chip package (MCP) offerings, with 2.5D interposer-based and 3D through-silicon via based topologies.

Yet, it wasn’t clear – to me, at least – how quickly these offerings would be embraced and how aggressively customers would push the scope of multi-die integration for their design architectures.  I recently had the opportunity to attend an Advanced Packaging Workshop conducted by Intel.  After the workshop, I concluded that the rate at which advanced MCP designs are pursued will accelerate dramatically.

At the workshop, the most compelling indication of this technology growth was provided by Ram Viswanath, Vice President, Intel Assembly/Test Technology Development (ATTD).  Ram indicated, “We have developed unique 3D and 2.5D packaging technologies that we are eager to share with customers.  Product architects now have the ability to pursue MCP’s that will offer unprecedented scale and functional diversity.”   The comment caught the audience by surprise – several members asked Ram to confirm.  Yes, the world’s largest semiconductor IDM is enthusiastically pursuing collaborative MCP designs with customers.

For example, the conceptual figure below depicts a CPU, GPU, VR accelerator, and memory architecture integrated into a package that is one-sixth the dimension associated with a discrete implementation.  The size of this organic package is potentially very large – say, up to 100mm x 100mm.  (Intel’s Cascade Lake server module from the Xeon family is a 76mm x 72.5mm MCP containing two “full reticle-size” processor die.)

For the cynical reader, this was not the same atmosphere as the previous announcement of the fledgling Intel Custom Foundry fab services.  There was a clear, concise message that emerging data-driven applications will want to leverage multi-chip package integration (around an Intel CPU or FPGA).  The Intel ATTD business unit is committed to support these unique customer designs.

The rest of this article will go into a bit of the MCP history at Intel, with details on the 2.5D and 3D package technologies, as well as some future packaging research underway.

MCP History

After the workshop, I followed up with Ram V., who provided a wealth of insights into the R&D activity that has been invested in MCP technology development at Intel.  He indicated, “There is extensive experience with multi-chip packaging at Intel.  For example, we are shipping a unique embedded silicon bridge technology for inter-die package connectivity, which has been in development for over a decade.  This capability to provide wide interface connectivity between die offers low pJ/bit power dissipation, low signal integrity loss between die, and at low cost.”  The figure below depicts the interconnect traces between the microbumps of adjacent die edges – a key metric is the (product of the) areal bump density and routable embedded traces per mm of die perimeter. 

“The technology development focused on embedding a silicon bridge into the panel-based organic package assembly flow.  The x, y, z, and theta registration requirements for the bridge are extremely demanding.”, Ram continued.

Ram showed examples of Stratix FPGA modules with HBM memory utilizing the embedded bridge.  “This product roadmap began with just a few SKU’s when Intel was fabricating Altera FPGA’s, prior to the acquisition.   FPGA applications have grown significantly since then – there are now MCP offerings throughout the Stratix product line.”  He also showed (unencapsulated) examples of the recently-announced Kaby Lake CPU modules with external GPU, leveraging embedded bridges between die.

“Any significant assembly or reliability issues with the die from various fabs?”, I asked.

“This technology is the result of a collaborative development with suppliers.”, he replied.  Pointing to different die in the various MCP modules, he continued, “This one is from TSMC.  This one is from GF.  Here are HBM memory stacks from SK Hynix.  We have worked closely with all the suppliers on the specifications for the micro-bump metallurgy, the volume of bump material, the BEOL dielectric material properties, die thickness and warpage.  All these sources have been qualified.”

EMIB

The embedded multi-die interconnect bridge (EMIB) is a small piece of silicon developed to provide wide interconnectivity between adjacent edges of two die in the MCP.  The EMIB currently integrates four metallization planes – 2 signal and 2 power/ground (primarily for shielding, but could also be used for P/G distribution between die).

Additionally, the SI team at Intel has analyzed the signal losses for different interconnect topologies of signal and ground traces of various lengths – see the figure below.

Ravi Mahajan, Intel Fellow, provided additional technical information on EMIB.  He indicated the metal thickness of the EMIB planes is between that of silicon die RDL layers and package traces, achieving a balance between interconnect pitch and loss characteristics.  “We’re at 2um/2um L/S, working toward 1um/1um.  Our SI analysis for the EMIB suggests up to an 8mm length will provide sufficient eye diagram margin.  Conceptually, an EMIB could be up to ~200mm².”  (e.g., 25mm between adjacent die edges X 8mm wide)
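As a rough feel for what those numbers imply, a 2um line on a 2um space is a 4um pitch, so two signal layers give on the order of 500 routable traces per mm of die edge, and the 25mm x 8mm example works out to the ~200mm² mentioned. The little calculation below uses only the figures quoted above, plus the assumption that both signal layers are fully available for parallel routing:

```python
# Rough interconnect-density arithmetic from the figures quoted above.
# Assumes both EMIB signal layers are fully available for parallel traces.

line_um, space_um = 2.0, 2.0
signal_layers = 2

pitch_um = line_um + space_um
traces_per_mm_per_layer = 1000.0 / pitch_um
traces_per_mm_edge = traces_per_mm_per_layer * signal_layers

dim1_mm, dim2_mm = 25.0, 8.0   # the "25mm x 8mm" example quoted above
print(f"{traces_per_mm_edge:.0f} traces per mm of die edge")   # -> 500
print(f"bridge area ~{dim1_mm * dim2_mm:.0f} mm^2")            # -> 200
```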

Currently, the design and fabrication of the bridge is done by Intel ATTD – there is no external design kit, per se.  “The development of the I/O pad floorplans for the adjacent die with the embedded bridge at ATTD is a collaborative effort.”, Ram indicated.  “There is also significant cooperation on the design of the VDDIO and GNDIO power delivery through the package and around the EMIB to the perimeter bump arrays on the die.  Intel ATTD also does the thermal and mechanical integrity analysis of the composite package design.  As the thermal dissipation levels of the emerging markets for MCPs can be high, and due to the different thermal coefficients of expansion between the die and EMIB silicon and the organic substrate, thermal and mechanical analysis of the microbump attach interface is critical.”

It is probably evident, but worth mentioning that the presence of the EMIB silicon in the package does not interfere with the traditional process for adding surface-mount passives to the overall assembly (e.g., decoupling caps).  At the workshop, the support for backside package metal loop inductors and SMT caps was highlighted – “Intel CPU packages have integrated voltage regulation and voltage domain control since the 22nm process node.  The inductor and capacitor elements on the package are part of the buck converter used in the regulator design.”, Ram indicated.  Customers for Intel MCP designs would have this capability, as well.

Note the characteristics of the EMIB-based design differ considerably from the 2.5D package offerings utilizing a silicon interposer.  On the one hand, the Si interposer allows greater flexibility in inter-die connectivity, as the interposer spans the extent of the entire assembly.  (Newer 2.5D offerings are ‘stitching’ the connection traces between reticle exposures to provide an interposer design greater than the 1X maximum reticle field dimensions.)  Conversely, the EMIB approach is focused on adjacent die edge (wide, parallel) connectivity.  The integration of multiple bridges into the conventional package assembly and final encapsulation flow enables a large area field – e.g., a 100mm X 100mm dimension on a 500mm X 500mm organic substrate panel that was mentioned during the workshop.  The EMIB with organic substrate provides a definite cost optimization.

3D “Foveros”

With the Lakefield CPU product family, Intel introduced 3D die-stacked package offerings, utilizing through silicon vias.  The figure below illustrates the 3D die stacks.

Advanced packaging R&D investment is focused on reducing both the TSV and microbump pitch dimensions – currently at a 50um pitch, heading to ~30-35um pitch.  This will necessitate a transition from thermo compression bonding to a unique “hybrid bonding” process – see the figure below.

Whereas thermo compression bonding utilizes pressure and temperature to meld exposed pad metallurgies on the two die faces, hybrid bonding starts with a (somewhat esoteric) polishing process to provide pad metals with precisely controlled “dishing” of a few nanometers at the die surface.  The bonding step utilizes van der Waals forces between the (hydrophilic, extremely planar) die surfaces, then expands the metals during annealing to result in the pad connection.

Another key 3D packaging R&D concern centers around scaling the (base) die thickness – the goal for advanced 3D packages is to aggressively scale the Z-height of the final assembly.  “Thinning of the stacked die exacerbates assembly and reliability issues.”, Ravi M. highlighted.  As an interesting visual example, he said, “Consider the handling and warpage requirements for die no thicker than a sheet of A4 paper.”  (Starting 300mm wafer thickness:  ~775um;  A4 paper sheet thickness:  ~50um)

In the near future, the ability to combine multiple 3D die stacks as part of a large 2.5D topology will be available, a configuration Intel ATTD denoted as “co-EMIB”.  The figure below illustrates the concept of a combination of 3D stacked die with the embedded bridge between stacks.

Chiplets, KGD’s, and AIB

The accelerated adoption of MCP technology will rely upon the availability of a broad offering of chiplets, in a manner similar to the hard IP functionality in an SoC.  As mentioned above, the Intel ATTD team has already addressed the physical materials issues with the major silicon sources, to ensure high assembly/test yields and reliability.  Yet, the electrical and functional interface definition between chiplet I/O needs an industry-wide focused effort on standardization, to ensure chiplet interoperability.

Intel has released the AIB specification into the public domain, and is an active participant in the DARPA “CHIPS” program to promote chiplet standards.  (DARPA link, AIB link — registration required)  Somewhat surprisingly, the IEEE does not appear to be as actively engaged in this standard activity – soon, no doubt.

At the workshop, the Intel ATTD team indicated that internal activities are well underway on the next gen chiplet interface spec (MDIO), targeting an increase in data rate from 2Gbps to 5.4Gbps (at a lower voltage swing to optimize power).

MCP product designs will continue, but the growth in adoption necessitates a definitive standard – an “Ethernet for chiplet interconnectivity”, as described by Andreas Olofsson from DARPA.

There is another facet to chiplet-based designs that was discussed briefly at the workshop.  The final, post burn-in test yield of the MCP will depend upon the test and reliability characteristics of the known good die (KGD) chiplets.  The ATTD team indicated that Intel has made a long-standing (internal) investment in production ATE equipment development.  One of the specific features highlighted was the capability to perform accelerated temperature cycling testing at the wafer level to quickly identify and sort infant fails, so the resulting KGD forwarded to package assembly will not present a major yield loss after final burn-in.  The suppliers of chiplet “IP” will also certainly need to address how to provide high-reliability die at low test cost.

Futures

The final workshop presentation was from Johanna Swan, Intel Fellow, who described some of the advanced packaging R&D activities underway.  The most compelling opportunity would be to alter the trace-to-via connectivity process.  Rather than the large via pad-to-trace size disparity depicted in the figure above, a “zero misaligned via” would enable significant improvement in interconnect density.  The figure below illustrates the current package trace-via topology, and the new ZMV trace-via connection at 2-4um trace widths.

The current epoxy-based package panel utilizes laser-drilled vias – to realize a ZMV, a new technology is under research.  (Johanna indicated that photoimageable materials of the polyimide family would provide the via density, but that materials, process, and cost constraints require staying with epoxy-based panels – a unique via-in-epoxy process is needed.)  If the ZMV technology transitions to production, the MCP interconnect (line + space) trace density would be substantially increased – when combined with improvements in microbump pitch, the system-level functionality realizable in a large MCP would be extremely impressive.

Summary

There were three key takeaways from the workshop.

  • Heterogeneous multi-chip (die and/or chiplet) packaging will introduce tremendous opportunities for system architects to pursue power/perf/area + volume/cost optimizations.
  • The Intel EMIB interconnect bridge at die interfaces offers a unique set of cost/size/complexity tradeoffs compared to a 2.5D package incorporating a silicon interposer.
  • The Intel ATTD team is committed to supporting their advanced 2.5D, 3D, and merged (co-EMIB) technologies for customers seeking unique product solutions to data-driven markets.

Frankly, in the recent history of microelectronics, I cannot think of a more interesting time to be a product architect.

-chipguy

 


HBM or CDM ESD Verification – You Need Both
by Tom Simon on 07-11-2019 at 11:00 am

In the realm of ESD protection, the Charged Device Model (CDM) is becoming the biggest challenge. Of course, the Human Body Model (HBM) is still essential and needs to be used when verifying chips. However, a number of factors are raising the potential losses that CDM events can cause relative to HBM. These factors fall into two categories: the causes and effects of ESD events, and the difficulty of predicting in advance whether ESD protections are sufficient and effective. Let’s address each of these in turn.

HBM contemplates an individual pin coming in contact with a charged object, such as a person’s hand. The other requisite condition is that there is a path to ground on another pin. Of course, there might be multiple pins affected in a single real-world event, but testing can be compartmentalized down to two pins at a time. With automated handling, individual ICs are rarely handled by human hands, reducing the likelihood that a pin will be exposed to electrostatic charge.

Even so, protections against HBM type events are very important for chip yield and reliability. There are many other scenarios, during handling and in the field where a device might face a high current ESD discharge. HBM testing can also serve as a proxy for other types of ESD related events. So, we see there is a continued need for adding and verifying protections for HBM.

On the other hand, automated chip handling can subject IC packages to tribo-electric charging as they move through the manufacturing and board assembly processes. This charge build-up can cause big problems once a ground path becomes available. Most often this occurs when one or more pins come in contact with grounded metal. Unlike HBM, the discharge can occur nearly anywhere in the IC. When the package is charged, it induces a capacitive charge build-up on large nets on the IC itself. Stored charge is distributed along these nets and any capacitive devices connected to them. Once a conduction path is created, large amounts of stored charge begin to flow along these wires. Voltage gradients created during discharge events can subject ESD and core devices to large voltage differentials. Also, even though CDM current pulses are very short, their amplitudes can be large, leading to thermal failures in thin wires and in triggered devices.
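For a feel of the magnitudes involved (all values below are illustrative assumptions, not measurements from the article), a package capacitance of a few tens of picofarads charged to a few hundred volts stores around ten nanocoulombs; dumping that charge in roughly a nanosecond implies currents on the order of several amps:

```python
# Order-of-magnitude CDM estimate. All values are illustrative assumptions.

c_package = 20e-12      # assumed package/die capacitance to ground plane, F
v_charge = 500.0        # assumed tribo-charging voltage, V
t_pulse = 1e-9          # assumed effective discharge duration, s

q = c_package * v_charge              # stored charge, C
i_avg = q / t_pulse                   # average current over the pulse, A

print(f"stored charge ~{q*1e9:.1f} nC, average discharge current ~{i_avg:.0f} A")
```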

Leading edge designs have several traits that increase the severity of CDM issues. Large IC’s create the need for bigger packages, which can accumulate more charge. The same can be said for the larger nets found in these designs. Additionally, supply nets in FinFET designs are longer and narrower than in earlier nodes. During discharge events, large voltage gradients can occur across these nets, which can damage the smaller and more sensitive advanced node devices.

Both HBM and CDM create challenges for verification teams prior to tapeout. Some issues are common to both types of events; others arise from or are exacerbated by the nature of CDM events. ESD discharge events are dynamic by their very nature. One or more devices may trigger, and it is important to make a full accounting of which devices actually trigger. Current flows and voltages will be highly dependent on which devices trigger and where charge is stored. Some ESD devices may exhibit snap-back behavior, which rules out the use of ordinary circuit simulators. Instead, a specialized dynamic simulator is necessary to capture device triggering and the resulting voltage build-up afterwards.
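To see why snap-back defeats ordinary simulation, consider a simplified, generic snap-back characteristic with made-up parameters (a textbook-style caricature, not Magwel’s device model). Once the trigger point is exceeded, the voltage across the clamp drops to a lower holding voltage while current keeps rising, so the I-V curve has multiple possible currents at the same voltage and a quasi-static solver can converge to the wrong branch, or not at all:

```python
# Simplified snap-back I-V characteristic (generic illustration, made-up numbers).
# Voltage is expressed as a function of current because the curve, viewed as
# current vs. voltage, has multiple branches, which is what trips up static solvers.

def snapback_voltage(i_amps, v_trigger=8.0, v_hold=4.0, r_on=1.5, i_trigger=1e-3):
    """Return device voltage for a given current through the ESD clamp."""
    if i_amps < i_trigger:
        # Off/leakage region: voltage climbs toward the trigger point.
        return v_trigger * (i_amps / i_trigger)
    # Triggered region: voltage snaps back to the holding voltage plus I*R_on.
    return v_hold + r_on * (i_amps - i_trigger)

for i in [1e-4, 5e-4, 1e-3, 0.5, 1.0, 2.0]:
    print(f"I = {i:7.4f} A -> V = {snapback_voltage(i):5.2f} V")
```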

The Fieldview from Magwel’s CDMi shows a violation with high voltage built up across the terminals of a device during a CDM discharge event.

The metal structures of the involved nets do not look or act like simple lumped loads. Supply nets are often made up of wide metal and have complex current flow. This makes the methods used for extraction and simulation critical for obtaining accurate results.

Magwel has added a CDM simulation and verification tool to its ESD suite, which previously addressed HBM. Their successful HBM tool, called ESDi, has given them years of experience dealing with the fundamental issues of usability, performance and quality of results. Magwel is known for its highly accurate solver based extraction engine. Their special purpose simulation engines are also a key technology that enabled the development of their CDM offering, CDMi.

CDM protection is often used on large designs; to accommodate this, Magwel R&D has rolled out a number of performance enhancements that offer much higher throughput. CDMi also takes advantage of parallel processing to ensure better runtimes. Error reporting that avoids an overload of false errors, along with debugging, plays a vital role in overall productivity, so Magwel has used its experience in this area to provide easy-to-use reporting and a cross-linked layout and field view capability that helps users identify the source of design issues.

Because of its unique solver based technology and their extensive experience in the ESD field, Magwel is well positioned to provide an effective solution for CDM protection network simulation and verification. With the rollout of the CDMi product and the level of interest it has garnered, it seems that Magwel is on the right track. The Magwel website has more information on CDMi and is well worth looking over.

About Us
Magwel® offers 3D field solver and simulation based analysis and design solutions for digital, analog/mixed-signal, power management, automotive, and RF semiconductors. Magwel® software products address power device design with Rdson extraction and electro-migration analysis, ESD protection network simulation/analysis, latch-up analysis and power distribution network integrity with EMIR and thermal analysis. Leading semiconductor vendors use Magwel’s tools to improve productivity, avoid redesign, respins and field failures. Magwel is privately held and is headquartered in Leuven, Belgium. Further information on Magwel can be found at www.magwel.com