
Electromigration Analysis and FinFET Self-Heating

by Tom Dillinger on 09-24-2015 at 12:00 pm

FinFET processes provide power, performance, and area benefits over planar technologies. Yet, a vexing problem aggravated by FinFETs is the greater local device current density, which translates to an increased concern for signal and power rail metal electromigration (EM) reliability failures. There is a critical secondary effect, as well – the thermal profile of the FinFET influences the temperature of the neighboring metal interconnect, which increases the EM failure probability.

At the recent TSMC OIP symposium, Ansys/Apache provided exceptional insights into the issue, and how their toolset assists designers with EM analysis at advanced nodes.

The “Self-heating effect”
Self-heating refers to the thermal energy originating at a current-carrying element. The local temperature rise depends upon the thermal dissipation path(s) away from the element. The model for thermal conduction uses an electrical equivalent – heat flows through a “thermal resistance”, which is characteristic of the materials surrounding the source.

FinFET self-heating has unique thermal resistances from the device channel. Unlike planar (non-SOI) devices, the thermal path through the substrate is constrained by the poor thermal conductivity of the dielectric at the base of the fin. As a result, a significant percentage of the device self-heat energy flows vertically and laterally to the MEOL metallization, with a delta_T increase in the metals.

Note: FinFET self-heat dissipation also has an adverse impact on intrinsic device reliability mechanisms, such as bias temperature instability (BTI) and hot carrier injection (HCI). A subsequent article will review TSMC OIP presentations on FinFET “device aging” models and self-heat acceleration.

In addition to heat transfer from FinFET devices, there is a temperature increase in interconnects due to wire self-heating – namely, the resistive losses in the metal. There is also heat transfer from the resistive dissipation in neighboring interconnects.

The EM-induced failure rate – i.e., the “mean time to fail” probability – due to the average current density in a metal wire is typically represented by Black’s equation.
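
In its standard textbook form (the symbols below are the generic ones, not TSMC’s qualified constants), Black’s equation is:

```latex
MTTF = A \, J^{-n} \exp\!\left(\frac{E_a}{kT}\right)
```

where A is a process- and geometry-dependent constant, J is the average current density, n is the current-density exponent (classically about 2), E_a is the activation energy, k is Boltzmann’s constant, and T is the metal temperature in kelvin.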

(For more background on EM, please see the semiwiki article:
https://www.semiwiki.com/forum/content/1085-ic-custom-ip-blocks-electromigration-em-ir-drop-effects.html )

Note the exponential dependence on temperature in this model. The delta_T in a wire above the local die temperature due to heat transfer and resistive self-heating requires a sophisticated thermal resistance model and power dissipation analysis, which Ansys has developed and qualified with TSMC.
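
To see how strong that exponential dependence is, here is a small sketch; the 0.9 eV activation energy is a typical Cu interconnect value assumed for illustration, not a number from the presentation:

```python
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

def black_mttf_ratio(t_ref_c, delta_t, e_a_ev=0.9):
    """Ratio of MTTF at (t_ref + delta_t) to MTTF at t_ref per Black's
    equation, holding current density fixed."""
    t1 = t_ref_c + 273.15
    t2 = t1 + delta_t
    return math.exp(e_a_ev / K_BOLTZMANN_EV * (1.0 / t2 - 1.0 / t1))

# A 10 degree self-heat delta on a 105 C baseline cuts the predicted lifetime roughly in half:
ratio = black_mttf_ratio(105.0, 10.0)
print(f"MTTF shrinks to {ratio:.2f}x of its baseline value")
```

This is why an accurate delta_T model matters: even a small wire temperature rise compounds exponentially into the lifetime estimate.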

Ansys flow for enhanced EM analysis
Ansys is an industry leader in electromigration analysis (RedHawk, Totem), and in thermal modeling and analysis of the chip-package-system environment (RedHawk, Sentinel-TI, Icepak). The on-chip extraction features of this toolset have been enhanced to support multi-patterning decomposition “color aware” metal biasing of TSMC’s advanced FinFET nodes. To address the requirements for delta_T in electromigration analysis, new tool capabilities have been added, and a new flow qualified.

The key new feature is the characterization of the interconnect temperature to include the thermal sources mentioned earlier. Starting with the (tile-based) Chip Thermal Model without self-heat, a new solution for the interconnect temperature with device and wire self-heat contributions is derived using the chip-package thermal analysis tools.

There are a number of different methods used to determine an electromigration-based chip reliability estimate. The most direct approach is:

  • assume a max allowable increase in interconnect (or via) resistance over a product lifetime due to EM (e.g., x%)
  • measure the average current density on test wafers which results in that delta_R – this is an accelerated stress test applied to various interconnect structures
  • using Black’s equation, adjust the measured J_average current density limit from the stress test to the product environment

For example, when deriving the J_average limit, assume the MEOL wire temperature is equal to Tj_max plus a small delta_T due to the wire’s intrinsic J_rms self-heating – e.g., 105°C plus a 5°C delta. The J_average limit may actually be a set of tables indexed by interconnect width and length. Electromigration current density limits are a function of the metallurgy of the materials – e.g., metal grain size versus width – and are significantly relaxed for short segments (below the “Blech length”), where back-stress suppresses EM.
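
The adjustment from stress-test to product conditions in the third step can be sketched numerically: setting Black’s equation equal at the two temperatures and solving for the product-condition current density gives a simple derating factor. The activation energy, exponent, and temperatures below are illustrative assumptions, not TSMC’s qualified values:

```python
import math

K_EV = 8.617e-5  # Boltzmann constant, eV/K

def derate_j_limit(j_stress, t_stress_c, t_product_c, e_a_ev=0.9, n=2.0):
    """Scale a stress-test J_average limit to the product temperature at
    equal MTTF: (J_prod/J_stress)^n = exp(E_a/k * (1/T_prod - 1/T_stress))."""
    t_s = t_stress_c + 273.15
    t_p = t_product_c + 273.15
    return j_stress * math.exp((e_a_ev / (n * K_EV)) * (1.0 / t_p - 1.0 / t_s))

# Stress test at 250 C, product junction at 110 C: the cooler product
# environment tolerates a much higher J_average for the same lifetime.
j_prod = derate_j_limit(j_stress=1.0, t_stress_c=250.0, t_product_c=110.0)
print(f"{j_prod:.1f}x the stress-test current density limit")
```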

  • calculate the thermal resistances between devices and wires for the delta_T characterization
  • using the Ansys flow, calculate the detailed wire temperatures
  • if the wire J_average and temperature are below the process reliability limits, assume the interconnects do not contribute significantly to the reliability MTTF; else, highlight these as electromigration “hot spots” which need addressing
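
The screening step above amounts to a simple pass/fail filter over the extracted wires; the limit values and wire records below are invented placeholders, not the actual Ansys data model:

```python
# Illustrative EM screening pass: flag wires whose average current density
# or computed temperature exceeds the process reliability limits.

def em_hotspots(wires, j_avg_limit, t_limit_c):
    """Return the wires that must be reported as EM 'hot spots'."""
    return [w for w in wires
            if w["j_avg"] > j_avg_limit or w["temp_c"] > t_limit_c]

wires = [
    {"name": "net_clk_m2", "j_avg": 1.8e10, "temp_c": 118.0},  # too hot
    {"name": "net_d0_m3",  "j_avg": 0.6e10, "temp_c": 102.0},  # passes
    {"name": "net_vdd_m5", "j_avg": 2.3e10, "temp_c": 96.0},   # J too high
]

hot = em_hotspots(wires, j_avg_limit=2.0e10, t_limit_c=110.0)
print([w["name"] for w in hot])  # the clk and vdd nets are flagged
```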

Ansys has enhanced their tools and flows to address a key reliability issue in FinFET technologies – the potential for significant delta_T on interconnects/vias due to thermal coupling, and the resulting acceleration of electromigration reliability failures.

This is a good example of the benefits of the “early” EDA vendor partnership with TSMC that Cliff Hou highlighted during his OIP keynote presentation.

-chipguy

Also read: Four Takeaways from the TSMC OIP 2015


SEMI SMC: Atoms Still Don’t Scale

by Paul McLellan on 09-24-2015 at 7:00 am

Last Tuesday was SEMI’s annual Strategic Materials Conference (SMC). The opening keynotes were given by Gary Patton, the CTO of GlobalFoundries, and Mark Thirsk, Managing Partner of Linx Consulting. This year it was held in the Computer History Museum (which always makes the commute interesting since you have to fight with a zillion people going to the Googleplex along the same road).

Gary made something explicit that I sort of half-knew. Prior to 90nm, pretty much all the advances in semiconductor process came from scaling. Then we were down to gate oxide just 3 atoms thick and we needed to switch to materials and other sorts of innovation: strained silicon, High K metal gate, FinFET and more.


As we innovated in materials we spread our attention all over the periodic table, from a dozen elements in the 1980s (H, B, N, O, F, Al, P, Cl, Ar, As, Sb, and most of all Si) to about 60, half the table. Who even knew what hafnium was before it became important in high K dielectrics?
There will be lots of innovation needed in devices in the future (gate-all-around, carbon nanotubes, III-V devices…) but there is a more urgent problem: our interconnect is running into a wall. The interconnect itself is protected by so many liners and caps that there is very little actual conductor left, so the resistance is too high. There is lots of innovation going on in this area.

One materials challenge is that supply is often very concentrated in just one or two suppliers. For instance, one thing that several people commented on during SMC was the availability of neon (Ne). The price has gone up 10X. It turns out that it all comes from eastern Ukraine, which, you can’t have failed to notice, is not the most stable part of the world right now. Plus, semiconductor just doesn’t move the needle that much. When we switched to copper interconnect the non-specialist copper suppliers rubbed their hands with glee, before they discovered that annual consumption might be the same as the electrical wiring of a large building.

Mark pointed out that one result of this is that there is a lot of M&A (and spinouts of specialty divisions) in the materials industry. It is also important to remember that almost 3/4 of all 300mm capacity is in Asia (as of 2015). As a result there are also a large number of new suppliers entering in Asia, especially in China with its desire to become more self-sufficient in semiconductor manufacturing and the surrounding ecosystems. China is investing $160B over 10 years, with loans to supply-chain participants growing in importance.

So near-term trends that Mark and Gary both pointed out for devices and, especially, interconnect:

  • High-mobility channel InGaAs n-channel and Ge p-channel FinFETs
  • CVD Co improves Cu wetting and extends Cu gap fill (cobalt is a thin conformal layer that repairs discontinuities)
  • 5nm Ru:TaN liner followed by ECD copper (plated films have larger grain size)

But each of these conceals dozens of smaller innovations needed to make the simple-sounding one-sentence summary work in a high-volume fab. For example, cobalt in the metallization requires materials and chemicals from a number of different specialized suppliers:

  • PVD targets
  • CVD precursors
  • Electro/electroless plating chemistries
  • CMP slurries for Co
  • Co recess processes – Dry/Wet chemistry approaches
  • Co compatible cleans (wets)

In summary, here are a couple of decades of material innovation, and the future innovation that is being developed, summarized in a single diagram:

But looming ahead is perhaps the biggest barrier of all. We are at atomic-level deposition and processing. But atoms still don’t scale. That is something no amount of innovation is going to change.


Why Sidense OTP is Like the Armored Car of NVM

by Tom Simon on 09-23-2015 at 4:00 pm

I have written about Sidense before, but last week at the TSMC Open Innovation Platform Forum, I had a chance to hear a talk by, and have lunch with, Betina Hold, Director of R&D at Sidense. Here is what I learned.

Sidense has been focusing on the growing market in what they like to call the smart connected universe. It is best to think of this as the union of mobile, IoT, wearables, medical, industrial and automotive. Incidentally, automotive was a hot topic at the OIP Forum.

However, getting back to our main topic, all of these applications have a need for non-volatile memory (NVM). In many cases the need is for secure and reliable storage of device IDs, security codes, trim information and a variety of other write-few, read-many pieces of data. This can even extend to boot code.

This is a perfect application for one-time programmable (OTP) NVM. Even if you need to make multiple writes, the NVM controller or system firmware can make it look like “few times programmable” (FTP) memory. OTP-NVM has several distinct advantages that make it a clear choice for chip design. The most obvious is that it requires no special masks nor any changes to the process at all. The Sidense 1-T solution uses the same process and PDK designers are already using for their circuits.
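
To illustrate the FTP idea, here is a minimal sketch of slot rotation, one common way a controller or firmware emulates rewritable storage on one-time-programmable bits. The slot scheme is invented for illustration; Sidense’s actual controller logic is not described in the talk:

```python
class FTPEmulator:
    """Emulate 'few times programmable' storage on OTP slots.

    Each write burns the next unused slot; reads return the most
    recently programmed one. A slot can only go from None
    (unprogrammed) to a burned value, mimicking OTP bits.
    """

    def __init__(self, num_slots):
        self.slots = [None] * num_slots  # None == still unprogrammed

    def write(self, value):
        for i, slot in enumerate(self.slots):
            if slot is None:
                self.slots[i] = value  # burn the next free slot
                return
        raise RuntimeError("all OTP slots consumed")

    def read(self):
        burned = [s for s in self.slots if s is not None]
        return burned[-1] if burned else None

ftp = FTPEmulator(num_slots=4)
ftp.write("trim_v1")
ftp.write("trim_v2")
print(ftp.read())  # trim_v2
```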

Sidense has working silicon from 180nm down to 16nm FinFET. During her talk, Betina pointed out that the underlying OTP-NVM technology they use has actually improved with FinFET devices. They are seeing a 100,000X difference between the read currents in the 0 and 1 states at 16nm. Sidense is also seeing that the leakage current in FinFET devices has improved >10X relative to 28nm.

Sidense has been very successful at reducing the power requirements for its OTP. Typically, only the chip’s VDD is needed and no external or higher voltage supplies are required. The high read margins help reduce the power needs. Sidense also offers interfaces for sub-threshold operating voltages that are being targeted in new processes. This will further help reduce operating power.

The thing that Betina spoke about most passionately was how Sidense OTP NVM addresses security. She pointed out that their CTO Wlodek Kurjanowicz came from Chipworks, where he became very familiar with techniques used to reverse engineer circuits. Sidense works hard to ensure that data stored on their NVM-OTP cannot be hacked or compromised.

Hackers will go to great lengths to reverse engineer or even attempt to alter stored data and code. Sidense OTP NVM uses fully differential data storage, with complementary logic and bit cell structures, to avoid any power signatures based on read values. Also, there are built-in protections against tampering with clocks and operational voltage, and even against hacking attempts that lower the temperature.
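
A toy model shows why differential storage hides power signatures (this is invented for illustration, not Sidense’s actual cell design): every logical bit is kept as a complementary pair, so each read senses exactly one programmed and one unprogrammed cell no matter what the value is:

```python
def store(bits):
    """Store each logical bit as a complementary (cell, cell_bar) pair."""
    return [(b, 1 - b) for b in bits]

def read(cells):
    """Sense each pair. Exactly one cell of every pair is programmed,
    so the sensing activity per bit is constant regardless of the value."""
    values = [true_cell for true_cell, _ in cells]
    activity = [true_cell + comp_cell for true_cell, comp_cell in cells]
    return values, activity

stored = store([1, 0, 1, 1, 0])
values, activity = read(stored)
print(values)    # [1, 0, 1, 1, 0]
print(activity)  # [1, 1, 1, 1, 1]: identical for every bit value
```

Since the “work done” per read is the same for a 1 as for a 0, a power trace leaks nothing about the data.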

The bit cells are also difficult to physically reverse engineer. There is no discernible physical difference between a 1 and a 0. Additionally, the hard IP routing for the OTP-NVM is designed to inhibit access if some of the metal is removed. All of this is very important because gaining physical access to connected devices that may have the ability to threaten larger secure systems is easier than ever. No longer can security depend on keeping hackers away from physical systems.

Even though I am very familiar with NVM-OTP, I learned a lot about the advantages of using them and how Sidense endeavors to ensure an extra level of security. For more information about Sidense, follow this link.


Krivi Specialty I/O Library Supports UMC 28nm

by Eric Esteve on 09-23-2015 at 12:00 pm

There is an industry consensus about 28nm: the node is here to stay, and to stay for a very long time. Setting aside the 20nm node, which by contrast will have a very short lifetime, 28nm is the last node following the economic part of Moore’s law: designing in a smaller technology allows you to build a cheaper IC when integrating the same functions, or to integrate twice as many gates at the same price. If a chip maker has deep enough pockets to afford the huge development cost (R&D costs in the $80 to $100 million range) he can target 16FF or even 10FF; the resulting IC will be faster and lower power, but not necessarily cheaper than on 28nm, as we can see in the picture below.

For the vast majority of the semiconductor industry targeting 28nm and above, it’s very good news that the 28nm offering will be as broad as possible, including UMC on top of TSMC and GlobalFoundries (Samsung is focusing on a few customers with very large production volumes). In this context, it’s important for potential UMC customers to be able to rely on an offering that is as complete as possible, including specialty I/O pads. Krivi has announced the validation of its I/O pad library platform on UMC 28nm technology.

This I/O platform supports a wide variety of interface standards, with DDR, LVDS, and memory card super combo I/O libraries. We often write about SuperSpeed USB, PCI Express 3.0 or HDMI 2.0, the SerDes-based protocol standards essential to SoC design for consumer, networking or PC peripheral applications. But to design a SoC you will also need to integrate these specialty I/Os, and you expect the I/O libraries to be proven in test silicon for compliance with the respective electrical standards and for ESD and latch-up performance. Your boss may accept that the first release of a 16G PCIe 4.0 PHY fails to be 100% at spec… but certainly not an SoC prototype that doesn’t work because of a failing LVDS I/O.

According to Sivaramakrishnan Subramanian, Co-founder and Senior Principal Engineer at Krivi: “Our specialty IO platform gives great flexibility to SoC and IP companies using UMC 28nm technologies to pick off-the-shelf IO pads that match or exceed best power, performance and area in industry”. If you don’t know Krivi Semiconductor, you most probably know ARM Ltd. The team in charge of DDR3 PHY design within ARM spun off to create Krivi in 2013. The same team has designed a specialty I/O platform supporting:

  • Universal DDR IO pad library supports all popular DDR standards like DDR4, DDR3/3U/3L, LPDDR3, LPDDR2, HSTL class-I, RLDRAM-3, etc. This library works at a maximum speed of 2.667Gbps in the HLP process technology and boasts the industry’s smallest footprint. It is designed with an aggressive power target of 1mW/Gbps for receiving data.
  • LVDS and Sub-LVDS combo IO library has data input and output cells along with a built-in bandgap voltage reference cell for biasing. This IO pad meets the TIA/EIA-644-A and SMIA 1.0 Part 2: CCP2 Sub-LVDS standards while working at top speeds of 2Gbps and 1.6Gbps respectively.
  • Memory card I/O pad supports interface signaling ranging from 1.2V to 3.3V while using 1.8V gate oxide IO devices. This bi-directional I/O pad supports eMMC and UHS-I SD card standards.
  • SLVS, SubLVDS and SD UHS-II combo IO pad library is designed to meet JESD8-13, SMIA 1.0 Part 2: CCP2, and UHS-II SD card association standards with a top speed of 1.56Gbps.

Large semiconductor and IP companies moved IP design resources to India in the early 2000s, and we can clearly identify start-ups spun out of these chip or IP makers, especially in the mixed-signal area: Cosmic Circuits (acquired by Cadence in 2012), Silabtech (a spin-off of the OMAP PHY design team from TI) or Krivi, formerly in charge of DDRn PHY IP at ARM. These design teams have the same level of experience and excellence as their counterparts working at the well-known IP vendors, and the UMC choice makes sense, according to Shih-Chin Lin:

“UMC has built a strong library, IP and design support environment for customers designing into our high volume production 28nm technology,” said Shih-Chin Lin, senior director of UMC’s IP & Design Support division at UMC. “With the addition of the IO Alcor platform from Krivi, our mutual customers now have access to a valuable resource that will allow them to seamlessly integrate a wide variety of high speed memory IO into their SoC design.”


How GlobalFoundries’ CTO Nearly Became a Lawyer…Called Funkhauser

by Paul McLellan on 09-23-2015 at 7:00 am

I sat down for a chat with Gary Patton, the CTO of GlobalFoundries, at today’s SEMI Strategic Materials Conference where he had just given one of the keynotes (which I’ll cover another time). His family name isn’t really Patton, his grandfather’s name was Funkhauser, but his step-grandfather’s name was Patton. His father decided to go with Patton (if your name was Funkhauser maybe you would too). Gary actually dug into the ancestry, Funkhauser is obviously German, but it turns out there are none in Germany, only in Indiana.

Gary grew up in southern California and went to UCLA, originally studying pre-law. But he didn’t really like the way everyone involved seemed to be Marxists (hey, it was the 70s) and since he was good at math he switched to engineering. He had to scramble to catch up since he’d done a lot of humanities courses and no engineering courses. Eventually he specialized in electrical engineering. After UCLA he stayed in California but moved north and did his PhD at Stanford under Jim Plummer.

At the time IBM had various research programs jointly with Stanford and also recruited heavily from there. Gary joined IBM and moved East to what IBMers call “Watson” but is really the “Thomas J. Watson Research Center”. Initially he worked on the SiGe HBT (heterojunction bipolar transistor), which resulted in a big press announcement; a couple of years ago, at the IEDM 50th anniversary, that paper was picked as the most significant for its year.

The expectation was that this would be the basis of future IBM servers… but then they went CMOS like everyone else. At the time IBM was not in the OEM semiconductor business – they consumed their own semiconductors internally – and 3rd parties were approaching them all the time. Eventually that policy changed, and this turned into a power amp and RF-on-SOI business in Burlington (now GlobalFoundries’ Fab 9), where they are the market leader.

Somewhere around then he had to decide whether to remain on the purely technical side but he chose to transition into management. He was at IBM for 25 years, including a fair bit of research but also moving to San Jose to run IBM’s disk drive business (then on Cottle Road, near what is now Ramac Park, surely the only park in the world named after a disk drive). When that business was sold to Hitachi he moved back and for the last 8 years he ran semiconductor R&D for IBM.

Now he is doing much the same thing on the GlobalFoundries’ side of the house where he became CTO when GF acquired the IBM semiconductor business.

GlobalFoundries announced 22nm FD-SOI, which Gary thinks is a very attractive process, highly differentiated from other foundries: 16nm performance at 28nm price, with software control of the forward and reverse back bias giving a very powerful knob to trade power against performance. In fact Sanjay Jha, the CEO, talked about this just last week at the Shanghai FD-SOI workshop.

The old IBM ASIC business is growing now that it is a business line for a pure-play foundry rather than a sideline for a company increasingly focused elsewhere.

I asked Gary about transistor costs. He told me that he thinks we will see real cost reduction (per transistor) for 10nm and 7nm. After all, 14/16nm is really 20nm (same metal stack) and is in some ways a poor tradeoff: it needs double patterning but only just (22nm does not need it) and it doesn’t push deep into the double patterning region to get a big payback. He also says that 7nm will be the last process that can use 193i (immersion lithography and 193nm light). After that we truly need EUV.

Talking of EUV, he told me that they have just had the UP2 upgrade to their EUV scanner in Albany, and the availability is much better. In fact he worries more about availability of the tool than the other challenges. Clearly with the light source power there is a ways to go, but it is looking good. There is progress on pellicles. More work needs to be done on resist, but that seems more like engineering than research. There is no denying that it is a complicated tool, though, and if the availability is not high then it will simply be an uneconomic technical curiosity.

One of the things that several people had talked about during the morning was the need for new interconnect. We have heard a lot about FinFET and FD-SOI over the last few years. Gary told me that in Albany they used to have more emphasis on the devices, but now he thinks they are heavier on the interconnect side. There is lots of innovation, especially using new materials, and it is the next challenge. It is no good having great transistors if we can’t connect them up.

So he sees his role, and the big picture technology strategy for GlobalFoundries to be:

  • accelerate the leading edge
  • produce differentiated solutions

So that’s how a law student is now the CTO of one of the most technical businesses in the world.


Enterprise Design Management Comes of Age

by Tom Simon on 09-22-2015 at 12:00 pm

The motivations for having a data and process management system in place for semiconductor design have existed for a long time. I am reluctant to admit it, but I remember early efforts to do this back in the 80s at Valid Logic. Cadence was also developing this capability in-house through the early 90s. Back then designs were much smaller; often it would have been enough to get the design work from just one tool under management.

Today designs have billions of transistors, not hundreds of thousands. Also, the advent of design platforms necessitates that not just one tool but completely separate disciplines – software, board, packaging, and SoC – all be managed together. We live in an age of platforms and complex interdependent development environments.

The development of design and process management has moved out of the tool companies, providing a welcome neutrality for the solutions. But more than that, they have to cover a wider range of activities than any one tool company can offer. That said, many traditional EDA companies are now working in the areas of IP and software development.

Dassault, after a series of strategic acquisitions, has assembled and developed a comprehensive suite for managing product development. Indeed, I worked for Synchronicity, the original developer of their DesignSync, back in 2000. Dassault acquired that technology when they merged with MatrixOne. On the other end of the spectrum they acquired Pinpoint, used for managing product development, in their merger with Tuscany.

Last week at the Enovia user group meeting in Mountain View I received an update on the Dassault capabilities. Michael Munsey, Dassault’s Director of Semiconductor, Software and IoT Strategy for the Enovia solutions, opened the meeting with an overview of the present-day need for the Enovia solutions. In today’s designs, IP is used extensively. IP can be developed internally or externally, but it is important to know its pedigree. Michael cited an example of an export-restricted IP that had been used by a design group, which then made their project available as IP for others to use. But the new IP was not marked as export restricted. You can imagine how painful this turned out to be.
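
The fix for this class of problem is attribute propagation: when a design is repackaged as IP, the package must inherit the most restrictive classification of any of its components. A hypothetical sketch (the classification names and record layout are invented for illustration, not Enovia’s data model):

```python
# Rank classifications from least to most restrictive.
LEVELS = {"unrestricted": 0, "license_required": 1, "export_restricted": 2}

def package_as_ip(name, components):
    """Package components as new IP, inheriting the worst classification."""
    worst = max(components, key=lambda c: LEVELS[c["classification"]])
    return {"name": name, "classification": worst["classification"]}

components = [
    {"name": "dsp_core",   "classification": "unrestricted"},
    {"name": "crypto_blk", "classification": "export_restricted"},
]

new_ip = package_as_ip("modem_subsystem", components)
print(new_ip["classification"])  # export_restricted, not silently dropped
```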

Michael also recalled the days when IP ‘management’ consisted of getting representatives from different groups in a room and doing a show of hands regarding IP use and proliferation. A host of problems can arise: improper royalty billing, missed fixes for critical issues, improper verification flows, etc.

One of Dassault’s main points was that all verification should be tied to a product design specification. And, when specifications change then the verification steps need to be changed to reflect that. A good example of where product verification must be tied to product specification is with ISO 26262, the automobile functional safety standard.

The Enovia tools make the process straightforward by employing a web interface. But their software is designed to be flow aware. Pinpoint can read tool output reports to capture critical metrics on designs. I was impressed to see that the list of things it can capture includes path timing, power, IR drop, and more. Pinpoint is smart enough to read in LEF/DEF so key design status information is made available in a web interface without the need to open up specific design tools, or more importantly expose design data files. Multi-site and multi-company projects can have independent development work, and all the key information in project status is easily shared without exposing the design data itself.

With the advent of more and more platform-based designs and the growth in the need for system integration skills, it looks like Dassault’s Enovia line will be attracting a lot of attention. For more information on the Enovia user group meeting and Dassault’s offerings in this space click here. It was also heartening to see that a few folks who I worked with there in R&D and application support are still there, 15 years later.

Also Read

Talking Directly to EDA R&D

A Systems Company Update from #52DAC

Design Collaboration, Requirements and IP Management at #52DAC


New Sensing Scheme for OTP Memories

by Paul McLellan on 09-22-2015 at 7:00 am

Last week at TSMC’s OIP symposium, Jen-Tai Hsu, Kilopass’s VP R&D, presented A New Solution to Sensing Scheme Issues Revealed.

See also Jen-Tai Hsu Joins Kilopass and Looks to the Future of Memories

He started by giving some statistics about Kilopass:

  • 50+ employees
  • 10X growth from 2008 to 2015
  • over 80 patents (including more filed for this new sensing scheme)
  • 179 customers, 400 sockets, 10B units shipped

Kilopass’s technology works in a standard process using antifuse, causing a breakdown of the gate-oxide. Since the mechanical damage is so small it is not detectable even by invasive techniques, unlike eFuse technologies where the breaks in the fuse material are clearly visible by inspection. Over the generations of process nodes they have reduced the power by a factor of 10 and reduced the read access time to 20ns. Since the technology scales with the process, the memory can scale as high as 4Mb. It also is low power and instant-on.

Kilopass has focused on 3 major markets:

  • security keys and encryption. This only requires Kb of memory. The end markets are set-top box, gaming, SSD, military
  • configuration and trimming of analog. This also requires Kb of memory. End markets are power management, high precision analog and MEMS sensors
  • microcode and boot code. This requires megabits to tens of megabits. Applications are microcontrollers, baseband and media processors, multi-RF, wireless LAN and more

The diagram above shows how the programming works. There are two transistors per cell. The top one remains a transistor for a 0 (gate isolated from the source/drain) but after programming a 1 the oxide is punched through and the gate has a high resistance short to the drain. Since the actual damage to the gate oxide might occur anywhere (close to the drain or far from it), the resulting resistance is variable.

The traditional way to read the data is as follows. The bitline (WLP) is pre-charged, then the appropriate wordline (WLR) is used for access, and the bitline (BL) is sensed and compared against a reference in the sense amp. Depending on whether the “transistor” is a transistor or a resistor, the current will be higher than the reference bitline current or not. If it is higher, a 1 is sensed; if lower, a 0. The challenge is to sense the data fast: the longer the time taken, the clearer the value, but all users want a fast read time. See the diagram below.

Historically this has worked well. In older nodes, the variations are small relative to the drive strengths of the transistors. But increasingly it gets harder to tell the difference between a weak 1 cell and a noisy 0 cell, which risks misreading the value. As a result it can take a long time to sense “to be sure.” As we march down the treadmill of process nodes, the variation, like many other things, is getting so large that it is approaching the parameters of the device itself. A new approach is needed.
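
A quick Monte Carlo sketch makes the problem concrete: as variation grows relative to the nominal cell currents, the weak tail of the 1 distribution and the noisy tail of the 0 distribution start to cross a fixed reference. All the numbers below are invented to show the trend, not real cell currents:

```python
import random

random.seed(42)

def misread_rate(i_one, i_zero, i_ref, sigma, trials=20000):
    """Fraction of reads where a '1' cell falls below the reference or a
    '0' cell rises above it, given Gaussian variation of width sigma."""
    errors = 0
    for _ in range(trials):
        if random.gauss(i_one, sigma) <= i_ref:   # weak '1' read as '0'
            errors += 1
        if random.gauss(i_zero, sigma) > i_ref:   # noisy '0' read as '1'
            errors += 1
    return errors / (2 * trials)

# Same cells, same reference: only the process variation grows.
low_var = misread_rate(i_one=10.0, i_zero=1.0, i_ref=5.5, sigma=1.0)
high_var = misread_rate(i_one=10.0, i_zero=1.0, i_ref=5.5, sigma=3.0)
print(low_var, high_var)  # the misread rate climbs sharply with sigma
```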

The new approach that Kilopass has pioneered adds a couple of steps. Once the word line is used for access, after a delay the bitline reference is shut off. The bitline is sensed, the data latched, and then the sense amp is shut off. The new sense amp incorporates the timing circuitry. The whole scheme is more tolerant of process variation and should be suitable for migration all the way to below 10nm. This approach is more immune to ground noise and has greater discrimination between a weak 1 and a noisy 0. Finally, shutting off the sense amp at the end saves power.

It turns out that this scheme works particularly well with TSMC’s process, since their I_ref spread is half that of other fabs. The new sensing scheme, coupled with the tighter cell distribution, means doubling the read speed.


The Cost Challenge has Been Met – Let the Disruption Begin!

by Alex Lidow on 09-22-2015 at 12:00 am

Displacing the Silicon Power MOSFET with eGaN® FETs
35 years ago the silicon power MOSFET was a disruptive technology that displaced the bipolar transistor – and a $12B market emerged. The dynamics of this transition taught us that there are four key factors controlling the adoption rate of a new power conversion technology:

  • Does it enable significant new applications?
  • Is it easy to use?
  • Is it reliable?
  • Is it VERY cost effective to the user?

All four factors have been met with Efficient Power Conversion’s (EPC) gallium nitride (GaN) products. Let’s review the work that has been done over the past five years to address the first three key factors – it is the fourth question, competitive pricing, that is the focus of this article.

Does it Enable Significant New Applications?
As with all new technologies, enabling applications that otherwise go unrealized is the best starting point for introducing the technology. With new applications, the need for increased performance justifies the higher pricing that usually accompanies the introduction of a new technology into the market; so too with gallium nitride semiconductors.
Since their introduction five years ago, eGaN® FETs have fostered the development of several new applications, including wireless power transfer, envelope tracking, LiDAR (Light Detection and Ranging), X-ray-in-a-pill colonoscopies, and wirelessly powered artificial hearts. Beyond applications dependent upon power transistors, gallium nitride technology is now being applied to integrated circuits (ICs) – analog today, and digital in the future. These IC products enhance the performance and cost competitiveness of existing products, and new, unforeseen applications are rapidly emerging.

Is it Easy to Use?
eGaN® transistors from EPC are designed to be used in a similar fashion to existing power MOSFETs, and therefore power systems engineers can use their design experience with minimal additional training.

However, to assist design engineers up the learning curve, EPC has established itself as the leader in educating the industry about gallium nitride devices and their applications. As a matter of fact, in addition to publishing over 75 technical articles and presentations, EPC has published four GaN transistor textbooks (including one in Chinese).

To assist practicing power design engineers, EPC has conducted “hands-on” daylong seminars in major electronic industry cities around the world, such as San Jose, Boston, Shanghai and Tokyo. EPC is actively engaged with more than 30 universities around the world in order to lay the groundwork for the next generation of highly skilled power system designers being trained in getting the most out of eGaN® FETs.

Is it Reliable?
Reliability testing of GaN transistors continues to accumulate with positive results. eGaN® FETs have been subjected to a wide variety of reliability tests for device qualification, including High Temperature Reverse Bias, High Temperature Gate Bias, High Temperature Storage, Temperature Cycling, High Temperature High Humidity Reverse Bias, Autoclave, and Moisture Sensitivity.

Acceleration factor tests have been conducted over voltage and temperature in order to estimate the time to failure within the datasheet operating range. Under both HTRB and HTGB type stress conditions, the mean time to failure (MTTF) well exceeds 10 years at maximum operating temperature and at critical voltage levels.
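Temperature-accelerated life testing of this kind is typically extrapolated with an Arrhenius model. A minimal sketch of the arithmetic follows; the activation energy and temperatures are illustrative assumptions, not EPC's published test conditions:

```python
import math

K_B_EV = 8.617e-5  # Boltzmann constant, eV/K

def arrhenius_af(ea_ev: float, t_stress_c: float, t_use_c: float) -> float:
    """Acceleration factor between a stress temperature and a use
    temperature, assuming a thermally activated failure mechanism."""
    t_stress_k = t_stress_c + 273.15
    t_use_k = t_use_c + 273.15
    return math.exp((ea_ev / K_B_EV) * (1.0 / t_use_k - 1.0 / t_stress_k))

# Assumed example: Ea = 0.7 eV, 150 C stress vs. 85 C use.
af = arrhenius_af(0.7, 150.0, 85.0)
# MTTF at use conditions ~= (MTTF measured under stress) * af
```

A modest temperature delta thus yields a large acceleration factor, which is how weeks of stress testing can support a multi-year MTTF claim at operating conditions.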

These studies have further shown that eGaN® FETs are able to operate with very low probability of failures within the reasonable lifetime of end products manufactured today.

Is it Very Cost Effective to the User?
For emerging technologies, competitive pricing against well-established technologies is a formidable challenge, especially if the new technology aspires to be a disruptive, displacement technology. With the latest family of eGaN® FETs, the competitive price barrier has been crossed – the performance of GaN at the price of a MOSFET is now a reality! Here is how competitive pricing was achieved:

  • Reduced production costs – eGaN® FETs are produced using MOSFET fabrication methods and equipment. Benefiting from the learning curve of the last five years, yields have increased to the point of being comparable with MOSFET yields, production methods have been streamlined and, thus, the cost of producing GaN devices has dropped substantially.
  • Chip-scale packaging – typical MOSFET plastic packaging represents about 50% of the cost of the final product. Low-voltage eGaN® FETs are “package-less,” cutting production cost by that 50%! In addition, without a package, the possibility of field failures due to poor assembly is greatly reduced.
  • Extremely small size – faster switching translates into smaller size, higher efficiency, and lower system cost. For example, the latest family of eGaN® FETs is about one-fortieth the area of an equivalent MOSFET component.


Conclusion
There’s no stopping gallium nitride technology now – all four key factors to high volume adoption have been achieved: GaN enables new applications, GaN is easy to use, GaN is reliable, and GaN is competitively priced.

The $12B MOSFET market is right now “fair game” for superior eGaN® FET technology. And, looking to the near future, with the growth in the identified end-use applications mentioned above, which are enabled by GaN, the market potential could double by 2020. Longer term, the total available market for GaN technology could reach over $300B, with the inclusion of analog and digital integrated circuits incorporating this high performance technology.

Power conversion is going GaN…those slow to adopt will begin to lose ground.


Dialog and Atmel, 2 cultures to build 1 successful company ?

by Eric Esteve on 09-21-2015 at 2:00 pm

Consolidation is ongoing in the semiconductor industry, as we learn that Dialog Semiconductor has just announced the acquisition of Atmel. Is it good news for Atmel, or for Dialog? Apparently not for the stock market, as Dialog's stock went down by 20% within the last few hours… But we can try to look more in-depth at the potential synergy between these two companies' product portfolios, and whether such an acquisition makes sense when looking at the market segments they target.

In the 1990s, Dialog was part of Temic (the semiconductor subsidiary of Daimler) until a management buyout in 1998 created Dialog Semiconductor Plc. It took some time to make the right decision: become completely fabless and focus on high-volume consumer products. In 2005, the wireless segment was on its way to becoming the largest in terms of volume, growth and relative semiconductor content per system, and Dialog was delivering 600M audio-codec ICs, with company revenue of about $160M. Within 10 years, Dialog's revenue grew to $1,150M, a 40% CAGR over 2010-2014. The revenue split below shows that most of the revenue is made in the mobile industry, with diversified mixed-signal products: power management, audio and display. Nevertheless, Dialog is also preparing for the future by offering wireless audio and Bluetooth Smart ICs.
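The CAGR figure can be sanity-checked with the standard compound-growth formula; a quick sketch, where the starting revenue is back-calculated for illustration rather than a reported figure:

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two revenue points."""
    return (end / start) ** (1.0 / years) - 1.0

# $1,150M in 2014 at ~40% CAGR over 2010-2014 implies a 2010 base of
# roughly 1150 / 1.4**4, i.e. about $300M (illustrative back-calculation).
growth = cagr(300.0, 1150.0, 4)
```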

Dialog's CEO is Jalal Bagherli, who was part of the TI ASIC Europe management team in the 1990s, as was, for example, former ARM CEO Warren East. Being an ASIC FAE with TI at that time, I am not surprised by Jalal's (or Warren's) success, as there were many excellent people in this group, excellent in marketing and business… and I learned a lot!
Being with Atmel in the 2000s, I remember that the company was one of the leaders in the Flash market, but founder and CEO George Perlegos understood that diversification towards Application Specific Products (ASIC and ASSP) was a good move. In 2000, Atmel had revenue of $2B; more than $1,500M of that revenue was coming from Flash and around $300M from ASIC and AVR microcontrollers. But not only was Atmel not fabless, it was acquiring fabs in 2001-2002… George Perlegos didn't survive the aftermath of the internet bubble crash, and the company has been run since 2006 by Steven Laub, a financial industry veteran.

Just one point: I have huge respect for George Perlegos, owner of essential Flash patents, who was able to start a semiconductor company from scratch and drive its revenue up to $2 billion. He just thought that “real men have fabs” at the time Atmel should have gone fabless…

Let's fast forward to the 2014 results and portfolio. Atmel's revenue was $1,413M in 2014, for an income of $32M (compared with 2014 income of $138M for Dialog). Very interesting is the split by product family: non-volatile memory represents 12% of total revenue, or $150M, while microcontrollers weigh in at 70%, or about $1B. The Flash product line is almost dead, Atmel is almost fabless, and the microcontroller segment is by far the largest. This segment deserves a definition (from Atmel's annual report), as it aggregates wireless radio, the touch product family and the mobile sensor hub:

Microcontroller. This segment includes Atmel's general-purpose microcontroller and microprocessor families: AVR® 8-bit and 32-bit products, Atmel® | SMART™ ARM®-based products, Atmel's 8051 8-bit products, designated commercial wireless products (including low-power radio and SoC products that meet Bluetooth, Bluetooth Low Energy, ZigBee and Wi-Fi specifications), Atmel's maXTouch capacitive touch product families, and optimized products for smart energy, touch button, mobile sensor hub and LED lighting applications.

We have, on one hand, a company focused on mixed-signal ICs and the mobile industry, well managed by Jalal Bagherli (remember the 40% CAGR in revenue over the last five years), realizing over $1B in a very competitive market at the end of its crazy growth period. Dialog has started to diversify (BTLE, power conversion, …) and is thinking about emerging markets like IoT. On the other hand, Atmel has accomplished a complete revolution and built a strong microcontroller product line, from the 8-bit 8051 to the 32-bit SMART ARM-based family (probably the industry's best line today), weighing almost $1B. Atmel's products can address various (any?) market segments, especially emerging segments like wearables, by combining microcontrollers, wireless and sensors, while Dialog knows how to design successful ASSPs targeting one precise segment. That's the type of synergy we can expect to be successful; the product overlap is minimal, limited to BTLE, and selecting Bluetooth Smart to target emerging applications is rather a smart decision. As usual, the success of this merger will depend on the merging of two cultures.

As far as I can see from their respective annual reports, one company has a strong marketing culture (product positioning and target markets are clearly explained…) while the other is clearly financially oriented (you will like the annual report if you like crunching data). The merged company's success will probably require that one of these two cultures take over…

Eric Esteve from IPNEST


The Paradox of Atmel No More

by Majeed Ahmad on 09-21-2015 at 12:00 pm

Technology pundits have called Atmel one of the best-positioned companies in the Internet of Things (IoT) realm, if not the best IoT company outright. The San Jose, California–based chipmaker boasts an enormous portfolio of microcontrollers and deep expertise in security and automotive electronics hardware. Yet its underperforming stock price worried many in the industry.

The news about Atmel's desire to be acquired had been circulating for a few months, and that premise didn't sit well with the company's IoT story. After all, how can a company claim a place in the IoT gold rush while also being willing to give up its autonomy as an acquisition target?


Atmel: The long journey from EEPROM to IoT

The truth is that IoT is a very broad and fragmented marketplace that is still in an early stage of commercial realization. Meanwhile, margins are getting thinner and competition is intensifying in the cut-throat MCU world. This is especially true in Atmel's case, as the company caters both to traditional segments such as white goods and energy meters and to innovative new segments like medical and wearable devices.

The traditional MCU segments encompass a broad customer base and thinner margins, while newer product lines like wearables are taking longer than expected to reach larger volumes. It's worth noting that Atmel is a midsize chipmaker competing with MCU makers like NXP, STMicro and TI, all of which are large semiconductor outfits.

Atmel Goes to Dialog

So, in today’s “buy or get bought” world of semiconductor consolidation, Atmel decided to cash in its IoT prowess at a good price. The U.K.-based Dialog Semiconductor plc has announced that it is acquiring Atmel for a total of $4.6 billion in cash and shares.

Dialog, an analog/mixed-signal power management chip firm, has been making advances toward the IoT universe through low-power wireless products like Bluetooth Smart and ZigBee. Dialog can benefit from Atmel's digital assets as well as the wireless connectivity stacks the company has acquired in recent years.

So Dialog can add its low-power expertise to these products and try to expand quickly into IoT markets such as smart home and wearable devices. An investor projection claims that Dialog and Atmel will have nearly $2.7 billion in combined annual revenue and complementary products for the connectivity and automotive segments.


Jalal Bagherli: The IoT ambitions

Atmel's long technology journey, begun 31 years ago, is coming to an end at the dawn of the IoT era. The chipmaker from San Jose is now passing its IoT baton to Dialog, which has ambitions of its own. Dialog, which has been a huge beneficiary of the popularity of mobile devices like smartphones and tablets, is now eyeing the next frontier in portable electronics: IoT and wearable devices.

From EEPROM to IoT

In 1984, Atmel was founded on George Perlegos’s groundbreaking work on electrically erasable programmable read-only memory or EEPROM. The Greek-born Perlegos was also part of the team at Intel that had developed the first cell-based flash memory chip.

Atmel began to develop memory chips for niche markets like cellular handsets. It quickly drew attention from OEMs—especially mobile phone makers like Ericsson, Motorola and Nokia—for designing chips that consumed less power than rival memory chips.


George Perlegos: The quiet man who built Atmel

Atmel also got noticed for spending nearly 50 percent of revenue on research and development in the mid-1990s, a time when the industry norm hovered around 35 percent. Atmel's low-key boss Perlegos gave a lot of independence to key engineers and helped develop a talent-centric company culture.

Atmel went public in 1991. Eventually, Perlegos took Atmel into the programmable logic and application-specific standard product (ASSP) markets while successfully targeting growing market niches. It worked quite well, for a while. In 2006, Steve Laub, a former Lattice Semiconductor executive, took the helm at Atmel after a boardroom battle.

Laub began a major strategic shift toward microcontrollers that subsequently laid the ground for Atmel’s big bet on IoT. Connectivity, especially wireless connectivity, became the new mantra at Atmel. And that wireless connectivity is now serving as the glue between Atmel and Dialog.

Also read:

4 Reasons why Atmel is Ready to Ride the IoT Wave

Atmel’s L21 MCU for IoT Tops Low Power Benchmark

Atmel Tightens Automotive Focus with Three New Cortex-M7 MCUs