
SPIE 2020 – ASML EUV and Inspection Update

by Scotten Jones on 04-20-2020 at 10:00 am


I couldn’t attend the SPIE Advanced Lithography Conference this year for personal reasons, but last week Mike Lercel of ASML was nice enough to walk me through the major ASML presentations from the conference.

Introduction
In late 2018, Samsung and TSMC introduced 7nm foundry logic processes with 5 to 7 EUV layers; throughout 2019 both companies ramped up those processes, and they are currently in high volume production. This year Samsung and TSMC are both ramping up 5nm foundry logic processes with 12 to 14 EUV layers, and Intel is working on its EUV based 7nm process expected next year. Intel's 7nm process should have densities comparable to Samsung and TSMC's 5nm processes.

Samsung also introduced its 1z DRAM process in late 2019; it was initially optical but then transitioned to a single EUV layer. In late March 2020 Samsung announced it had shipped one million DRAM modules built on EUV based DRAMs. Samsung's next generation DRAM process, the so-called 1c generation, is expected to have 4 EUV layers.

Clearly EUV is now accepted as the best solution for critical layers in leading edge logic and DRAM production.

Mike discussed four presentations with me:

  1. Current production is being done with 0.33NA systems and ASML presented a current status and roadmap for these systems.
  2. The EUV source is a key component of the systems and the details of a new improved source were described.
  3. The status of efforts to produce a 0.55NA system with improved resolution and productivity.
  4. ASML bought HMI and is continuing to develop HMI's multibeam e-beam wafer inspection technology.

0.33NA Systems
The promise of EUV is summarized in figure 1.

Figure 1. Why ASML customers want EUV.

By the end of 2019 ASML had shipped 53 systems and over 10 million wafers had been exposed in the field. Figure 2 presents the systems shipped and wafers exposed by quarter.

Figure 2. EUV systems shipped and wafers exposed.

One particularly impressive aspect of figure 2 is the background photo showing rows of EUV systems installed at an undisclosed customer site.

The current systems in the field are NXE:3400B machines, which have now demonstrated an average of >1,900 wafers per day (wpd) over one week, and >2,700 wpd on the best day.

Figure 3 illustrates that average availability is now reaching 85%, with the top 10% of systems at 90%. 90% has long been the goal for the 3400B systems, and ASML continues to work to tighten the 3400B availability distribution around 90%.

Figure 3. NXE:3400B availability trend.

ASML has now started to ship the NXE:3400C, the next generation system. The NXE:3400C features improved optics and mechanical handling, achieving an approximately 20% increase in throughput over the 3400B: 160 wafers per hour (wph) at a 20mJ/cm2 dose and 135 wph at a 30mJ/cm2 dose. The 3400B was always specified for throughput at a 20mJ/cm2 dose; the 30mJ/cm2 specification recognizes the need to increase dose as feature sizes shrink. Author's note: I believe that even for 7nm foundry logic, the current doses are higher than 30mJ/cm2.

The 3400C features several improvements to increase availability, with a target of 95%, the same availability achieved with DUV systems. The improvements are discussed further in the paper on the source.

In mid-2021 ASML expects to ship the NXE:3600D with 160 wph throughput at a 30mJ/cm2 dose, and longer term there are plans to introduce a system with >=220 wph at a 30mJ/cm2 dose. The keys to continued throughput improvement are higher source power (see the EUV source section) and faster mechanical handling.
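The quoted 3400C figures let one back out a crude split between dose-limited exposure time and dose-independent wafer handling overhead. The two-parameter model below is my own illustration, not anything ASML presented; it simply shows why reaching 160 wph at the higher 30mJ/cm2 dose requires more source power and/or faster handling.

```python
# Crude split of the quoted NXE:3400C throughput into dose-limited exposure
# time and dose-independent overhead (illustrative model, not ASML's).
t20 = 3600 / 160          # seconds/wafer at 20 mJ/cm2 -> 22.5 s
t30 = 3600 / 135          # seconds/wafer at 30 mJ/cm2 -> ~26.7 s

exposure_cost = (t30 - t20) / (30 - 20)   # seconds per wafer per mJ/cm2
overhead = t20 - 20 * exposure_cost       # handling/stage time per wafer

def wph(dose_mj_cm2: float) -> float:
    """Predicted wafers per hour at a given dose under this simple model."""
    return 3600 / (overhead + dose_mj_cm2 * exposure_cost)

print(f"overhead ~{overhead:.1f} s/wafer, exposure ~{exposure_cost:.2f} s per mJ/cm2")
print(f"check: {wph(20):.0f} wph @ 20 mJ/cm2, {wph(30):.0f} wph @ 30 mJ/cm2")
```

Under this fit roughly 14 seconds per wafer is dose-independent, so a 3600D hitting 160 wph at 30mJ/cm2 must cut exposure time (more source power) and/or overhead (faster stages).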

These throughput improvements are achieved while continually improving dose accuracy, overlay, CD uniformity and focus uniformity.

Figure 4. 0.33NA system roadmap.

EUV Source
The largest causes of availability loss on the 3400B system are the droplet generator and the collector mirror, see figure 5.

Figure 5. Causes of availability loss.

The 3400C system directly addresses these issues with automated refills of the tin droplet generator, a fast-swap droplet nozzle, and an easy access door for fast collector mirror swaps.

Figure 6. NXE:3400C availability improvements.

The lifetime of the collector mirror is also continuously improving even as the source power increases.

Figure 7. Collector lifetime.

The net result of these improvements is a target of 95% uptime for 3400C systems in the field.

Looking forward to continued improvements in throughput, ASML continues to drive up source power. Figure 8 illustrates the trend in source power. Note that the lag from research to high volume manufacturing is approximately 2 years, so we could possibly see a 500-watt source (the current source runs around 250 watts) around 2022.

Figure 8. Source power trend.

0.55NA System
The resolution of an exposure system is inversely proportional to NA. As critical dimensions shrink, 0.33NA EUV systems will require multi-patterning to print the smallest features. The goal for the high NA systems is to match the overlay and productivity of the 0.33NA systems while extending single pass lithography to smaller features.
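As a rough illustration of why higher NA extends single patterning, the Rayleigh criterion gives minimum half-pitch = k1 * wavelength / NA. The k1 value below is an assumed, roughly typical single-exposure figure, not a number from the presentation:

```python
# Rayleigh criterion: minimum half-pitch ~ k1 * wavelength / NA.
# k1 = 0.33 is an assumed, roughly typical value for single-exposure EUV.
EUV_WAVELENGTH_NM = 13.5

def min_half_pitch(na: float, k1: float = 0.33) -> float:
    """Approximate printable half-pitch in nm for a given numerical aperture."""
    return k1 * EUV_WAVELENGTH_NM / na

print(f"0.33 NA: ~{min_half_pitch(0.33):.1f} nm half-pitch")
print(f"0.55 NA: ~{min_half_pitch(0.55):.1f} nm half-pitch")
```

The jump from 0.33NA to 0.55NA thus buys roughly a 1.7x resolution improvement at the same wavelength and k1.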

The optical system for the 0.55NA systems is anamorphic, that is, the demagnification is 4x in one direction and 8x in the orthogonal direction. This results in a field size half that of a 4x/4x system with the same reticle size. In order to achieve the high productivity goals, the acceleration of the mask stage is 4x that of a 0.33NA system and the acceleration of the wafer stage is 2x that of a 0.33NA system.
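The field-size halving falls directly out of the magnifications. The reticle usable-area dimensions below are assumed standard values for illustration, not figures from the presentation:

```python
# Wafer-level field for a standard reticle with an assumed ~104 x 132 mm
# usable image area (typical 6-inch mask; assumption for illustration).
RETICLE_X_MM, RETICLE_Y_MM = 104.0, 132.0

def wafer_field(mag_x: float, mag_y: float) -> tuple[float, float]:
    """Wafer-level field size (mm) given the demagnification in each axis."""
    return RETICLE_X_MM / mag_x, RETICLE_Y_MM / mag_y

iso = wafer_field(4, 4)   # 0.33NA isomorphic optics -> (26.0, 33.0) mm
ana = wafer_field(4, 8)   # 0.55NA anamorphic optics -> (26.0, 16.5) mm

area_ratio = (ana[0] * ana[1]) / (iso[0] * iso[1])
print(f"anamorphic field area is {area_ratio:.0%} of the isomorphic field")
```

The halved field is one reason the 0.55NA stages must accelerate so much faster: twice as many fields must be stepped per wafer to keep throughput comparable.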

Figure 9. 0.55NA system anamorphic lens.

Improvements in transmission and the fast stages result in a throughput advantage over the 0.33NA systems. It should be noted that some of the high speed stage technology developed for the 0.55NA systems is being implemented on the 0.33NA systems to further improve throughput on those systems as well.

Figure 10. 0.55NA system throughput advantage.

Currently ASML is realizing the wafer and mask stage acceleration targets and finalizing the architecture. The main differences from the 0.33NA systems are the new optics and the faster stages, although once again the faster stage technology is also being applied to the 0.33NA systems.

The 0.55NA systems also require better alignment and leveling. ASML is currently testing specific configurations to determine particle generation at high acceleration and is starting to gather the first sensor data.

ASML is also building out the infrastructure for the 0.55NA systems at various facilities around the world.

  1. ASML Wilton Connecticut is responsible for the reticle stages.
  2. At ASML headquarters in Veldhoven in the Netherlands the systems will be assembled.
  3. Zeiss in Oberkochen, Germany is responsible for the optics fabrication.
  4. ASML San Diego California is responsible for the source.

Four systems are currently on order, with availability expected in the 2022/2023 time frame.

Multibeam E-beam
ASML acquired HMI and has continued to develop HMI's multibeam e-beam inspection technology. E-beam inspection has very high resolution but is very slow, taking approximately 2 hours to inspect 0.1% of a wafer.

The multibeam approach utilizes 9 beams in a 3 x 3 array all scanning simultaneously. Figure 11 illustrates the basic tool concept.

Figure 11. Multibeam EBeam system concept.

ASML has now demonstrated <2% crosstalk between the beams, and they are applying stage technology from their DUV exposure tools to improve multibeam system throughput. They are targeting a 5-6x improvement in throughput, and longer term they are working on a 25-beam system.
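Taking the quoted single-beam figure at face value, the beam-count scaling can be sketched as follows; the perfectly linear scaling is an idealization that ignores stage-move overhead and crosstalk mitigation:

```python
# Single beam: ~2 hours to inspect 0.1% of a wafer (quoted figure),
# so full coverage would take roughly 2,000 hours.
HOURS_SINGLE_FULL_WAFER = 2 / 0.001

def hours_full_wafer(n_beams: int) -> float:
    """Idealized full-wafer inspection time, assuming linear scaling with beams."""
    return HOURS_SINGLE_FULL_WAFER / n_beams

print(f"1 beam:   {hours_full_wafer(1):,.0f} h")
print(f"9 beams:  {hours_full_wafer(9):,.0f} h")   # current 3x3 array
print(f"25 beams: {hours_full_wafer(25):,.0f} h")  # longer-term target
```

Even with 25 beams, full-wafer coverage remains impractical, which is why the stated targets are relative throughput improvements rather than full-wafer inspection times.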

Conclusion
EUV is now the solution of choice for critical lithography for leading edge processes. ASML continues to show progress both in the current 0.33NA generation systems and the development of next generation 0.55NA systems.

Also Read:

SPIE 2020 – Applied Materials Material-Enabled Patterning

LithoVision – Economics in the 3D Era

IEDM 2019 – Imec Interviews


Design IP Revenue Grew 5.2% in 2019, Good News in Declining Semi Market

by Eric Esteve on 04-20-2020 at 6:00 am


Good news is good to hear, particularly these days! The behavior of the Design IP market in 2019 was extremely positive, at a time when the semiconductor market saw a decline worse than in 2009 (economic crisis) or 2001 (internet bubble collapse). Analyzing this 5.2% growth in detail will help us understand the future of the IP market, as we think this market is exiting a decade in which the smartphone explosion fueled IP growth, and entering a 2020 decade we expect to be data-centric. But let's have a look at the main trends shaking the Design IP market in 2019.

ARM is still a solid #1, with more than 40% market share… but ARM is staying flat year over year. If we dig, we find that ARM license revenues grew by 13.8% while royalty revenues declined by 6%. ARM attributes this royalty loss to the smartphone volume decline, and that makes sense considering ARM's penetration in CPU and GPU IP for wireless phones.

The question you may ask is about the impact of RISC-V on ARM revenues. The answer is that RISC-V adoption has certainly grown in 2019, but it's too early to measure the precise impact. This should become clearer, but it will probably take a couple of years before we can measure RISC-V penetration in terms of revenue. Clearly the change in the CPU IP business model is ongoing, and customers are happy to support this evolution!

Now, let’s have a look at the various IP vendors who have been successful, as well as IP categories growing share of the IP market.

Synopsys and Cadence, respectively #2 and #3, are growing by 13.8% and 22.9% respectively. Synopsys' highest growth comes from the Interface IP category (19.3%), with the other categories also contributing, but less, while Cadence's growth is shared between Interface IP (thanks to the Nusemi acquisition, but not only) and DSP IP (Tensilica).

Both EDA vendors are positioned as "one-stop-shop" IP suppliers, and both have built their IP offerings by acquiring small to mid-size vendors that were leaders in their segments. Synopsys started in the early 2000s, while Cadence's IP positioning began with the Denali acquisition in 2010.

This growth rate is a clear signal that their long-term IP strategy is successful. The "one-stop-shop" supplier strategy was only possible because both companies were large enough, with deep enough pockets, to make the multiple acquisitions needed to support this positioning.

We will see that the other winners in the IP market are, at the opposite end, companies that are extremely focused and able to be technical leaders in their segment or sub-segment.

There is a lesson to learn from the IP category evolution between 2018 and 2019. In processors we aggregate the CPU, DSP and GPU IP categories. Interface is one category, integrating protocol-based functions like USB, PCI Express, Ethernet, MIPI, SATA and DP, but also Die-to-Die (D2D) interfaces and memory controllers (DDRn, LPDDRn, HBM, GDDR).

We can see that the processor market share has moved from 53.5% to 51%, while the interface category has grown from 20.3% to 22.1%. To make sure this is a real trend and not an artefact, I checked the status in 2016: processors weighed in at 63.8% while Interface was only 16.9%.

The remaining two groups are "Other Physical", aggregating the "SRAM memory compiler", "other memory compiler", "physical library", "Analog & Mixed-Signal" and "Wireless Interface" categories, and "Other Digital". Both groups are relatively stable, which means they have grown at the same rate as the rest of the IP market, "Other Physical" a bit more, moving from 18% to 18.8%.

The Interface IP market is the big winner in terms of growth and market share, weighing in at $870 million in 2019. That's no surprise to IPnest, who will deliver the 12th version of the "Interface IP Survey & Forecast" in June 2020!

In 2009 the Interface IP category weighed in at $220 million; it has been multiplied by 4X in 10 years!
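That 4X over ten years corresponds to a compound annual growth rate of roughly 15%, as a quick check:

```python
# Implied CAGR of the Interface IP category: $220M (2009) -> $870M (2019).
start, end, years = 220.0, 870.0, 10
cagr = (end / start) ** (1 / years) - 1
print(f"Interface IP CAGR 2009-2019: {cagr:.1%}")  # ~14.7% per year
```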

If we think in terms of application or market segment, the evolution of interface IP illustrates the move from wireless phones to data-centric applications. In 2010, a large part of the interface IP business was generated by smartphone SoCs integrating protocols like USB, memory controllers, HDMI, DP, SATA and MIPI, but no PCIe or Ethernet.

In 2019, we think data-centric applications represent the largest share of the interface IP business: data centers, servers, wired networking and 4G/5G base stations. In all these applications you find advanced memory controllers (DDR4, HBM2, GDDR6), PCIe and Ethernet requiring high-speed SerDes (up to 56G if not 112G), and emerging die-to-die (D2D) solutions.

Smartphones will obviously continue to integrate USB, HDMI or LPDDRx memory controllers, but we expect the growth of the IP market to be generated by data-centric applications during the 2020 decade.

If we need an example to illustrate this trend, take GPU IP usage in smartphones. The two market leaders, Samsung and Apple, have moved away from GPU IP suppliers (ARM or IMG). Apple decided to develop its GPU internally, and Samsung closed a deal with AMD to use their GPU.

On another note, IPnest is thinking about creating a new sub-category within Interface IP, specifically for Die-to-Die (D2D) interconnects. D2D protocols are still in discussion, and can be based on massively parallel interfaces or high-speed SerDes (40G to 112G); chips are already in production from AMD and Intel. We can expect D2D adoption to generate good business in the mid-term, as chiplets can be a good workaround for Moore's law limitations…

To come back to the successful IP companies, as already mentioned, they can be ranked in two groups: large EDA companies offering a one-stop-shop IP portfolio, Cadence and Synopsys, and very focused vendors, leaders in one (or very few) products.

Let’s mention a few examples.
– Arteris IP with the Network-on-Chip (NoC), which posted 60% YoY growth, joining the Top 15 with revenues above $30 million in 2019.

– Silicon Creations, leader of the Analog Mixed-Signal (AMS) category in 2019 and 2018; the company is about ten years old and now #1 ahead of Synopsys.

– Alphawave, created in 2017 by serial entrepreneur Tony Pialis, who was part of the Snowbush founding team and created Vsemiconductor, acquired by Intel. Alphawave enjoyed $25 million in revenues in 2019, based on advanced SerDes, after just two years!

– SST, offering NVM IP, is the undisputed leader of its category, with revenues passing $100 million, more than two times the #2's revenues.

What is the secret of these IP vendors? Quality of design (and product) is certainly #1; being able to offer an innovative and advanced solution comes right after. I will come back to these success stories, as each of them is like a novel: you want to turn the page!

FYI, IPnest will deliver in June 2020 the “Interface IP Survey 2015-2019 – Forecast 2020-2024”, as every year since 2009.

Eric Esteve from IPnest

To buy this report, or just to discuss IP, contact Eric Esteve
(eric.esteve@ip-nest.com)

Also Read:

Chiplet: Are You Ready For Next Semiconductor Revolution?

IPnest Forecast Interface IP Category Growth to $2.5B in 2025

Design IP Sales Grew 16.7% in 2020, Best Growth Rate Ever!


Short vs Long Term Covid19 Impact

by Robert Maire on 04-19-2020 at 10:00 am


-Short term Covid19 impact is primarily logistics related
-Longer term impact is more systemic/demand driven
-Impact will wind through supply chain over several qtrs
-Other issues, such as trade, remain an overhang

Short term versus long term in the semiconductor industry
The stock declines over the last months seem to indicate the semiconductor industry flying off a cliff without leaving any skid marks behind. Reality may not be quite as bad as for other industries such as airlines, restaurants and hotels, as the semiconductor industry is by nature a longer term, slower moving, inherently cyclical animal.

The food chain in semiconductors is fairly long as it can take months to produce chips and the entire life cycle from design to production is usually well over a year. There is a lot of inventory and buffer in the supply chain and unlike the food industry, nothing has a short shelf life.

Airline seats, hotel rooms and food all have a very definitive shelf life which goes to zero value on expiration.

The semiconductor industry doesn’t instantly react to short term changes in demand as those near term changes are absorbed by the supply chain buffer. There is an added “shock absorber” of pricing, which rises and falls depending upon demand and inventory levels.

The semiconductor equipment industry is even more long term in nature, than the chips themselves, as new fabs and fab expansions can take years to plan and even just rolling in one piece of equipment can take several quarters from order to install.

This suggests the semiconductor industry as a whole has the momentum of a very large oil tanker that takes a very long time to either accelerate or stop.

Near term Covid19 issues are primarily logistics
The primary Covid19 impact to the semiconductor industry in Q1 2020 is due to logistical issues of moving people and materials around.

In general, the fabs kept operating for the most part. Fabs tightened down on access by outside persons to the fabs for fear of infection. Tool shipment and installs were slowed due to transport and access issues.

Tool manufacture was impacted by supply chain issues (moving sub components around) as well as people.

The semiconductor manufacturing base relies on free, easy and quick movement of materials and people around the globe and was obviously impacted when that slowed.

To be very clear, we have not heard of any major change in fab plans, expansions, upgrades or technology advancement that has been impacted in a big way so far. It's not like a foundry is going to cancel its next gen process or significantly delay it.

There have been reports of Samsung delaying its 3nm from 2021 to 2022 and blaming Covid19. While it's clear that Covid19 is causing one to two quarter delays in equipment installs, and EUV tools were cited as one issue, we think that Samsung has historically been overly optimistic in its projections of beating TSMC to the next gen. Samsung has missed most of its prior projections of technology readiness.

When all is said and done we expect a one to two quarter overall delay or “hiccup” in the march of Moore’s law, caused by primarily logistics issues related to Covid19.

Longer term, demand driven issues, harder to determine

We think the bigger variable, and one that is harder to project, is demand driven issues caused by Covid19.

One of the reason’s why this is difficult is that we are still at the very beginning of economic impact with wildly varying estimates of economic damage and impact.

In general, semiconductor laden devices are “less essential” goods than food, shelter, transport & energy (though some may argue they need their smart phone more than food…).

While there may be a near term spike in demand for laptops and servers due to remote work and learning, we are more concerned about reduced demand for TVs, cars, smart phones, 5G etc; as those purchases tend to be more “marginal” and vulnerable to high unemployment or business cutback in spending.

Slowing of semiconductor demand will only be felt over the next several quarters and not felt in Q1 as we haven’t yet seen significant demand driven issues and we have the above described supply chain buffer to delay the impact.

We remain very concerned about the precarious balance of supply and demand in the commodity like memory markets and would watch those with extreme interest. We have already seen some warning signs in memory pricing.

We also remain concerned about the iPhone 12 launch in the fall, which has always been timed for holiday purchases. Getting pushed out by a quarter would essentially miss the holiday window of sales.

We would look to the 2008/2009 financial crisis as a bit of a guide for potential impact on semiconductors, which was significant.

Except for the recent, self-inflicted, memory-oversupply-driven down cycle, the semiconductor industry has been in a positive overall trend since 2008/2009. If we hadn't overbuilt memory supply, we would likely still be in the longest up cycle ever.

This most recent down cycle lasted about a year and a half and most prior cycles lasted two years or more.

While the short term, logistics driven impact may only last one or two quarters at most, the longer term, demand/economic driven impact will likely last one to two years.

Right now the depth of the impact cannot be determined, but it's safe to say that the long term impact will last at least as long as the overall economic impact.

Samsung, Intel & TSMC still spending for now
We continue to hear positive things about spend levels. In fact it sounds like Samsung may be planning on ramping spending in a similar fashion as they did in the prior upturn.

We have also heard that Intel continues to spend to get capacity it has been short of as well as take advantage of near term spikes in demand.
TSMC also continues its roll out of new technology and is remaining on track with prior plans for the most part.

The bottom line is that so far, no major player in the semiconductor industry has taken their foot off the gas (for now).

Between Apple, AMD, Intel, Qualcomm & Huawei among others, TSMC seems to have more than enough demand to keep it busy. Our concern here is that TSMC has broad exposure across the consumer industry and obviously more exposure to 5G roll out which could be impacted.

Samsung is obviously very exposed to memory pricing but in the past has spent up and until memory prices collapsed in their face, then put the brakes on instantly. Samsung behaves in a much more binary way as it seems to be either full on the gas or full on the brakes with not a lot in between.

Intel seems to be a more consistent spender, and if anything, likely too conservative as evidenced by delays and shortages of parts. Of the big three, we think Intel is least at risk to change their capital spending plans and perhaps more at risk for an up tick in spend.

Early Q1 signals mixed- ASML & ACLS
Early signals coming out of the equipment industry are mixed. On one hand we have heard that ASML will miss expectations due to logistics issues of shipping and installing tools which is totally expected and obviously beyond their control. On the other hand we have just heard this morning that Axcelis will exceed the high end of guidance with a great quarter despite Covid19. Obviously shipping and installing scanners is much different from ion implanters and the customer base and locations are significantly different between the two companies.

We think that impact on tool companies will vary depending upon customer locations and complications associated with tools. We think that Axcelis is one of the few companies that will see relatively no impact. Most will see some sort of impact.

In general, materials suppliers remain a defensive bet as they will likely have the shortest term impact related only to any fab slow downs which are few.

Those companies with the widest and longest supply chains that are most exposed to logistics will see the most impact, especially those with more Asia based manufacturing.

The Stocks….Beware the bounce…..
The stocks have bounced off a sharp decline as worst case fears seem to have abated. Initial reports are coming in better than expected and we expect will continue to come in better than worst fears.

We also expect that guidance for Q2 will probably also be better than expected as much of the business pushed out of Q1 will wind up in Q2 so it will make up for any weakness and potentially look better than originally expected for many companies.

As we have pointed out here, we think near term issues are primarily logistics based and, by their nature, short term. As such, the stocks will discount these issues as one-time delays in an otherwise intact model. When we add the likely positive Q2 guide, the stocks should see a short term "pop".

We are more concerned about business one to two or more quarters out, driven by demand issues.

So while we have experienced a near term “dead cat bounce” off a low bottom we are concerned that the stocks could drift down in the longer run after having a “relief rally” when investors realize that short term impact is just that.

We also remain very concerned about non-Covid19 issues, such as Huawei/China trade, which have all but been forgotten by investors. The current administration could look to China as a scapegoat for Covid19 and try to punish China through Huawei or some other trade impacting mechanism.

In short we may try to take advantage of a short term, quarter driven pop in the stocks but then take some money off the table as the future looks a bit more uncertain post the pop of the quarter.


Wave Computing and MIPS Wave Goodbye

by Mike Gianfagna on 04-19-2020 at 8:00 am


Word on the virtual street is that Wave Computing is closing down. The company has reportedly let all employees go and will file for Chapter 11. As one of the many promising new companies in the field of AI, Wave Computing was founded in 2008 with the mission “to revolutionize deep learning with real-time AI solutions that scale from the edge to the datacenter.”  Classified as a late stage venture, the company was founded by Dado Banatao and Pete Foley. Mr. Banatao serves as chairman of Wave Computing and is also a managing partner at Tallwood Venture Capital. Sanjai Kohli is the current CEO. Mr Kohli took the helm at Wave Computing in September 2019 from Art Swift, who held the position for only four months. The story was reported in EE Times here.

The story speculated that there were performance issues with Wave’s AI dataflow processor. Did that contribute to their early exit?  At present, the reasons for their exit are speculative. Wave Computing offered a broad product line. Billed as a “scalable, unified, AI platform,” Wave Computing utilized MIPS processors to offer dataflow processing technology that scaled “from the edge to the datacenter.”

To make things more interesting, MIPS Technologies is owned by Wave Computing, who acquired it from Tallwood MIPS Inc., a company indirectly owned by Tallwood Venture Capital. What now happens to MIPS?

In December of 2018 Wave announced the MIPS Open Initiative to expand adoption of MIPS via open (free) licensing, only to close it one year later:

“Wave Computing, Inc. and its subsidiaries (‘Wave’) regretfully announce the closing of the MIPS Open Initiative (‘MIPS Open’), and hereby give Notice of the same effective November 14, 2019 (‘Effective Date’),” the company’s brief email to registered MIPS Open users reads. “Effective immediately, Wave will no longer be offering free downloads of MIPS Open components, including the MIPS architecture, cores, tools, IDE, simulators, FPGA packages, and/or any software code or computer hardware related thereto, licensed under any of the (i) MIPS Open Architecture License Agreement (ver. 1.0), (ii) MIPS Open Core License Agreement ver. 1.0 For the microAptiv UC Core, (iii) MIPS Open Core License Agreement ver. 1.0 For the microAptiv UP Core, and/or (iv) MIPS Open FPGA License Agreement ver. 1.0 (collectively, ‘MIPS Open Components’. In addition, all MIPS Open accounts will be closed as of the Effective Date.”

Was Wave trying to do too much at once? Is a narrower focus a better strategy in the emerging AI market? Again, speculation that will likely be brought into focus in the coming days and weeks. Did the current pandemic play a role? I believe those stories are yet to be told; it is likely too early for that.

The AI and deep learning market is exploding with many new companies offering novel approaches. Any new market typically experiences this growth, followed by a consolidation phase. Does the news from Wave Computing signal we are already entering the consolidation phase? Time will tell.

About Wave Computing
Wave Computing, Inc. is revolutionizing artificial intelligence (AI) with its dataflow-based solutions. The company’s vision is to bring deep learning to customers’ data wherever it may be—from the datacenter to the edge—helping accelerate time-to-insight. Wave Computing is powering the next generation of AI by combining its dataflow architecture with its MIPS embedded RISC multithreaded CPU cores and IP. More information about Wave Computing can be found at https://wavecomp.ai.


TSMC COVID-19 and Double Digit Growth in 2020

by Daniel Nenni on 04-17-2020 at 10:00 am



TSMC has had an incredible run since its founding in 1987 which spans most of my 36 year semiconductor career. Even in these troubled times TSMC is a shining bellwether with double digit growth expectations while the semiconductor industry will be flat or slightly down. Let’s take a close look at the TSMC Q1 2020 conference call and see what else we can learn.

“On March 18, we found one employee who tested positive for COVID-19 and immediately began receiving appropriate care. Today, this employee has recovered, is out of the hospital and is staying at home for additional quarantine. We were able to suitably trace all the other individuals who were in contact. The neighboring employees have all tested negative, while all other employees who were in contact has entered and completed the 14-day self-quarantine and now back to work. As a result of the strict preventive measures taken by TSMC, we have not seen any disruption of our fab operations so far.”

This does not surprise me at all. Taiwan learned a very important lesson during the SARS outbreak in 2002. I remember traveling during this time and going through extra medical checks at the TPE airport. Taiwan installed medical imaging equipment that took our temperatures after we got off the planes. It is easy to remember since I had to remove my hat and got to see how big my brain is. It really is big, hat size XL.

One thing you can say about TSMC is that they have built their business on experience and humility, absolutely.

Dr. C.C. Wei:

“Looking ahead to the second half of this year. Due to the market uncertainty, we adopt a more conservative view as we expect COVID-19 to continue to bring some level of disruption to the end market demand. For the whole year of 2020, we now forecast the overall semiconductor market, excluding memory growth, to be flattish to slightly decline, while foundry industry growth is expected to be high single-digit to low-teens percentage.”

In my opinion we will see a hockey-stick-like semiconductor recovery in Q4 2020. Never before have we seen the entire world united in a common cause. Never before have we seen such worldwide compassion and cooperation. COVID-19 really is a globally uniting event and it could not have come at a better time in my opinion. The world will be a much safer and more productive place in 2021 and beyond, that is my heartfelt belief.

“Now let me talk about the progress and development of 5G and HPC. With the recent disruption from COVID-19, we now expect global smartphone units to decline high single digit year-over-year in 2020. However, 5G network deployment continues and OEMs continue to prepare to launch 5G phones. We maintain our forecast for mid-teens penetration rate for 5G smartphone of the total smartphone market in 2020.”

It is understandable that the edge devices will take a pause this year, but remember we are in a data-driven society. With the entire world sheltering in place, the amount of data generated is increasing exponentially. SemiWiki traffic alone is up 30%. Our webinar series is breaking registration and attendance records. The worldwide communications infrastructure is being upgraded like never before, and that means semiconductor strength.

There has been a lot of fake news of late surrounding the TSMC process technology so let’s get this straight from the horse’s mouth (American idiom for the truth):

“Now let me talk about the ramp-up of N7, N7+ and the status of N6. In its third year of ramp, N7 continue to see very strong demand across a wide spectrum of products for mobile, HPC, IoT and automotive applications. Our N7+ is entering its second year of ramp using EUV lithography technology while paving the way for N6. Our N6 provides a clear migration path for next-wave N7 products, as the design rules are fully compatible with N7.”

“N6 has already entered its production and is on track for volume production before the end of this year. N6 will have one more EUV layer than N7+ and will further extend our 7-nanometer family well into the future. We expect our 7-nanometer family to continue to grow in its third year and reaffirm it will contribute more than 30% of our wafer revenue in 2020.”

“Now let me talk about our N5 status. N5 is already in volume production with good yield. Our N5 technology is a full node stride from our N7, with 80% logic density gain and about 20% speed gain compared with N7. N5 will adopt EUV extensively. We expect a very fast and smooth ramp of N5 in the second half of this year driven by both mobile and HPC applications. We’ll reiterate 5-nanometer will contribute about 10% of our wafer revenue in 2020.”

“N5 is the foundry industry’s most advanced solution with best PPA. We observed a higher number of tapeouts, as compared with N7 at the same period of time. We will offer continuous enhancements to further improve the performance, power and density of our 5-nanometer technology solution into the future as well. Thus, we are confident that 5-nanometer will be another large and long-lasting node for TSMC.”

“Finally, I will talk about our N3 status. Our N3 technology development is on track, with risk production scheduled in 2021 and target volume production in second half of 2022. We have carefully evaluated all the different technology options for our N3 technology, and our decision is to continue to use FinFET transistor structure to deliver the best technology maturity, performance and costs.”

“Our N3 technology will be another full node stride from our N5, with about a 70% density gain, 10% to 15% speed gain and 25% to 30% power improvement as compared with N5. Our 3-nanometer technology will be the most advanced foundry technology in both PPA and transistor technology when it is introduced and will further extend our leadership position well into the future.”

If you have questions about this please post in the comments section and let the SemiWiki community of experts answer. Just say no to fake news….


Lithography Resolution Limits – Arrayed Features

Lithography Resolution Limits – Arrayed Features
by Fred Chen on 04-17-2020 at 6:00 am

Lithography Resolution Limits Arrayed Features

State-of-the-art chips will always include some portions which are memory arrays, and these also happen to be the densest portions of the chip. Arrayed features are the main targets for lithography evaluation, as the feature pitch is well-defined and directly linked to the cost scaling (more features per wafer) from generation to generation. To that end, this article (the second in the series on lithography resolution limits) focuses on the lithography resolution limits of arrayed feature patterning.

Minimum pitch resolution
A lithography tool is specified by the wavelength it uses, e.g., 193 nm for ArF, 13.5 nm for EUV, as well as its numerical aperture, i.e., the power of its final optical element (lens for ArF, KrF, i-line; mirror for EUV). The formula for the ideal minimum pitch between two lines in an array is

ideal minimum pitch = wavelength / (2 × numerical aperture)     (1)

This result is derived from the grating equation [1]. Basically, the minimum pitch is realized by the interference of two beams which form the maximum angles with the optical axis, whose sines differ by wavelength/pitch. The difference of sines is at most equal to twice the numerical aperture – this gives the previously stated ideal minimum pitch. Realistically, though, the difference of sines must deduct the finite angular tolerance of the beams. The actual minimum pitch should therefore be

actual minimum pitch = wavelength / (2 × numerical aperture − sin Δθ)     (2)

where sin Δθ represents the deducted angular tolerance.

Hence, while for a wavelength of 193 nm, numerical aperture of 1.35, we ideally expect a minimum pitch of 71.5 nm, in reality it is 76 nm. Likewise for the EUV tool with nominal wavelength of 13.5 nm, numerical aperture of 0.33, the minimum pitch was recently demonstrated to be 24 nm [2], not the ideal 20.45 nm.
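These numbers are easy to check with equations (1) and (2). In the sketch below, the sin Δθ tolerance values are back-calculated from the quoted 76 nm and 24 nm pitches; they are my fit, not published tool specifications:

```python
# Minimum-pitch estimates from the two formulas above. The sin(tolerance)
# values are back-calculated from the quoted 76 nm and 24 nm results, not
# published tool specifications.

def ideal_min_pitch(wavelength_nm, na):
    """Two-beam interference limit: wavelength / (2 * NA)."""
    return wavelength_nm / (2 * na)

def actual_min_pitch(wavelength_nm, na, sin_tolerance):
    """Deduct a finite angular tolerance (as a difference of sines)."""
    return wavelength_nm / (2 * na - sin_tolerance)

# ArF immersion: 193 nm wavelength, NA = 1.35
print(round(ideal_min_pitch(193, 1.35), 1))          # 71.5 nm ideal
print(round(actual_min_pitch(193, 1.35, 0.16), 1))   # 76.0 nm realistic

# EUV: 13.5 nm wavelength, NA = 0.33
print(round(ideal_min_pitch(13.5, 0.33), 2))           # 20.45 nm ideal
print(round(actual_min_pitch(13.5, 0.33, 0.0975), 1))  # 24.0 nm demonstrated
```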

For two-dimensional arrays (square arrays, rectangular arrays, triangular arrays), the patterns can be generated by crossed line arrays, with best results achieved by using an attenuated (~5%) phase-shifting effect by the mask [3], so the same minimum pitch resolution limit, given by equation (2), applies as for lines.

In the previous article [4], it was noted that for a pair of features, the Rayleigh criterion (0.61 wavelength/numerical aperture) is used to determine the resolution. With arrayed features, although the pitch is already predetermined, the Rayleigh criterion applies if the array pitch is much wider than the distance set by that criterion; otherwise, it is the pitch (specifically, the half-pitch) that decides the resolution.

Self-aligned patterning: the ideal opportunity for arrayed features
When the minimum pitch needs to go below 0.5 wavelength/numerical aperture, a single exposure is not sufficient to pattern the array. A second exposure, such as the previously described LELE (litho-etch-litho-etch) approach [4], can achieve half the pitch, but alignment between the two exposures cannot be guaranteed. Self-aligned patterning approaches would be better. The most commonly practiced approach is Self-Aligned Double Patterning (SADP). Its earliest comprehensive description is given in US Patent 5328810, assigned to Micron after being filed in 1992 [5].

Figure 1 shows the first steps of basic SADP.

Figure 1. Basic SADP flow following standard lithography.

In this drawing, it is indicated clearly that the top of the spacer is eroded during the process. Note also that the cost-reducing preference is to use photoresist as the starting feature, rather than another etched material.

Figure 2 shows the completion of the SADP process.

Figure 2. Completion of SADP process.

The new feature pitch on the substrate is now half the original photoresist feature pitch. Hence, this allows a doubling of line density without an additional exposure. Sharp eyes may note that the distance between features in the center is a little wider than the distance between features where the photoresist was originally located. This effect is known as “pitch walking” [6]. It can arise from the original photoresist pattern, in combination with the spacer thickness and the amount of spacer erosion. To manage pitch walking, the critical dimension (CD) of the starting photoresist feature must be tuned jointly with the spacer thickness and erosion rate. Alternatively, a gapfill material may be deposited after the spacer film is deposited [5,7].

This protects the exposed spacer side from erosion, but leaves extra spacer material to be removed later along with the gapfill material, as well as the starting core feature. The approach can be extended, however, to more than doubling feature density. For example, Samsung’s US Patent 7842601 [8] describes a double spacer approach to reducing line pitch to one-third of its original value (Figure 3). This allows a 78 nm pitch (~22nm foundry node design rule) to be immediately reduced to 26 nm (<5nm foundry node design rule) in a single exposure, without using EUV.
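The pitch-walking geometry described above can be sketched numerically. This is a simplified model (spacer erosion is ignored): for core lines of CD c at pitch P with spacers of thickness s on both sides, the gaps after core removal alternate between c (where the core was) and P - c - 2s (the original space minus two spacers):

```python
# Simplified pitch-walking model for SADP on a line array (spacer erosion
# ignored). Core lines of CD c at pitch P get spacers of thickness s on
# both sides; after core removal the gaps alternate between c and
# P - c - 2*s.

def sadp_gaps(pitch, core_cd, spacer):
    gap_at_core = core_cd                       # gap left where the core was
    gap_at_space = pitch - core_cd - 2 * spacer # gap in the original space
    walk = gap_at_space - gap_at_core           # zero means no pitch walking
    return gap_at_core, gap_at_space, walk

# Balanced case: core CD = pitch/2 - spacer gives equal gaps
print(sadp_gaps(80, 28, 12))  # (28, 28, 0)

# Core printed 2 nm too wide: the gaps walk apart
print(sadp_gaps(80, 30, 12))  # (30, 26, -4)
```

The balanced condition, core CD = pitch/2 − spacer thickness, is exactly why the starting CD must be tuned jointly with the spacer process.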

Figure 3. Self-aligned triple patterning (SATP) by the use of two spacers.

Two-dimensional self-aligned patterning

When the SADP process is applied to two-dimensional patterns, the possibilities expand. For example, in Figure 4, features on a square lattice are doubled in density.

Figure 4. Two-dimensional SADP on a square lattice doubles feature density.

The central added feature is expected to round out like the original corner features of the lattice cell. Going even further, a triangular or hexagonal lattice allows feature density to be tripled.

Figure 5. Two-dimensional SATP on a triangular lattice triples feature density.

The latter approach has already been used in Samsung’s 20nm DRAM [9] for the honeycomb capacitor patterning.

Double SADP/SATP in 2D?
By repeating the SADP/SATP processes described above, the arrayed feature density increases in leaps and bounds. Double SADP quadruples density for line arrays and square lattices; hence, it is also referred to as self-aligned quadruple patterning (SAQP). Double SATP in two dimensions nonuples (multiplies by 9x) density for triangular lattices.
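The multipliers quoted above amount to simple bookkeeping, sketched here with a hypothetical helper that ignores real process limits:

```python
# Bookkeeping for the density multipliers quoted above: each SADP pass
# doubles feature density (lines or square lattices), and each 2D SATP
# pass on a triangular lattice triples it. Process limits are ignored.

def density_multiplier(sadp_passes=0, satp_passes=0):
    return 2 ** sadp_passes * 3 ** satp_passes

print(density_multiplier(sadp_passes=1))  # SADP: 2x
print(density_multiplier(sadp_passes=2))  # double SADP (SAQP): 4x
print(density_multiplier(satp_passes=2))  # double 2D SATP: 9x
```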

The feasibility of double SADP is tied to its process complexity, which is higher than that of single SADP; however, several consecutive etch steps that can be executed at the same etch station are relatively easy to manage. The etch rates of three materials (core, spacer, substrate) must be considered simultaneously in any case. On the other hand, a new EUV resist process flow may involve added deposition and treatment steps (Figure 6). In particular, the new underlayer material being etched could require its own station, as it may be organic [10] or metal-based [11]. The underlayer benefit is expected to come from the effects of secondary electrons released by EUV light [12].

Figure 6. EUV resist process steps can still be of comparable complexity to double 2D SADP/SATP.

It is quite clear that self-aligned spacer patterning is a very powerful patterning technique for arrayed features. In upcoming articles, the use of self-aligned patterning for specific cases involving complicated array layouts will be examined.

References
[1] https://en.wikipedia.org/wiki/Diffraction_grating

[2] https://www.imec-int.com/en/articles/imec-demonstrates-24nm-pitch-lines-with-single-exposure-euv-lithography-on-asml-s-nxe-3400b-scanner

[3] A. K-K. Wong, Optical Imaging in Projection Microlithography (SPIE, 2005), p. 87.

[4] https://www.linkedin.com/pulse/lithography-resolution-limits-paired-features-frederick-chen/

[5] T. A. Lowrey, R. W. Chance, D. A. Cathey, US Patent 5328810, assigned to Micron, filed Nov. 25, 1992.

[6] https://www.semiconkorea.org/en/programs/STS/S4.-Plasma-Science-and-Etching-Technology/SAQP-Pitch-Walking-Improvement-Path-Finding-by-Simulation-

[7] A. E. Carlson, US Patent 8101481, assigned to the Regents of the University of California, filed Feb. 25, 2008.

[8] J-Y. Lee, J-S. Park, S-G. Woo, US Patent 7842601, assigned to Samsung, filed Apr. 20, 2006.

[9] J. M. Park et al., “20nm DRAM: A new beginning of another revolution,” IEDM 2015.

[10] J. Li et al., “A Chemical Underlayer Approach to Mitigate Shot Noise in EUV Contact Hole Patterning,” Proc. SPIE 9051, 905117 (2014).

[11] A. De Silva et al., “High-Z metal-based underlayer to improve EUV stochastics,” Proc. SPIE 11147, 111470W (2019).

[12] https://spie.org/news/6518-successes-and-frontiers-in-extreme-uv-patterning?SSO=1


Cadence – Defining a Roadmap to the Future

Cadence – Defining a Roadmap to the Future
by Mike Gianfagna on 04-16-2020 at 10:00 am

Screen Shot 2020 04 08 at 7.46.46 PM

Cadence recently published a position paper that details a set of enabling technologies that will be needed for product design going forward. Entitled Intelligent System Design, the piece describes the changing landscape of system design and the requirements for success. Cadence has built a branded approach to address these needs called, appropriately, the Intelligent System Design™ strategy. There is a short discussion of Cadence’s capabilities at the end, but most of the piece is a thoughtful overview of what is changing in system design and what needs to be done to facilitate those changes.

I have a few comments and observations about what Cadence is up to, but I’ll hold that until later. The vision conveyed by this position paper is far bigger than any specific product.

In my view, Intelligent System Design hits home in meaningful and relevant ways on many fronts. The piece begins by setting the stage for the current wave of innovation. To effectively compete, system companies are designing their own chips and semiconductor companies are delivering software stacks along with their silicon to enable competitive differentiation.

Cadence decomposes these trends in a hierarchical way, examining the requirements for design excellence, system innovation and pervasive intelligence. You really need to read the paper to get the full impact; it’s only five pages long, by the way. To whet your appetite, I’ll provide a quick summary of each of the three areas treated.

Design Excellence: The bread and butter of EDA was, for a long time, logic design, logic synthesis, place and route, timing closure, design rule check, test generation and tapeout. While those items are still necessary, there is now a lot more to deal with. Process variation, IP reuse, power and signal integrity, software interactions and complex system validation are just some of the new requirements that must all be co-optimized to achieve a successful tapeout. Cloud computing factors into the discussion as well.

System Innovation: Co-optimization comes into play here as well. A successful SoC must be analyzed and optimized in the context of the system for which it is intended. The PCB and the complex, potentially 2.5D or 3D, package must be co-analyzed and optimized along with the chip itself. There are plenty of signal integrity challenges to address here. Software is also part of system innovation. To make it more interesting, design teams must develop the software for a new SoC before the SoC exists.

Pervasive Intelligence: Deep learning is finding its way into all kinds of everyday products. The challenges of designing in this technology may not be as well known. Power and latency constraints require many of these new technologies to be resident more locally, at the edge of the cloud if you will, rather than in the cloud. Doing this in a cost-effective way is very challenging. It turns out EDA tools and design flows can be improved to make deep learning design easier by using deep learning in the design process itself. Something of a recursive process.

The Cadence strategy: At the end of the paper, Cadence briefly discusses their strategy to address the three areas mentioned above. You can certainly learn a lot more about their approach by visiting the Cadence website. There’s lots of new and fresh content there.

In closing, I want to touch briefly on the third item, pervasive intelligence. This is an area where I believe Cadence is truly practicing what they preach. I recently posted a conversation with Cadence’s Paul Cunningham on machine learning at Cadence. In it, Paul detailed the Cadence vision of how machine learning can be used to both improve EDA algorithms and leverage learning from prior runs to make the flow better for future runs. Soon after that discussion, Cadence issued a press release about their new digital full flow. That flow uses machine learning in the ways Paul described. Having a good strategy is important. Actually using it is also important, but often difficult.

I think Cadence expresses some great visions in this new position paper, visions that can be implemented thanks to the technology available today. I’ll keep watching as this unfolds.


Breker Tips a Hat to Formal Graphs in PSS Security Verification

Breker Tips a Hat to Formal Graphs in PSS Security Verification
by Bernard Murphy on 04-16-2020 at 6:00 am

Breker security tables

It might seem paradoxical that simulation (or equivalent dynamic methods) might be one of the best ways to run security checks. Checking security is a problem where you need to find rare corners that a hacker might exploit. In dynamic verification, no matter how much we test, we know we’re not going to cover all corners, so how can it possibly be useful? Wouldn’t formal methods be much better?

Dave Kelf (CMO for Breker) makes a point that security verification is inherently a negative verification problem. Unlike positive testing where you’re checking that a specific scenario works as expected, in security verification you need to check all possibilities, as you ideally would in negative testing. For example, in a positive test, we would check the key can be read through the crypto block. In security, we have to ask, “is there any other way that this can be done?”. The strength of formal is that it can analyze that entire state space and find paths you had not considered.

But while formal is ideal for completeness, it’s limited in scope – by the size of the state space and by the degree to which you have to abstract and decompose complex problems, leaving you to wonder what you might have overlooked in all that complexity. Formal also can’t work with software, a real problem for embedded system validation. Conversely, simulation doesn’t care – you can run whatever size system you have with whatever mixed levels you need.

Nevertheless, the completeness of the graph approach is appealing. Breker has developed a way to build a conceptually similar graph at the system level, not automatically from RTL as a formal tool would, but semi-manually / semi-automatically from a series of tables describing key aspects of the SoC system architecture.

Then PSS becomes a pretty logical bridge to testing complete negative intent on a high-level graph rather than conventional formal gate-level paths. Breker has an app for that. In the security TrekApp, you can define a security policy through tables covering master/slave connectivity, security/privilege options and memory address zones.

An advantage of starting with these tables is that it’s easy to see what might be missing – trivially, that you missed a master/slave option, or forgot to specify whether an access/privilege option on the master combined with an access/privilege option on the slave is a valid (permitted) combination or not.

Going one level deeper, you can also define, in another table, various memory regions with corresponding secure and privilege accessibilities. These definitions are essential for later dynamic tests to check that it isn’t possible, through some unapparent sequence of actions (again a negative test), to read from or write into a secure/privileged memory region from a transaction not allowed to perform those actions.

Think for example of an ARM TrustZone environment in which one or more masters may at times be operating in a secure mode with a certain level of privilege, or in a non-secure mode. Meanwhile slaves – some secure with low privileges, some secure with higher privileges – are communicating with masters and trying to read from or write to regions in memory, each of which also has assorted privilege and secure settings. That’s a lot of combinations to worry about. Are you sure your tests are really going to cover them all?
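To see why a tabular policy makes gaps visible, here is a toy illustration; the names and table format are invented for this sketch, not Breker’s actual TrekApp input. Enumerate every master/slave pairing and flag the ones the policy never classifies as permitted or denied:

```python
# Toy illustration (names and table format invented, not Breker's actual
# TrekApp input): enumerate every master/slave pairing and flag the
# combinations the policy never classifies as permitted or denied.

from itertools import product

masters = ["cpu_secure_priv", "cpu_nonsecure", "dma"]
slaves = ["key_store_secure", "sram_open"]

# Partial policy: True = permitted, False = denied
policy = {
    ("cpu_secure_priv", "key_store_secure"): True,
    ("cpu_nonsecure", "key_store_secure"): False,
    ("cpu_secure_priv", "sram_open"): True,
    ("cpu_nonsecure", "sram_open"): True,
}

missing = [(m, s) for m, s in product(masters, slaves) if (m, s) not in policy]
print(missing)  # the DMA rows were never specified - a potential hole
```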

The Breker security TrekApp will map the master/slave, secure/privilege and memory region tables into the Trek internal format, then build a graph – in effect a system-level state graph – which can generate tests for all possible transactions across that graph. Their test suite synthesis will then map that to realized sequences of tests, which you can then plug into your UVM testbench or software driven SoC test. A comprehensive sequence of tests that can cover all paths through the graph, including those you might not consider but a hacker may attempt.

That looks like a pretty valuable capability to me. You can learn more about the security TrekApp HERE.

Also Read

Verification, RISC-V and Extensibility

Build More and Better Tests Faster

Taking the Pain out of UVM


The Story of Ultra-WideBand – Part 5: Low power is gold

The Story of Ultra-WideBand – Part 5: Low power is gold
by Frederic Nabki & Dominic Deslandes on 04-15-2020 at 10:00 am

Wide Band Series SemiWiki

How ultra-wideband done right can do more with less energy

In the previous part, we discussed how the time-frequency duality can be used to reduce latency. When you compress a wireless transmission in time, you reduce the time it takes to hop from a transmitter to a receiver. Another very interesting capability enabled by the time-frequency duality is the possibility of reducing power consumption to a level never seen before.

In a world where everything goes wireless and all devices are required to be remotely controlled, the importance of power consumption is growing significantly. In a simple sensor node composed of four parts (sensor, microcontroller, PMU and transceiver), the wireless transceiver is the main contributor to the total power consumption by a large margin. Indeed, the percentage of the power used for the wireless function can exceed 90% of total power consumption. Power consumption of wireless headsets, game controllers, and computer keyboards and mice is dominated by the wireless transceiver.

Power reduction has been driving the development of wireless chips over the last 15 years. After years of development, BLE was ratified in 2006 to address the power consumption of Bluetooth. More recently, Bluetooth 5.2 added features to reduce consumption for different applications, including audio. However, these modifications are mostly incremental. Fundamentally, the reduction in power consumption is physically limited by the architecture; a carrier-based transceiver will always require a significant amount of power to start, stabilize and maintain its RF oscillator. After two decades of optimization, Bluetooth has reached its point of diminishing returns. This is true for all narrowband technologies: gaining an order of magnitude requires a new paradigm in wireless transmission. Here’s why:

The Narrowband Penalty
In the chart above, you can see the two significant power penalties inherent in all narrowband radio architectures like Bluetooth:

  • Crystal oscillator overhead (lower left) cripples low data rate performance: Bluetooth uses a ~20 MHz crystal oscillator, which requires a few milliwatts to power up and stabilize. UWB radios, like the one developed by SPARK Microsystems, can operate using impulses that don’t require a high frequency crystal oscillator and can be designed to operate with a low timing power consumption overhead.
  • Carrier overhead (upper middle) penalizes high data rate performance: Transmitting a large amount of data over a narrow bandwidth channel such as that used in Bluetooth radios requires lots of time and power, as explained in part 4. Large amounts of data can be transmitted far more quickly when spread across a wide bandwidth, keeping the transmitter on for a much shorter duration and reducing power consumption significantly. This means for the same amount of consumed power, UWB can transmit much more data. (far upper right)
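The two penalties above can be captured in a back-of-envelope energy model. The numbers here are illustrative only, not measured SPARK or Bluetooth figures: energy per packet is the oscillator startup energy plus transmit power times on-air time, where on-air time is bits divided by data rate.

```python
# Illustrative energy-per-packet model (numbers made up, not measured
# SPARK or Bluetooth figures): energy = oscillator startup energy plus
# transmit power * on-air time, where on-air time = bits / data rate.

def packet_energy_uj(bits, rate_bps, tx_mw, startup_mw=0.0, startup_ms=0.0):
    on_air_ms = bits / rate_bps * 1e3
    return startup_mw * startup_ms + tx_mw * on_air_ms  # mW * ms = uJ

# Narrowband-style link: 1 Mb/s plus ~2 ms of crystal startup/settling
print(packet_energy_uj(1000, 1e6, tx_mw=10, startup_mw=5, startup_ms=2))

# Impulse-UWB-style burst: 10 Mb/s, negligible startup overhead
print(packet_energy_uj(1000, 10e6, tx_mw=10))
```

With these assumed numbers the wideband burst costs an order of magnitude less energy for the same payload, which is the intuition behind both bullets above.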

How UWB Avoids the Narrowband Penalties
If you start with a blank page to design a short range (50-100m) wireless protocol that minimizes power consumption and latency and maximizes data rate, you would probably go through this thought process:

  • First, minimize the time the transmitter and the receiver are powered on. To do that, each symbol should be as short as possible. From the time-frequency duality we know that a signal that is short in time has a wide bandwidth, so the solution will utilize wideband communications, hence the choice of the unlicensed UWB spectrum.
  • Second, ensure that the transmitter and receiver can be started and shut down as quickly as possible. This makes it difficult to use transceivers that rely on traditional high-accuracy RF oscillators. The optimal architecture to minimize power consumption is a UWB impulse radio that forgoes the need for an RF carrier per se.

As you can see from data on the previous graph, that approach delivers the lowest possible power profile for short range communications. This is the approach SPARK Microsystems has taken for its UWB transceivers.

UWB’s Advantages
Because UWB does not use a high-frequency carrier oscillator, UWB transceivers can be turned on very quickly and transmit a far higher data rate than a narrowband radio for a given power level. This, coupled with the low latency described in Part 4, makes UWB an ideal solution for the next generation of low-power wireless applications.

Why did Narrowband Prevail in the 1920’s?
Although ships were required to install spark gap radios after the Titanic disaster, as discussed in part 1, wideband technology of the time had two major drawbacks:

  • They were extremely noisy, with poor frequency control. Transmission had to stop to enable reception on nearby frequencies. Interference was thus a big problem.
  • They could not be easily modulated to handle voice or other higher data rate communications

By the 1920’s, vacuum tube technology and superheterodyne circuits enabled narrowband radios to take over rapidly escalating demand for voice and other communications.

In the final part of this series, we will summarize how military and commercial technology developments, along with worldwide spectrum allocations, have created a unique opportunity for UWB to dominate short range communications in the 2020’s and beyond.

About Frederic Nabki
Dr. Frederic Nabki is cofounder and CTO of SPARK Microsystems, a wireless start-up bringing a new ultra low-power and low-latency UWB wireless connectivity technology to the market. He directs the technological innovations that SPARK Microsystems is introducing to market. He has 17 years of experience in research and development of RFICs and MEMS. He obtained his Ph.D. in Electrical Engineering from McGill University in 2010. Dr. Nabki has contributed to setting the direction of the technological roadmap for start-up companies, coordinated the development of advanced technologies and participated in product development efforts. His technical expertise includes analog, RF, and mixed-signal integrated circuits and MEMS sensors and actuators. He is a professor of electrical engineering at the École de Technologie Supérieure in Montreal, Canada. He has published several scientific publications, and he holds multiple patents on novel devices and technologies touching on microsystems and integrated circuits.

About Dominic Deslandes
Dr. Dominic Deslandes is cofounder and CSO of SPARK Microsystems, a wireless start-up bringing a new ultra low-power and low-latency UWB wireless connectivity technology to the market. He leads SPARK Microsystems’s long-term technology vision. Dominic has 20 years of experience in the design of RF systems. In the course of his career, he managed several research and development projects in the field of antenna design, RF system integration and interconnections, sensor networks and UWB communication systems. He has collaborated with several companies to develop innovative solutions for microwave sub-systems. Dr. Deslandes holds a doctorate in electrical engineering and a Master of Science in electrical engineering from École Polytechnique of Montreal, where his research focused on high frequency system integration. He is a professor of electrical engineering at the École de Technologie Supérieure in Montreal, Canada.


Artificial Intelligence in Micro-Watts: How to Make TinyML a Reality

Artificial Intelligence in Micro-Watts: How to Make TinyML a Reality
by Mike Gianfagna on 04-15-2020 at 6:00 am

Eta Compute ECM3532

TinyML is kind of a whimsical term. It turns out to be a label for a very serious and large segment of AI and machine learning – the deployment of machine learning on actual end user devices (the extreme edge) at very low power. There’s even an industry group focused on the topic. I had the opportunity to preview a compelling webinar about TinyML. A lot of these topics were explained very clearly, with some significant breakthroughs detailed as well.

The webinar will be broadcast on April 21, 2020 at 10AM Pacific time. I strongly urge you to register for Artificial Intelligence in Micro-Watts: How to Make TinyML a Reality here.

The webinar is presented by Eta Compute. The company was founded in 2015 and focuses on ultra-low power microcontroller and SoC technology for IoT. The webinar presentation is given by Semir Haddad, senior director of product marketing at Eta Compute. Semir is a passionate and credible speaker on the topic of AI and machine learning, with 20 years of experience in the field of microprocessors and microcontrollers. Semir also holds four patents. In his own words, “all of my career I have been focused on bringing intelligence in embedded devices.”

The webinar focuses on the deployment of deep learning algorithms at the extreme edge of IoT and presents an innovative new chip from Eta Compute for this market, the ECM3532. Given the latency, power, privacy and cost issues of moving data to the cloud, there is strong momentum toward bringing deep learning closer to the end application. I’m sure you’ve seen many discussions about AI at the edge. This webinar takes it a step further, to the extreme edge. Think of deep learning in products such as thermostats, washing machines, health monitors, hearing aids, asset tracking technology and industrial networks to name a few. The figure below does a good job portraying the spectrum of power and performance for the various processing nodes of IoT.

A power budget of ~1mW is daunting, and this is where the innovation of Eta Compute and the ECM3532 shines. Semir does a great job explaining the challenges of ultra-low power and ultra-low cost deployment for deep learning. I encourage you to attend the webinar to get the full story. Here is a brief summary to whet your appetite.

Traditional MCUs and MPUs operate in a synchronous manner. Achieving timing closure on a design like this over process, voltage and temperature conditions is quite challenging. As power consumption is proportional to the square of the operating voltage, lowering the voltage can reduce power. But this approach also reduces the operating frequency needed to close timing, making it nearly impossible to reach both low power and high performance. Dynamic voltage and frequency scaling (DVFS) is one way to address this problem, but the chip-wide impact of such approaches still makes it difficult to achieve the optimal balance of power and performance in a synchronous design.
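The voltage-squared relationship is worth a quick sketch. This is a crude model with made-up numbers, not Eta Compute figures: dynamic power scales as C·V²·f, and lowering V forces f down as well to keep timing closed (modeled here as linear in V):

```python
# Crude sketch of the synchronous-design trade-off described above (made-up
# numbers, not Eta Compute figures): dynamic power scales as C * V^2 * f,
# and lowering V forces f down too (modeled here as linear in V).

def dynamic_power_mw(c_nf, v_volts, f_mhz):
    # P = C * V^2 * f; with C in nF and f in MHz this lands in mW
    return c_nf * v_volts**2 * f_mhz

nominal = dynamic_power_mw(1.0, 1.2, 100)   # full voltage, full speed
scaled = dynamic_power_mw(1.0, 0.6, 50)     # halve V, so halve f too
print(round(nominal, 1), round(scaled, 1))  # 144.0 18.0 -> 8x less power
```

Halving the voltage and frequency together cuts power by 8x in this toy model, but at a quarter of the throughput per unit time per volt, which is why DVFS alone cannot deliver both low power and high performance.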

Eta Compute approaches the problem in a different way with continuous voltage and frequency scaling (CVFS). They invented this technology and hold seven patents covering both hardware and software, with more in the pipeline. The key innovation here is a major re-design of the processor architecture to allow self-timed performance on a device-by-device basis. This allows easier timing closure and results in higher performance at the same voltage when compared to traditional approaches. Their approach also allows frequency and voltage to be controlled by software. For example, if the user sets the frequency for a particular workload, the voltage will adjust automatically.

The bottom line is a 10X improvement in energy efficiency, which is a game changer. Eta Compute also examined what was needed for TinyML from an architectural point of view. It turns out that DSPs are better at some parts of deep learning for IoT and CPUs are better at other parts. So, the ECM3532 supports a dual-core architecture, with both an Arm M3 and a dual-MAC DSP on board that can operate at independent frequencies. There is a lot more in-depth discussion of this and other topics during the webinar.

I will leave you with some information on availability. An ASIC version of the architecture, the ECM3531, and an evaluation board are available now. Samples of the full ECM3532 AI platform and evaluation board will be available in April 2020, with full production in May 2020. Eta Compute is also working on a software environment (called the TENSAI platform) to help move your deep learning application from the bench to the ECM3532 with full access to all the optimization technologies.

There is a lot more eye-popping power and performance information presented during the webinar. I highly recommend you register and catch this event here.