
CEO Interview: Mike Wishart of Efabless

by Daniel Nenni on 10-08-2021 at 6:00 am

Mike Wishart

Mike Wishart has a storied career in investment banking and technology. His resume includes stints at Smith Barney, Lehman Brothers and Goldman Sachs after his Stanford MBA. He has served as a director at Brooktree, Spansion, Cypress and now Knowles. Mike is also a venture partner at Tyche Partners, a venture capital firm focused on transformative “hard tech” companies.  In 2014, he co-founded Efabless with Mohamed Kassem, formerly of Texas Instruments. Mike saw something that brought him out of semi-retirement.  I wanted to find out what that was, and what the future holds for Efabless.

Who is Efabless?

Professional, company, academic, maker or hobbyist –  expert or non-expert.  The Efabless creator platform is your fast, simple, inexpensive way to create your own chips for your own needs.

Efabless is a YouTube for chips.  YouTube, as you know, turns everyone with a phone into a creator of published video content.  Efabless turns tens of millions of software and hardware developers into creators of manufactured chips.  The result is 1000x more designers, 1000x more designs and ultimately 1000x more successes. Prototypes can be done for crazy low cost and custom silicon is easy.  Machine learning and billions of connected devices become much better when software and hardware designers can optimize their entire solution from the chip up and do it themselves.

How do we do it? We took pages from the software book and created a full end-to-end, design-to-fabrication-to-packaged parts solution that applies open-source EDA, open-source reference designs and community engagement.  Like software, we make development super-fast, super-inexpensive and most importantly super-simple.  Design / fail / redesign becomes the norm.  Sharing of designs is also made easy.  As our friend Tim Ansell of Google said in his recent talk at an IEEE Solid State Circuits Direction Workshop, “The reason open source has succeeded in software so much … is because it makes it easy for people to collaborate”. 

Absolutely critical to applying the creator model to a physical chip, we address the “last mile” by providing the path to manufacturing in a way that is invisible to the designer, invisible to the manufacturer and completely automated by Efabless.  In that regard, we offer free idea-to-prototype offerings for open-source designs (partnering with Tim Ansell and Google) called Open MPW and an extreme-low-cost offering for proprietary designs called chipIgnite.  Over the past nine months, our community, including Global 100 companies like IBM, professionals, academics and even a college freshman, have used Efabless to design over 160 ICs and IP blocks, with 100 designs going to tapeout.

At the end of the day, we enable lots of designs, the industry grows dramatically, and everyone wins.

So why did you do this?

It’s about finally fulfilling a two-decades-old dream.  My years in technology investment banking convinced me that innovation was actually broad and geographically dispersed, but the ability to create a sustainable company from that innovation was hard and required luck and access. Certainly, there was a great untapped opportunity to enable innovators to receive value for their ideas without having to create a company and even more value from harnessing the collective creative power of a global community of such people. 

There were no “crowd sourcing” or “creator economy” concepts back then, but that is what it was – sort of a Metcalfe’s law for networking applied to people.  After I retired from Goldman, Jack Hughes, founder of TopCoder and crowdsourcing, and Lucio Lanza, one of the visionary investors in EDA, knew of my idea and connected me with Mohamed Kassem, a brilliant person from TI who had very much the same idea and had already done very good work on it.  So, we came together and off we went.

Since then, we built a team of incredible people with chips, systems and cloud backgrounds sharing a common purpose.  It has been a once-in-a-lifetime experience.  I am more excited and energized than I can recall in my career. It’s been a great ride so far, and there’s so much more to come.

Can you provide some examples of industries using open innovation?

 Open innovation engages and connects global communities from all disciplines to collaborate and deliver productive, creative, market-relevant, and extreme solutions. Software has long benefited from open community collaborative innovation in the form of open-source models.  Today it would be hard to find any major software company for which open source is not central to its strategy or business model. Online marketplaces fostered the proliferation of apps by opening the distribution of software.  Companies like Lego, Unilever and PepsiCo engage customer communities to discover new product ideas and refine their existing products.  The latest instantiation of open innovation is the content creator platform – think YouTube and Instagram – where tens of millions of users cost effectively create, share, and monetize content.

Why hasn’t this idea worked for chips before Efabless?

Lots of reasons, but three in particular come to mind. First, there was only limited demand by customers to design custom chips.  Most of the hardware was standard platforms – think phones, desktops and laptops. The differentiation was provided entirely through software, and this was more than sufficient. Second, there was broadly held skepticism about the community delivering the reliability and accountability across the many disciplines required to make a chip. Third, someone needed to solve the logistics and licensing requirements to provide a simpler path to getting potentially millions of designs, and even wacky high-risk unconventional designs, from concept all the way through to manufacturing.  Having the complete path is essential if innovators are to commence an iterative software-style process of building on one another’s ideas in a tangible way.  

So, what has changed? 

Machine learning and IoT brought forward an explosion in hardware platforms that needed to be optimized from the chip up to meet specific requirements for specific applications.  The traditional semiconductor ecosystem is not agile enough to be responsive to the demand for custom silicon.  The logical answer is to build internal capability, but few companies had the necessary capital or know-how. Something had to change and the concept of freely sharing technology and capability offered the promise to address the problem.

Efabless was founded to untie this knot and drive change: simplifying design, access to tools and affordable manufacturing. We reimagined the problem and developed a creator platform for chip design that knocks down the traditional business and technical obstacles to creating silicon.  The platform provides immediate access to everything needed to design a chip at no up-front cost. This includes EDA tools, foundry PDKs, IP libraries and reference designs.  While supporting experienced IC designers, the platform also enables the much larger community of designers with embedded hardware and software backgrounds to successfully create designs by streamlining the design process and reducing complexity.

And then, Google became involved and provided credibility for open source for chips and, hence, Efabless.  This included the ambitious Open MPW program where Google funded the manufacturing of open-source designs. Efabless designed and managed the program.  It is based on a standardized SoC (called Caravel) of our creation. Caravel is basically a harness into which a user can drop their design, making it possible for the user to implement their specific idea without building the entire SoC from scratch.  Efabless also created the OpenLane fully automated RTL-to-GDS flow, based on the DARPA-funded OpenROAD project. SkyWater was a big part of the success of the program by open sourcing their PDK.

Importantly, we solved the last mile problem by providing a streamlined, affordable, and scalable path to manufacturing.  This includes our cloud-based pre-checkers, which enable designers to check and correct their own designs before they are integrated on the shuttle. That’s a key ingredient to platform scalability. We aggregate the designs and even add an automated fill process.  Designers’ jobs are made easy, and foundries are freed from the support requirements of many designs.

The results were immediate and dramatic.  Forty-five designs were created in thirty days for MPW-ONE, the first of six shuttles.  Fifty-six designs were submitted for the second, MPW-TWO. We have then followed this with our chipIgnite program where users can go from idea to packaged parts and evaluation boards for under $10K, a dramatic saving in time, cost, and complexity.  

What is the Efabless strategy? 

 We are focused on maximizing design activity and community growth on chipIgnite and Open MPW.  The two key areas of focus are to (i) make hardware and software developers aware of the simplicity of creating prototypes and custom chips on Efabless and (ii) build ubiquitous adoption at universities for both research and course work. Let’s face it.  The value proposition is so dramatic that we need to make people aware of the opportunity.  Over time, we will progress to more advanced nodes and additional foundry support.  We will also provide additional layers of community enabled design, including chiplets, PCB and firmware.

What do people not know about Efabless? 

We will be integrating crowdfunding into our creator platform to match open innovation and application discovery with commercial demand.

We are orchestrating a revolution in chip design and chip sourcing. We’re just getting started. Your readers can find out more about what we’re up to at https://efabless.com. Check it out, join the revolution.

Also Read:

CEO Interview: Maxim Ershov of Diakopto

CEO Interview: Gireesh Rajendran CEO of Steradian Semiconductors

CEO Interview: Tim Ramsdale of Agile Analog


DARPA Toolbox Initiative Boosts Design Productivity

by Tom Simon on 10-07-2021 at 10:00 am


When you think of the Defense Advanced Research Projects Agency (DARPA), the first thing that comes to mind is the development of the internet. And indeed, if you look at their website’s historic timeline, the development of ARPANET, as it was known at the time, is shown prominently in 1969. Incidentally, I actually used one of the handful of computers on ARPANET at UCLA back then. However, in looking at the other important projects on their timeline, it becomes apparent that they have played a major role in projects that have advanced semiconductor and electronic design. The list of such projects includes phased array radar, the first weather satellite, the first nuclear explosion detector, miniaturized GPS receivers, MMIC, radar mapping and the Schottky IR imager. And these only cover up to 1996. In the decades that followed, many important electronics programs have been nurtured under their guidance.

DARPA Toolbox Initiative

Looking at the dozens of projects active today, it is easy to see that semiconductors play a significant role in many of them. DARPA realized that establishing a so-called Toolbox of selected vendors for software and IP used in electronic design would bring significant benefits to these projects. So, over the last year DARPA has been adding vendors to this toolbox. Recently Siemens EDA was invited to join this initiative. As the first EDA vendor to join, Siemens will be providing a wide range of tools and solutions.

Contractors and sub-contractors performing on DARPA contracts can access Siemens tools preconfigured and affordably priced in these packages:

  • Catapult High Level Synthesis and Power Estimation
  • Tessent Design for Test
  • Expedition Board and Package Design
  • Functional Design and Verification
  • Tessent Embedded Analytics
  • Mature Node Design and Verification
  • Advanced Node Design and Verification
  • Silicon Photonics
  • Analog Simulation and Custom Layout

Siemens has a long history as a provider of trusted products and services to US government agencies. This new partnership extends that legacy and highlights how Siemens has expanded its EDA product lineup over the years. It is expected that the dozens of innovative programs now active at DARPA will be able to bring in these Siemens solutions and rapidly ramp up their project activities without lengthy evaluations or contract and pricing negotiations. The proven and affordable flows can be adopted with minimal bureaucratic overhead.

Because of the breadth of the Siemens EDA offering, the tools in the combined flows will allow for complete silicon and system level design, including simulation, test and verification. The flows are suitable for mature nodes, and also notably for advanced nodes. There is even support for silicon photonics. There is special emphasis on analog and RF, which can support many of the communication and sensor related projects that DARPA is sponsoring.

Semiconductor and electronics innovation are vital to US national security. Having tier-one tools from Siemens EDA available to DARPA projects will improve the overall quality and lower the risk of these essential projects. Hopefully we can expect to see new milestones on the agency timeline that are attributable in part to the contributions that the Siemens tools can make. More information about the DARPA Toolbox initiative and Siemens’ participation can be found on the Siemens website.

Also Read:

Heterogeneous Package Design Challenges for ADAS

Electromigration and IR Drop Analysis has a New Entrant

Formal Methods for Aircraft Standards Compliance


Semiconductors – Limiting Factors; Supply Chain & Talent- Will Limit Stock Upside

by Robert Maire on 10-07-2021 at 6:00 am


– If chips are “as good as it gets” so are the stock prices
– Are we at a near term ceiling that stocks have bounced off of?
– If growth slows do valuations also slow?
– Are we in a holding pattern waiting for a down cycle?

Second order derivative investing

As we have said many times in the past, investors don’t just look at growth rates; they look at whether the rate of growth is increasing or decreasing (the second order derivative), especially in cyclical industries such as semiconductors, as a way of trying to gauge where we are in the cycle and whether or not it is beginning to turn.
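
The second order derivative idea is easy to see in numbers. Here is a minimal sketch with made-up quarterly revenue figures (illustrative only, not real data):

```python
# Hypothetical quarterly revenue, arbitrary units (illustrative only).
revenue = [100, 110, 123, 135, 145]

# First derivative: quarter-over-quarter growth rate.
growth = [(b - a) / a for a, b in zip(revenue, revenue[1:])]

# Second derivative: the change in the growth rate itself.
accel = [b - a for a, b in zip(growth, growth[1:])]

for g, a in zip(growth[1:], accel):
    trend = "accelerating" if a > 0 else "decelerating"
    print(f"growth {g:.1%}, change {a:+.3f} -> {trend}")
```

Revenue rises every quarter here, yet the growth rate peaks and then declines – exactly the deceleration signal that cyclicals investors watch for.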

While it’s hard to imagine a down cycle when things are so amazingly good, we have to keep reminding ourselves that all good things (and cycles) come to an end. The semiconductor road is littered with the bodies of management and analysts who at one time said “the industry is no longer cyclical” or “it’s different this time”.

While we are not saying the cycle is over, we are saying that the second order derivative, or rate of increasing growth, may be slowing due to a number of limiting factors.

ASML analyst day – Maybe perfect isn’t good enough

ASML is in the best position of almost anyone in the industry: a virtual monopoly with huge demand and all cylinders firing. Then why did the stock falter? It’s clear that investors were looking for ever higher guidance and ever faster growth, and there was no blood left in the stone to give.

There is clearly a finite limit on how fast the company can grow given its hugely complex supply chain and need for more incredible talent to make it happen.

The reaction of the stock should be a bit of a warning to other companies in the industry as to investors’ reaction to slowing growth, or at the very least a lack of accelerating growth.

Semiconductor supply chain is the most complex in the world

As compared to any other industry on the planet, the manufacture of semiconductors is by far the most complex, interconnected and widespread. We have also found out more recently exactly how delicate the supply chain can actually be.

A $10B fab has tools, parts, materials and workers from every corner of the globe. No country can build semiconductors without the help and cooperation of many other countries’ suppliers. The auto industry is very, very simple by comparison.

That supply chain has depended upon an unimpeded flow of goods and services to keep the industry running… that is, until COVID hit. Surprisingly, semiconductor production hasn’t seen the hit that auto production has seen, and in fact has grown versus last year (an easy comparison, granted, but the growth stands even without it).

JIT -“just in time” and Kanban…all but dead

The concept of having components arrive just in time as needed by production was started by Toyota in the 1940s and hit its peak a while back in the electronics industry. It works when the supply chain is very reliable and flexible. It reduces waste (inventory) in the supply chain and improves overall efficiency. The auto industry did a great job with this method.

We have now heard that, aside from double and triple ordering of parts, manufacturers are stocking in some cases up to a year’s supply of components so they don’t get bitten again.

This stocking up is just another form of over ordering that hurts the overall system and will cause a huge letdown when it ends.

This also hurts the overall supply chain as it has to accommodate this new cyclicality and uncertainty and there’s a price associated with it.

Fighting over limited supplies

As we have previously pointed out, the auto industry shouldn’t be too shocked when it is a low priority for chip makers. Making low-margin, 50-cent chips is neither a turn-on nor a priority for chip executives.

In the near term, prices will go up as manufacturers take advantage of the shortages… but not forever.

The semiconductor value in an auto’s bill of materials exceeded the value of its steel years ago, and EVs accelerate that trend.

Tesla appears to have a better handle on chip supply, perhaps because it’s headquartered in Silicon Valley and understands the technology supply chain better than people in Detroit, who are closer to the Midwestern steel mills.

Limited supply limits company successes

We had pointed out in a recent note that other types of supply chain constraints may limit the future of some companies.

Most notably, Intel and the EUV tools supplied by ASML. If Intel wants to retake the Moore’s Law lead and be a real foundry player, it needs those EUV tools, which are in short supply.

Right now we would guess that Intel has about six (or so) viable EUV litho tools while TSMC has more like 60 in its fleet, or roughly ten times Intel’s capacity.

ASML will likely ship about 50 EUV tools in 2021. This means that Intel would have to buy up an entire year’s production from ASML, with others such as TSMC, Samsung and the memory makers buying zero, just to catch up to where TSMC is today.

The math simply doesn’t add up for Intel. Intel can’t catch TSMC’s capability unless it can catch up in EUV tools. With plans for $100B in spend, TSMC isn’t going away any time soon.
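
The catch-up arithmetic can be written down directly. The tool counts below are the rough estimates quoted in this note, not confirmed figures:

```python
# Rough estimates from the discussion above, not confirmed figures.
intel_tools = 6           # Intel's viable EUV litho tools (a guess)
tsmc_tools = 60           # TSMC's installed EUV fleet (a guess)
asml_annual_output = 50   # approximate EUV systems ASML ships per year

gap = tsmc_tools - intel_tools
years_to_close = gap / asml_annual_output

# Even if Intel bought ASML's entire annual output, with TSMC, Samsung and
# the memory makers buying zero, closing today's gap takes over a year.
print(f"gap: {gap} tools, at least {years_to_close:.1f} years at full output")
```

And in reality TSMC keeps buying tools too, so the gap widens rather than closes.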

While Intel could catch TSMC in Moore’s Law technology, it can’t really catch TSMC in leading-edge capacity due to the disadvantage in EUV tool count… and that’s what’s needed for foundry dominance.

We also remain concerned about Intel’s current pace in trying to catch TSMC on the technology side as we think that Intel’s yields in its 1278 process (whatever that has been renamed to) are in the single digits. Gelsinger is on the right track but needs to step even harder on the accelerator as the honeymoon period will be over soon.

Supply chain woes are clear but talent woes are not yet recognized

The press, analysts and the market in general have gotten the message about the supply chain problems. Those problems had been there for a very long time, but no one paid any attention until the COVID crisis, the catalyst that lit the fuse, brought them to the surface.

A larger and more difficult-to-solve problem in the semiconductor industry, particularly in the United States, is the lack of the talent needed to push Moore’s Law and the industry to its limits. Supply chain issues can eventually get fixed, but talent issues are much harder and longer term in nature.

Back many years ago, when I was graduating as a newly minted electrical engineer, a job offer from Intel was a dream come true and the pinnacle of the profession, coveted by all. Today, not so much.

If someone graduating with a technical degree today had a choice between working at a hot quantum computing startup in Palo Alto that could change the course of computing, or working inside a fab in Portland babysitting an expensive, temperamental litho tool, the choice is pretty clear.

The average tenure of a fab worker in Asia is several times that of one in the US.

Many years ago, when working on the IPO of SMIC in Shanghai, I was at the fab on a “hiring day” when a huge line of people (mainly female) stretched down a hallway. Everyone was clamoring for a job and the opportunity to live in a dormitory next to the fab so they could roll out of bed to their jobs. It was heaven compared to the more rural areas of China that most had made the trek from. SMIC had its pick of the best and brightest.

One of the key problems that contributed to GlobalFoundries’ failure was attracting semiconductor talent to the rust-belt area of upstate New York. New York did a great job building CNSE around SUNY Albany and trying to build a critical mass of technology know-how. But GlobalFoundries essentially killed off most of that when it gave up on advancing Moore’s Law, fired its advanced R&D people and sold off its EUV tools, shutting down the future. It would be difficult if not impossible for GloFo to get back in the race, as the most talented in the industry would much rather go to Arizona or Texas, to companies already in the race.

The fight for talent in Arizona is going to be very, very tough as applicants will have two fiercely competing juggernauts, Intel & TSMC to woo them.

Obtaining great semiconductor talent will be harder than the fight for EUV tools.

Morris Chang, the founder of TSMC, was recently in the news for making negative comments about US talent: “The United States stood out for cheap land and electricity when TSMC looked for an overseas site but we had to try hard to scout out competent technicians and workers in Arizona because manufacturing jobs have not been popular among American people for decades.”

While many in the US were offended by those comments, he is not entirely wrong… we just haven’t realized it yet.

Even if we could get our hands on the EUV tools needed, we may not be able to get the right people to operate them.

The Stocks

The reaction of the market to ASML’s analyst day underscores our concern that the semiconductor sector is past its peak.

This is not even taking into account the current “tech wreck” of the overall tech stock sector.

Although there’s still a chip shortage and lots of chips and chip tools to be sold, investors may have already tired of the group as a whole or at least tired of paying high growth multiples for a group whose growth is as good as it gets and isn’t getting any better from here.

I see no near-term or even medium-term reason to own Intel: we don’t think there is any positive news versus where we are now, and the downside beta is higher than the upside beta, as we could start to hear news of the renewed effort running into problems.

ASML still has European investors willing to support the stock, but the upside may be lacking as investors clearly voted with their feet on the less-than-stellar guidance from the analyst day. We don’t expect a more positive tone coming out of the September quarter report, so again likely more downside beta.

We think that equipment companies such as AMAT, LRCX and KLAC are going to be hard put to come up with a positive enough outlook or guidance to continue to support their higher valuations.

Chip shortages will continue to stay in the news flow, but chip and equipment companies will not see their valuations driven up further as the beneficiaries of those shortages; the market is over that story.

Over the longer run, investors need to be careful that chip and chip equipment companies may finally be caught up in the supply chain (and talent) shortages that thus far have impacted only their customers.

Also Read:

ASML is the key to Intel’s Resurrection Just like ASML helped TSMC beat Intel

KLA – Chip process control outgrowing fabrication tools as capacity needs grow

LAM – Surfing the Spending Tsunami in Semiconductors – Trailing Edge Terrific


Podcast EP41: A First Look at DAC 2021

by Daniel Nenni on 10-06-2021 at 10:00 am

Dan and Mike are joined by Harry Foster, Chief Scientist Verification at Siemens Digital Industries Software, Co-Founder and Executive Editor of the Verification Academy, and 2021 Design Automation Conference General Chair. In this first in a series of DAC podcasts, we explore several dimensions of DAC 2021 with Harry, including the overall live/virtual format, submission topics and trends, and ways to access DAC.

https://www.dac.com/

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


IBM and HPE Keynotes at Synopsys Verification Day

by Bernard Murphy on 10-06-2021 at 6:00 am


I have attended several past Synopsys verification events, which I remember as engineering conference-room, all-engineer pitches and debates. Effective, but aiming for content rather than polish. This year’s event was different. First, it was virtual, like most events these days, which certainly made the whole event feel more prime-time ready. Also, each day of the two-day event started with a keynote, further underlining the big-conference feel. Finally, many of the pitches, mostly from big-name customers, looked very topical – verification in AI, continuous integration, cloud architectures. Exciting stuff!

IBM on AI hardware, implications for verification

Kailash Gopalakrishnan spoke on this topic. Kailash is an IBM Fellow and Sr. Manager in the accelerator architectures and ML group at the T.J. Watson Research Center. He started by noting the rapid growth in AI model size over the past 3-5 years, more than 4 orders of magnitude. One direction he is pursuing to attack this is approximate computing: reducing word sizes for both integer and floating point, in training and in inference. This helps with both performance and power, critical for embedded applications such as in-line fraud prevention.

For such AI accelerators a wider range of word sizes increases complexity for formal methods. He sees rising formal complexity also in use of software managed scratchpads with complex arbiters. Advanced accelerators have more high bandwidth asynchronous interfaces, driving yet more increases in verification runtimes and coverage complexity. Such designs commonly build on rapidly evolving deep learning primitives and use-cases. Many more moving parts than we might normally expect when building on more stable IP and workloads for regular SoCs.

Big AI designs for datacenters are following similar paths to servers: massively arrayed cores on a chip, with PCIe support for DMA and coherent die-to-die interfaces ganging together many die (or chips) for training. These giants must support virtualization, potentially running multiple training tasks in a single socket. All of this needs verification: complex software stacks (TensorFlow, PyTorch, on down to the hardware) running on a virtual platform together with emulation or FPGA prototyping for the hardware.

In next generation chips, modeling and verification will need to encompass AI explainability and reasoning, also secure execution. Analog AI will become more common. Unlike mixed signal verification today (e.g. around IO cores), this analog will be sprinkled throughout the accelerator. Which may raise expectations for AMS verification fidelity and performance. Finally, also for performance and power, 3D stacking will likely drive the need for more help in partitioning between stacked die. Not a new need, but likely to become even more important.

HPE on growing design team methodologies

David Lacey is Chief Verification Technologist in HP Enterprise Labs and was making a plea for more focus on methodologies. In part referring to opportunities for EDA vendors to provide more support, but much more for verification teams to graduate up the verification maturity curve. Here I imagine vigorous pushback – “our process is very mature, it’s the EDA vendors that need to improve!” David isn’t an EDA vendor so his position should carry some weight. I’m guessing he sees a broad cross section, from very sophisticated methodologies to quite a few that are less so. Especially I would think in FPGA design, even in ASIC in teams with backgrounds in small devices.

David walked through 5 levels of maturity, starting from directed testing only. I won’t detail these here, but I will call out a few points I thought were interesting. At level 3 where you’re doing constrained random testing, he mentioned really ramping up metrics. Coverage certainly but also compute farm metrics to find who may be hogging an unusual level of resource. Performance metrics especially in regressions. Generally looking for big picture problems across the project as well as trends by block (coverage not converging for example).

He stresses automation, taking more advantage of tool features, adding in-house scripting to aggregate data after nightly runs so you can quickly see what needs extra attention. Eventually moving to continuous integration methodologies, using Jenkins or similar tools. Mature teams no longer practice “Everybody stop, now we’re going to integrate all checkins and regress”. He also stressed working with your EDA vendor to implement upgrades to simplify these and other tasks.

Finally, the ultimate stage in maturity. Using emulation to shift left to enable SW/HW co-development for system design. Taking advantage of the ML options we now see in some verification flows. These don’t require much ML understanding on your part but can offer big advances in getting to higher coverage quicker, simplifying static testing, accelerating root cause analysis on bugs and reducing regression run-times. Consider also the ROI of working with your current compute farm versus upgrading servers, exploiting the cloud or a hybrid approach. From one generation to the next, server performance advances by 50%. Per unit of throughput, a server upgrade is much cheaper than adding licenses. Moving to the cloud has flexibility advantages but you need to actively manage cost. And EDA vendors should add as-a-service licensing models to make EDA in the cloud a more attractive option.
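
That upgrade-versus-licenses comparison can be sketched with placeholder numbers. Everything below is a hypothetical except the roughly 50% per-generation server speedup cited above:

```python
# All costs here are hypothetical placeholders; only the ~50% per-generation
# server speedup comes from the talk summarized above.
baseline_jobs = 100            # regression jobs per night on the current farm
server_speedup = 1.5           # ~50% faster per server generation
server_upgrade_cost = 50_000   # hypothetical farm refresh cost
extra_license_cost = 150_000   # hypothetical cost of 50% more simulator seats

added_jobs = baseline_jobs * (server_speedup - 1)  # extra jobs per night
cost_per_job_upgrade = server_upgrade_cost / added_jobs
cost_per_job_license = extra_license_cost / added_jobs
print(f"upgrade: ${cost_per_job_upgrade:,.0f}/job, "
      f"licenses: ${cost_per_job_license:,.0f}/job")
```

The specific numbers don’t matter; the point is to compare dollars per unit of added nightly regression capacity before choosing either path.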

Lots of good material. The whole session was recorded, I believe you can watch any of the talks through the end of the year. I’ll be posting more blogs over the next 3 months on other sessions in this valuable and virtual conference.

Also Read:

Reliability Analysis for Mission-Critical IC design

Why Optimizing 3DIC Designs Calls for a New Approach

Using Machine Learning to Improve EDA Tool Flow Results


Blur, not Wavelength, Determines Resolution at Advanced Nodes

Blur, not Wavelength, Determines Resolution at Advanced Nodes
by Fred Chen on 10-05-2021 at 10:00 am

Blur not Wavelength Determines Resolution at Advanced Nodes

Lithography has been the driving force for shrinking feature sizes for decades, and the most easily identified factor behind this trend is the reduction of wavelength. G-line (436 nm wavelength) was used for 0.5 um in the late 1980s [1], and I-line (365 nm wavelength) was used down to 0.3 um in the 1990s [2]. Then began the era of deep-ultraviolet (DUV), during which two wavelengths, KrF (248 nm) and ArF (193 nm), dominate even today. Subsequent wavelengths proved practically impossible to deploy. Both F2 (157 nm) and EUV (13.2-13.8 nm) suffer from strong absorption in air: 157 nm required a dry nitrogen ambient [3], whereas EUV cannot be used with refractive projection lenses at all. 157 nm was eventually dropped, supplanted by the extension of 193 nm using water as an immersion fluid. Today, EUV requires all-reflective optics in a vacuum with a background hydrogen ambient [4].

Resolution in optical projection systems can be pushed down to roughly 0.3-0.4 x wavelength/(numerical aperture). Numerical aperture (NA) cannot be increased beyond the refractive index of the imaging medium (1.44 for water, 1 for air or vacuum), so wavelength reduction is the expected path forward. As the EUV wavelength is so much smaller than DUV, it has been expected to support many generations of process nodes. However, for advanced processes, wavelength has lost its influence over determining resolution.
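The scaling law above can be sketched numerically. The k1 factor of 0.3-0.4 comes from the text; the NA values below (1.35 for ArF immersion, 0.33 for EUV) are typical tool figures assumed here for illustration only.

```python
def resolution_nm(k1, wavelength_nm, na):
    """Minimum resolvable half-pitch estimate: k1 * wavelength / NA."""
    return k1 * wavelength_nm / na

# ArF immersion: NA limited by water's index (1.44); 1.35 is a typical tool NA
arf_limit = resolution_nm(0.3, 193.0, 1.35)   # ~43 nm
# EUV with a 0.33 NA all-reflective system (typical of current tools)
euv_limit = resolution_nm(0.3, 13.5, 0.33)    # ~12 nm
```

On optics alone, EUV thus appears to offer several nodes of headroom, which is exactly the expectation the article goes on to question.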

The reason is that feature size has started approaching scales where blur becomes important. Blur here refers to the smoothing of chemical distributions within the resist after it has been exposed to light in the lithography tool. Smoothing makes edge definition, and hence feature size, more difficult to control by dose. The largest source of blur has been acid diffusion in chemically amplified DUV resists [5]. However, in the case of EUV, another blur mechanism arises, namely secondary electrons [6]. Blur in EUV resists has been measured at over 5 nm [7,8], but more recently values as low as 2 nm have been considered [9].

To quantitatively assess the impact of blur, it is most convenient to first fit images of interest with Gaussian curves. In Figure 1, dense line pitches of 20 nm, 30 nm, and 40 nm were fit with Gaussian curves with sigma = 4.3 nm, 6.3 nm, and 8.4 nm, respectively. The fitted sigma turns out to be an excellent linear function of pitch.

Figure 1. Gaussian fits to dense line images from 20 nm to 40 nm pitch.
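The linear relationship can be checked directly from the three (pitch, sigma) pairs quoted above with an ordinary least-squares fit. This is a quick illustrative sketch, not the author's fitting procedure.

```python
def linear_fit(xs, ys):
    """Ordinary least-squares straight-line fit; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# (pitch, sigma) pairs from the Gaussian fits of Figure 1, all in nm
slope, intercept = linear_fit([20.0, 30.0, 40.0], [4.3, 6.3, 8.4])
# slope ~ 0.205 nm of sigma per nm of pitch, intercept ~ 0.18 nm
```

The near-zero intercept means the fitted sigma is essentially proportional to pitch, which is what makes the later blur-to-pitch ratio argument work.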

The blur is itself modeled as a Gaussian function. The blur function is convolved with the original image function to produce a blurred image curve. The higher the blur sigma, the more the image is changed. The degradation of image quality is quantified by the NILS (Normalized Image Log-Slope) metric: d[log(dose)]/d[x/feature width], evaluated at the full-width half-maximum (FWHM). A NILS value of 2 has a special meaning: a 10% change in dose results in a 10% change in feature width. This can be considered a dividing line between “bad” and “good” images. Figure 2 shows a reference Gaussian and the effect of blurring with another Gaussian whose sigma is 0.66x the reference sigma (taken to be 1).

Figure 2. The image is degraded to borderline case (NILS=2) by a blur value of 0.66 times the original image sigma.
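Because convolving two Gaussians is analytic (their variances add), the NILS degradation can be sketched without numerical convolution. This pure-Gaussian approximation is my own simplification, not the article's calculation; it gives a borderline blur ratio of about 0.62, close to the 0.66 the article obtains from actual dense-line images.

```python
import math

def nils_after_blur(image_sigma, blur_sigma):
    """NILS of a Gaussian image after Gaussian blur, at the target FWHM edge.

    Variances add under convolution, so the blurred image is a Gaussian
    with variance image_sigma^2 + blur_sigma^2.  NILS is the log-slope of
    the blurred image at the original feature's FWHM half-width, times
    the feature width (the FWHM itself).
    """
    fwhm = 2 * math.sqrt(2 * math.log(2)) * image_sigma  # feature width
    x_edge = fwhm / 2                                    # FWHM half-width
    blurred_var = image_sigma**2 + blur_sigma**2
    ils = x_edge / blurred_var     # |d ln I / dx| of a Gaussian at x_edge
    return ils * fwhm

# Unblurred: NILS = 4 ln 2 ~ 2.77 ("good").  A blur of ~0.62x the image
# sigma drops it to 2.0 in this sketch, near the article's 0.66x figure.
```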

Combining the results of Figures 1 and 2, it is found that for a given pitch, the maximum allowed blur sigma is 0.14x the pitch in the 20-40 nm pitch range. Conversely, the minimum pitch for a given resist would be given by blur/0.14. For example, a 5 nm blur sigma would limit resolution to a 36 nm pitch.

Figure 3. Minimum dense line pitch (at which NILS=2) as a function of blur sigma.
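The blur/0.14 rule of thumb is a one-liner. Note that the 0.14 ratio was derived in the 20-40 nm pitch range, so applying it outside that range is an extrapolation.

```python
def min_pitch_nm(blur_sigma_nm, max_blur_ratio=0.14):
    """Minimum resolvable dense-line pitch (NILS >= 2) for a given blur sigma.

    Inverts the article's finding that the maximum tolerable blur sigma
    is ~0.14x the pitch, fitted over the 20-40 nm pitch range.
    """
    return blur_sigma_nm / max_blur_ratio

# A 5 nm blur sigma limits resolution to ~36 nm pitch; a 2 nm blur
# would allow ~14 nm pitch (extrapolating below the fitted range).
```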

The most important point to take away from this is that resist blur, which is not directly related to wavelength, is becoming the key factor in determining resolution. The electron blur is itself not easily pinned down, and contributes to the stochastic nature of EUV lithography.

References

[1] K. Eguchi et al., “0.5µm Lithography Using A G-Line Stepper With A 0.6 Numerical Aperture Lens,” Proc. SPIE 0922 (1988).

[2] K-Y. Kim et al., “Implementation of I-line lithography to 0.30 µm design rules,” Proc. SPIE 2440, 76 (1995).

[3] https://www.laserfocusworld.com/optics/article/16556523/challenges-remain-for-157nm-lithography

[4] https://www.spiedigitallibrary.org/journals/journal-of-micro-nanopatterning-materials-and-metrology/volume-20/issue-03/033801/EUV-induced-hydrogen-plasma–pulsed-mode-operation-and-confinement/10.1117/1.JMM.20.3.033801.full

[5] G. M. Gallatin, “Resist Blur and Line Edge Roughness,” Proc. SPIE 5754 (2005).

[6] T. Kozawa and S. Tagawa, “Normalized Image Log Slope with Secondary Electron Migration Effect in Chemically Amplified Extreme Ultraviolet Resists,” Appl. Phys. Exp. 2, 095004 (2009).

[7] R. Gronheid et al., “EUV Secondary Electron Blur at the 22 nm Half Pitch Node,” Proc. SPIE 7969, 796904 (2011).

[8] A. Chunder et al., “Separating the optical contribution to line edge roughness of EUV lithography using stochastic simulations,” Proc. SPIE 10146, 101460B (2017).

[9] Z. Belete et al., “Stochastic simulation and calibration of organometallic photoresists for extreme ultraviolet lithography,” J. Micro/Nanopattern. Mater. Metrol. 20, 014801 (2021).

Related Lithography Posts


On-Device Tensilica AI Platform For AI SoCs

On-Device Tensilica AI Platform For AI SoCs
by Kalar Rajendiran on 10-05-2021 at 6:00 am

Varying On Device AI Requirements 1

During his keynote address at the CadenceLIVE 2021 conference, CEO Lip-Bu Tan made some market trend comments. He observed that most of the data nowadays is generated at the edge but only 20% is processed there. He predicted that by 2030, 80% of data is expected to be processed at the edge. And most of this 80% will be processed on edge devices as AI workloads. This migration is already happening rapidly and is calling for different types of on-device AI SoCs.

During the same conference, president Anirudh Devgan presented Cadence’s strategy guiding their next wave of innovations and technology offerings. One of the three prongs of the strategy is enabling design excellence for its customers. Cadence delivers it through its EDA tools, emulation platforms, semiconductor IP and productivity software tools.

Cadence has been executing its strategy and announcing new capabilities at a nice pace. In July, it announced Cerebrus, the Intelligent Chip Explorer. Cerebrus falls under the EDA tool category of its design excellence drive. This month, Cadence announced its Tensilica AI Platform for accelerating AI SoC development.  The related press release mentioned three product families optimized for varying on-device AI requirements. The products fall under the category of semiconductor IP solutions for design excellence. This is the context for this blog.

I had an opportunity to discuss this product announcement with Pulin Desai. Pulin is the group director of product marketing and management for Tensilica Vision & AI at Cadence. The following is a synthesis of what I gathered from our conversation.

Pulin stated that Cadence is focused on bringing the most energy-efficient on-device IP solutions for AI SoCs. And it wants to do this for all types of workloads over a broad range of market segments. This in turn requires the product to be configurable, extensible and scalable across performance and energy parameters. Cadence took a platform approach that can deliver solutions to address these varying requirements.

Market Requirements

The supported market segments range from simple intelligent sensors to IoT audio/vision, mobile and automotive/ADAS. These market segments are looking for a solution that enables low cost, rapid development, longer battery life and quick product differentiation. Customers also want a configurable and extensible software-hardware platform that makes it easy for them to scale their products to meet different segments of their markets.

Product Requirements

The above identified market requirements translate into the following product-level requirements. It is no longer just the Tera Operations Per Second (TOPS) rating that matters. It is optimum TOPS per watt and TOPS per sq. mm of silicon area. It is performance/speed at the lowest latency. It is the ability to interface with varying types of workloads that operate on fixed- or floating-point data. It is the capacity to process data from single- or multi-mode sensors and execute concurrently. Refer to the figure below to see the very wide range of performance, power and workload interfaces that need to be addressed.

 

Tensilica AI DSPs

The AI SoCs that implement the above on-device AI product requirements need AI engines as well as DSP capabilities. The DSP functions are needed to process the inputs received from multiple sensors. Cadence already has a long track record of successful Tensilica DSPs with AI ISA extensions based on the time-tested Xtensa® configurable and extensible processor platform. Tensilica DSPs are shipping in volume production in numerous SoCs and end products. It made strategic sense to expand these AI solutions further on this strong DSP foundation to offer a wider range of AI-enabled products. Refer to the figure below.

 

Tensilica AI Platform

The Tensilica AI Platform supports three AI product families to satisfy a broad range of market requirements: AI Base, AI Boost and AI Max. The comprehensive common software platform enables ease of scalability across these product families. The configurability and extensibility features allow some markets/applications to be addressed by multiple Tensilica AI solutions. The platform includes a Neural Network Compiler which supports industry-standard frameworks such as: TensorFlow, ONNX, PyTorch, Caffe2, TensorFlowLite and MXNet for automated end-to-end code generation; Android Neural Network Compiler; TFLite Delegates for real-time execution; and TensorFlow Lite Micro for microcontroller-class devices.

AI Base: The low-end product, comprising Tensilica DSPs with AI ISA extensions, targets voice/audio/vision/radar/LiDAR applications. It can deliver up to 30x better performance and 5x-10x better energy efficiency compared to a regular CPU-based solution. Refer to the figure below for some benchmark results.

 

AI Boost: The mid-level product can be used with any of the AI Base applications when performance and power need to be optimized. It integrates the AI Base technology with a differentiated sparse compute Neural Network Engine (NNE). The initial version NNE 110 can scale from 64 to 256 GOPS and provides concurrent signal processing and efficient inferencing. It consumes 80% less energy per inference and delivers more than 4X TOPS per Watt compared to industry-leading standalone Tensilica DSPs. Refer to figure below for some benchmark results.

 

AI Max: The high-end product integrates the AI Base and AI Boost technology with a Neural Network Accelerator (NNA) family. The NNA family currently includes single-core (NNA 110), 2-core (NNA 120), 4-core (NNA 140) and 8-core (NNA 180) options. The multi-core NNA accelerators can scale up to 32 TOPS, while future NNA products are targeted to scale to 100s of TOPS. Refer to the figure below for some benchmark results.

 

Summary

The Cadence Tensilica AI Platform enables industry-leading performance and energy efficiency for on-device AI applications. It is built upon the mature, volume-production proven Tensilica Xtensa architecture. The low-end, mid-level and high-end product families cover the full spectrum of PPA and cost points for various market segments. The solution currently scales from 8 GOPS to 32 TOPS, with additional products expected to deliver 100s of TOPS to meet future requirements.

For more information on the Tensilica AI Platform and new AI IP solutions, visit their product page by clicking here. To read the full press release, click here.

Also Read:

Cadence Tensilica FloatingPoint DSPs

Features of Short-Reach Interface IP Design

112G/56G SerDes – Select the Right PAM4 SerDes for Your Application


Heterogeneous Package Design Challenges for ADAS

Heterogeneous Package Design Challenges for ADAS
by Tom Simon on 10-04-2021 at 10:00 am

Heterogeneous Package Design

Increasingly complex heterogeneous packaging solutions have proved essential to meeting the rapidly scaling requirements for automotive electronics. Perhaps there is no better example of this than the advanced driver-assistance systems (ADAS) found in most new cars. In a recent paper, Siemens EDA details the current technology trends that are creating design challenges. The paper, titled “Methodology and Process for Heterogeneous Automotive Package Design,” mentions shrinking bump pitch, increasing bump density, decreasing ball pitch, high current consumption and high-frequency issues as factors that must be addressed to meet system-level requirements in this market.

The Siemens paper focuses on the adoption of Xpedition Substrate Integrator (xSI) from Siemens by the Back-End Manufacturing Technology Team at STMicroelectronics in Agrate, Italy. The design of their current and future ADAS products requires a tool that can handle increasing package connectivity density and deal with design data from chip and PCB tools as well as the package design itself. The paper outlines a well-structured three-phase flow for package design.

In the early stage, the focus is on design-rule identification, implementation technology, cost optimization, and design-strategy verification. All aspects of the design are explored to come up with a preliminary implementation. The areas considered are package dimensions, substrate stack-up, ball-out definitions, break-out strategies and preliminary connectivity assignment. With this information, cost and performance metrics can be estimated.

This is followed by the intermediate stage, where attention is paid to physical design, debugging and optimization. A lot happens here, starting with finalization and optimization of the substrate routing. Key aspects such as manufacturability, interfaces, power requirements and more are assessed. The final stage involves verification and sign off of every aspect of the finished design. This includes thermal, signal integrity, power integrity and manufacturing verification leading to the tape-out handoff to manufacturing.

The Siemens paper points out that each of these design stages relies heavily on co-design, co-simulation and co-optimization. Without these, a siloed, non-convergent design process would result. The package design flow must take a system-level approach; this is the only way that system connectivity can be optimized to meet all requirements. Data must flow from IC and PCB tools. xSI helps eliminate the use of spreadsheets for passing preliminary data by providing a feature known as connectivity tables to capture the system definition during development. This has the added benefit of permitting easy what-if analysis.

Heterogeneous Package Design

Siemens xSI was attractive to ST because of its extensive support for early prototyping during the package design process. Missing devices can be created from scratch, and complex bump and ball pitch and positions can be defined for sophisticated design exploration. Because the design is not static, xSI supports fast connectivity updates when design elements are changed via ECO. The article describes several other innovative capabilities that influenced ST’s decision.

The article also describes in some detail the test case that was used by ST. At the end of the article the authors review the most important aspects of the flow that xSI offers. They point out that xSI enables hierarchical construction of the complete package assembly with step-by-step handling of multiple parts, including reuse of parts. In order to meet the needs of high-speed interface design and accommodate complex bump-out geometries, xSI includes efficient integration with external tools, and flexibility during IC-package connectivity planning and optimization.

Package design is by its very nature heavily constrained: physically, by its position between die and board, and by interface and interconnection requirements. There are thermal, power, signal integrity, area and many other requirements to satisfy. There is also no argument that ADAS systems impose some of the strictest requirements on system operation. They often include high-speed video streams and display output, as well as other sensor inputs and control outputs. ADAS systems also include extremely advanced conventional and AI processors. Siemens offers a lot of information about why ST chose xSI in the article, which is available here on the Siemens website.

Also Read:

Electromigration and IR Drop Analysis has a New Entrant

Formal Methods for Aircraft Standards Compliance

Verifications Horizons 2021, Now More Siemens


What the Heck is Collaborative Specification?

What the Heck is Collaborative Specification?
by Daniel Nenni on 10-04-2021 at 6:00 am

Git Commit

It’s been quite a while since I talked with Agnisys CEO and founder Anupam Bakshi, when he described their successful first user group meeting. I reached out to him recently to ask what’s new at Agnisys, and his answer was “collaborative specification.” I told him that I wasn’t quite sure what that term meant, and he offered to spend some time with me to explain. I’d like to share some of our conversation.

Can you please tell us what collaborative specification means?

It’s a term used in slightly different ways in multiple industries, but to us it’s similar to parallel design. The concept of multiple design engineers working together on a single chip is well established, and in fact it’s essential given the size and complexity of today’s devices. Distributed design teams and the ongoing pandemic mean that engineers are rarely working side by side physically, so a highly automated online system is required. The same argument applies to chip verification; modern testbenches are incredibly complicated and require many verification engineers collaborating. We’ve extended this idea to specification of the design, with many architects and designers working together virtually.

Why is collaborative specification different than collaborative documentation?

Well, if your design specification is just another text document, there’s really no difference at all. But the Agnisys approach is based on executable specifications that our tools use to automatically generate RTL designs, Universal Verification Methodology (UVM) testbenches, assertions for formal verification, C/C++ code for firmware and drivers, and end user documentation. Collaborating on executable specifications is a rigorous process that offers more opportunities for automation.

Are you talking about multiple engineers working on a single specification at the same time?

Yes, that’s sometimes the case, so it’s important to have a great change tracking and control system in place. I should comment that the way these systems have evolved is one of the biggest changes in software and hardware development over the years. Revision control systems used to be based on a locking model, where any engineers wishing to edit a file (schematics, RTL code, testbenches, programs, etc.) would “check out” the file and have exclusive access to it during the editing period. If someone else wanted to make a change, the first engineer would have to “check in” the edits first. For quite a few years now, the dominant approach has been the merge model, in which multiple engineers might be editing the same file at the same time. Resolving any inconsistencies has become an essential part of any version control system flow. Of course, we support the merge model in our IDS NextGen (IDS-NG) solution.

So is IDS-NG a version control system?

No, there are excellent solutions out there and we saw no reason to “reinvent the wheel.” In talking with our customers and partners, we found that the open source Git distributed version control system is extremely popular and very powerful. IDS-NG now has a tight, native integration with Git so that our users can create and edit executable specifications that are saved in a repository and managed by an industry-standard system.

Can you give an example of how this works?

Sure! The user creates and edits a specification using the intuitive graphical environment and language-aware editor within IDS-NG. They may import existing specification information from common standard formats such as IP-XACT and SystemRDL, or they may leverage the highly configurable standard IP blocks available in our SLIP-G library. Once they have reached a good checkpoint, they can easily “commit” their changes to a local branch of the design specification and then “push” these changes to the main development branch in the project-wide repository. Other users can then “pull” the new version of the file for use in their local branches.

What if someone else has edited the same file in parallel and already pushed those changes?

In that case, Git reports that there are conflicting edits, but it cannot resolve the differences by itself. This is where the cleverness of IDS-NG really kicks in. If the user pulls the other updated version of the file, IDS-NG performs a comparison (“diff”) of the two design specifications and reports the results. Some changes can be merged automatically, but others require user input. For example, if two users change the width of the same register in their respective edits, one of the users needs to decide which is the correct value. This information is displayed very clearly in the IDS-NG interface and it’s easy for the user to review and resolve any differences. This unique approach is easier and more intuitive than doing a simple text comparison on two SystemRDL files.
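The register-width example can be illustrated with a toy three-way merge over simple register-to-width maps. This is purely an illustrative sketch of the merge logic; IDS-NG's actual diff operates on full executable specifications, not Python dictionaries.

```python
def merge_specs(base, ours, theirs):
    """Toy three-way merge of register-width maps.

    An edit made on only one side merges automatically; a register
    changed differently on both sides is a conflict that needs user
    resolution, as in the width example above.
    Returns (merged, conflicts).
    """
    merged, conflicts = {}, []
    for reg in set(base) | set(ours) | set(theirs):
        b, o, t = base.get(reg), ours.get(reg), theirs.get(reg)
        if o == t:
            merged[reg] = o          # both sides agree
        elif o == b:
            merged[reg] = t          # only "theirs" changed it
        elif t == b:
            merged[reg] = o          # only "ours" changed it
        else:
            conflicts.append(reg)    # both changed it differently
    return merged, conflicts
```

For example, if both users change CTRL's width but only one touches STATUS, STATUS merges automatically while CTRL is flagged for review.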

It seems to me that this capability would be very useful in a CI-based workflow; is that correct?

Yes indeed. In continuous integration (CI), which Git supports well, all committed changes are pushed to the main development branch frequently, perhaps at the end of every work day. The motivation for this approach is finding conflicting edits sooner rather than later, so that they can be resolved before two versions of the specification diverge wildly. You could argue that the “compare documents” function in Microsoft Word played a big role in enabling collaborative documentation. With the recent Git integration and diff capability in IDS-NG, we’ve done the same to enable collaborative specification.

Will you be demonstrating these new features at the upcoming virtual DVCon Europe show?

Yes, we are a Silver Sponsor for this important event, and we have a very nice collaborative specification demo and video available. We are looking forward to sharing them with the online DVCon Europe attendees. We’re excited at the advanced project workflows we now enable, and we expect a lot of user interest.

Anupam, thank you for the updates and the technical information.

Thanks, Dan. It’s always a pleasure to talk with you.

Also read:

AUGER, the First User Group Meeting for Agnisys

Register Automation for a DDR PHY Design

Automatic Generation of SoC Verification Testbench and Tests


Autonomous Vehicle Rationale Breaks Down

Autonomous Vehicle Rationale Breaks Down
by Roger C. Lanctot on 10-03-2021 at 6:00 am

Autonomous Vehicle Rationale Breaks Down

The latest SmartDrivingCars podcast raised fundamental questions regarding the rationale for developing autonomous cars while debating the various paths to market adoption. The discussion took place between Alain Kornhauser – faculty chair of autonomous vehicle engineering at Princeton University and Adriano Alessandrini, a professor at the University of Florence.

Ostensibly, the conversation between Kornhauser and Alessandrini was to focus on the need to improve road systems and infrastructure to support autonomous mobility. The wide-ranging discussion detoured into the various assumptions and business models intended to justify and enable the adoption of autonomous vehicle tech.

The discussion ultimately and inadvertently challenged the fundamental assumptions behind the efficacy and purpose of autonomous driving. The conversation pointed toward a single justification for developing autonomous vehicle tech: to serve physically or financially disadvantaged populations.

One tends to arrive at this conclusion by considering the various autonomous vehicle adoption scenarios, most of which simply break down upon closer scrutiny.

  1. Communities will dedicate lanes for autonomous vehicles along existing roadways providing privileged access for these vehicles. These vehicles might be equipped with technology designed to surrender vehicle control to infrastructure-based guidance systems.  Counter Argument: If communities choose to dedicate lanes to autonomous vehicles – as in the case of Michigan creating its “Road of the Future” to Ann Arbor – it is clear that tracks for trains would be a superior choice.  “Specially” equipped cars suitable to operate in such privileged lanes would end up being more expensive and likely only accessible to the rich.
  2. Provide incentives for robotaxis or roboshuttles to operate alongside existing transit solutions. Counter Argument: Robotaxis or roboshuttles are likely to gravitate to the most popular and profitable routes of an already subsidized public transit system, further undermining the already challenging finances of that system. (An example of this is the Lyft-MBTA transit trial in Boston which revealed this tendency.) This will, in turn, put pressure on the financial viability of less popular routes – including those serving disadvantaged neighborhoods. (The MBTA estimates it loses $20M in fares annually to Uber/Lyft riders – and that Uber and Lyft contribute to traffic congestion.)
  3. Launch autonomous vehicles in suburbs. Counter Argument: Waymo has already demonstrated that offering robotaxis or roboshuttles in suburbs is not viable due to the saturation of vehicle ownership. Consumers in typical middle class suburbs simply aren’t interested in the robotaxi proposition.
  4. Introduce robotaxis in cities. Counter Argument: Robotaxis are too slow and are likely to be too expensive. Human-driven taxis do a better job. Also, robotaxis are likely to skim off the most popular and lucrative routes (putting financial pressure on providers serving less popular routes and disadvantaged neighborhoods) unless they are programmed to serve disadvantaged neighborhoods.
  5. Exclude cars from city centers and only allow robotaxis and roboshuttles. Counter Argument: Cities that choose this path will, in effect, have created a public concession with related bidding processes and funding. Such systems will, by their nature, become public transportation: subject to the financial challenges of existing public transit, potentially in conflict with existing solutions, and bound by the same obligation to serve the entire population regardless of financial constraints.

The unavoidable conclusion is that autonomous vehicle technology is most ideally suited to serving financially or physically disadvantaged populations. Since suburban deployment does not appear to make much sense, bringing autonomous vehicle tech to other parts of the city only works as a direct replacement for public transportation, not as a competing alternative.

In this context, the most notable conclusion from the podcast was Professor Kornhauser’s return to his two main themes – consistent throughout all of his podcasts:

  • Existing infrastructure to support autonomous vehicle operation is lousy. The problem starts with the poor application and maintenance of lane-marking paint; local authorities and industry constituencies need to start fixing that first.
  • Cars need to do a better job assisting drivers with the driving task. Professor Kornhauser is outraged at the failure to see wider and more aggressive application of emergency braking technology to avoid collisions at higher speeds.

Ultimately, the key takeaway from the SmartDrivingCars podcast is that more attention needs to be paid to adopting and deploying ADAS-type (advanced driver assist system) technologies for lane keeping, blind spot detection, adaptive cruise control, and emergency braking. It’s likely that robotaxis intended for wide deployment in cities will evolve as public transit propositions.

The long development cycle of autonomous technology and the enduring interest in vehicle ownership are likely to render suburban areas hostile territory for autonomous vehicles indefinitely. All driving environments, however, are fertile ground for deploying driver assistance technology. On that we can agree.

SmartDrivingCars Podcast Episode 233 – Making mobility happen in Europe, Trenton and beyond – https://tinyurl.com/tw7jf7na