
Radiation Tolerance. Not Just for ISO 26262

by Bernard Murphy on 04-30-2020 at 6:00 am


Years before ISO 26262 (the auto safety standard) existed, a few electronics engineers had to worry about radiation hardening, but not for cars. Their concerns were the same ones we have today – radiation-induced single event effects (SEE) and single event upsets (SEU). SEEs are root-cause effects – some form of radiation, maybe cosmic, maybe generated on earth, smacks into a chip die causing an ionization cascade. That may lead to an SEU, where a bit in the logic is flipped. SEEs can also trigger latchup, gate rupture and other damage. But most efforts on rad hardening today, that I know of, focus on SEUs.

Two factors amplify the importance of SEUs – radiation flux intensity and the sensitivity of the circuit. Radiation flux at ground level, mostly neutrons triggered by cosmic ray events in the upper atmosphere, wasn’t energetic enough in most applications to be an issue until we got to smaller fabrication geometries where a bit can be flipped by a single ionization event.

Obviously, above the atmosphere and at high altitudes within it, cosmic ray energy and flux are less moderated by miles of air, which means that satellites and aircraft need a higher level of hardening. On the ground, some applications such as the European ITER fusion reactor also need to use specially hardened FPGAs. The same applies to instrumentation around nuclear reactors.

Mentor recently released a white paper, “RETHINKING YOUR APPROACH TO RADIATION MITIGATION”, talking about a general methodology towards handling this need, particularly directed to the FPGA design so common in these aerospace and nuclear applications. Interestingly this paper doesn’t push any tools or even classes of tools. It’s one of those happiest finds among vendor white papers – a commercial-free information resource!

The paper starts with a common FPGA development flow for high radiation environments. This should look familiar to ISO 26262 aficionados, with a parallel flow for FMEDA, fault analysis, fault protection and fault verification. I’m thinking we may already be used to a decent level of automation in this flow in the automotive domain. There seems to be less of this in aerospace and nuclear, or perhaps less for FPGA design in general; maybe because FPGA design methodologies often lag behind those for mainstream SoC design?

Whatever the reason, it looks like designers in these domains depend mostly on expert-driven and largely manual fault analysis. The theme of the paper is to argue the benefits of moving towards a more automated, exhaustive (to some level) and scalable approach which will work not only with in-house designed logic but also with embedded 3rd-party IP.

The paper walks in some detail through the challenges in conventional approaches to fault analysis, through metrics for fault coverage and FIT, and the structural analysis that must be performed to assess these metrics, from low-level logic up to a full design. It also talks about common fault mitigation approaches: parity, CRC, ECC, TMR (triple modular redundancy), duplication and lockstep checking, you know the list.
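To give a flavor of two of those mitigation schemes (my own toy illustration, not from the paper), here is a minimal Python sketch of bitwise TMR majority voting and even-parity generation:

```python
def tmr_vote(a: int, b: int, c: int) -> int:
    """Bitwise majority vote across three redundant copies of a word (TMR)."""
    return (a & b) | (b & c) | (a & c)

def parity_bit(word: int) -> int:
    """Even parity over a word: detects, but cannot correct, a single flip."""
    p = 0
    while word:
        p ^= word & 1
        word >>= 1
    return p

# A single upset in one copy is out-voted by the two good copies.
golden = 0b10110010
upset = golden ^ (1 << 3)                        # SEU flips bit 3 of one copy
assert tmr_vote(golden, upset, golden) == golden
assert parity_bit(upset) != parity_bit(golden)   # parity catches the flip
```

TMR corrects the upset at the cost of triplicated logic; parity only flags it, which is why the two are used at different points in a design.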

The next topic is fault protection, with a nod to fail-operational behavior (also becoming more common in ISO 26262). The main emphasis here is on the error-prone nature of manually inserting mitigation techniques and the challenge of re-verifying that those changes did not break mission-mode functionality. This implies a need for more automated equivalence checking.

The final section is on fault verification and the challenges in intelligently faulting a sufficient set of nodes to ensure a high level of coverage while keeping that set to a manageable level (since fault simulation is going to burn a lot of compute cycles).
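To show why the faulted-node set matters, here is a deliberately tiny, hypothetical fault-injection sketch (the function, fault model and stimulus count are all made up for illustration): flip one bit at a time at each fault site and search the stimulus set for an input that exposes the difference:

```python
import random

def dut(x: int) -> int:
    """Toy 'design under test': a small combinational function of 8 input bits."""
    return (x ^ (x >> 1)) & 0xFF

def inject_seu(value: int, bit: int) -> int:
    """Model an SEU as a single bit flip at one fault site (here, an input bit)."""
    return value ^ (1 << bit)

# Crude fault campaign: a fault site counts as covered if at least one
# stimulus makes the faulted output differ from the good output.
random.seed(0)
stimuli = [random.randrange(256) for _ in range(32)]
detected = {bit for bit in range(8)
            if any(dut(inject_seu(x, bit)) != dut(x) for x in stimuli)}
coverage = len(detected) / 8
```

Even in this toy, each fault site multiplies the simulation count by the stimulus set size, which is exactly why intelligently pruning the node list matters at real design scale.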

This is an interesting insight into the needs of the aerospace and nuclear electronics design communities, who should definitely find the paper a good backgrounder. You can read it HERE.


Can TSMC Maintain Their Process Technology Lead

by Scotten Jones on 04-29-2020 at 10:00 am


Recently Seeking Alpha published an article “Taiwan Semiconductor Manufacturing Company Losing Its Process Leadership To Intel” and Dan Nenni (SemiWiki founder) asked me to take a look at the article and do my own analysis. This is a subject I have followed and published on for many years.

Before I dig into specific process density comparisons between companies, I wanted to clear up some misunderstandings about Gate All Around (GAA) and Complementary FET (CFET) in the Seeking Alpha article.

Gate All Around (GAA)
Just as the industry switched from planar transistors to FinFETs, it has been known for some time that a transition from FinFETs to something else will eventually be required to enable continued shrinks. A FinFET has a gate on three sides, providing improved electrostatic control of the device’s channel compared to a planar transistor, which has a gate on only one side. Improved electrostatic control provides lower channel leakage and enables shorter gate lengths. FinFETs also provide a 3D transistor structure with more effective channel width per unit area than planar transistors, therefore providing better drive current per unit area.

It is well established that a type of GAA device – the horizontal nanosheet (HNS) – is the next step after FinFETs. If the nanosheets are very narrow you get nanowires and significantly improved electrostatics. The approximate limit of gate length for a FinFET is 16nm and for a horizontal nanowire (HNW) is 13nm, see figure 1. Shorter gate lengths are a component of shrinking Contacted Poly Pitch (CPP) and driving greater density.

Figure 1. Contacted Poly Pitch CPP Scaling Challenges.

Please note that in figure 1, the 3.5nm TSMC HNW is just an example of how dimensions might stack up; we know they are doing FinFETs at 3nm.

The problem with a HNW is that the effective channel width is lower than it is for a FinFET in the same area. The development of HNS overcame this problem and can offer up to 1.26x the drive current of FinFETs in the same area although they sacrifice some electrostatic control to do it, see figure 2.

Figure 2. Logic Gate All Around (GAA).

Another advantage of HNS is that the process is essentially a FinFET process with a few changes. This is not meant to understate the difficulty of the transition: the HNS-specific steps are critical steps, and the geometry of a HNS will make creating multiple threshold voltages difficult, but it is a logical evolution of FinFET technology. Designers are used to FinFETs with 4 and 5 threshold voltages available to maximize the power-performance trade-off; going back to one or two threshold voltages would be a problem. This is still an area of intense HNS development and needs to be solved for wide adoption.

At the “3nm” node Samsung has announced a GAA HNS they call a Multibridge; TSMC, on the other hand, is continuing with FinFETs. Both technologies are viable options at 3nm and the real question should be who delivers the better process.

Complementary FETs (CFET)
In the Seeking Alpha article there is a comment about a CFET offering 6x the density of a 3-fin FinFET cell. That isn’t how it works, and in fact the comparison doesn’t even make sense.

Logic designs are made up of standard cells; the height of a standard cell is given by the metal 2 pitch (M2P) multiplied by the number of tracks. A recent trend is Design Technology Co-Optimization (DTCO), whereby, in order to maximize shrinks, the number of tracks has been reduced at the same time as M2P. In a 7.5-track cell it is typical to have 3 fins per transistor, but as we have transitioned to the 6-track cells available at 7nm from TSMC and 5nm from Samsung, the fins per transistor are reduced to 2 due to spacing constraints. In order to maintain drive current the fins are typically taller and optimized in other ways. As the industry moves to 5-track cells, the fins per transistor will be further reduced to 1.
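The cell-height arithmetic is simple enough to sketch; the numbers below are illustrative placeholders of my own, not disclosed values for any process:

```python
def cell_height_nm(m2p_nm: float, tracks: float) -> float:
    """Standard-cell height = metal 2 pitch (M2P) x track count."""
    return m2p_nm * tracks

# Illustrative only: shrinking from a 40nm M2P, 7.5-track cell to a
# 28nm M2P, 6-track cell cuts cell height from 300nm to 168nm,
# before any CPP (cell width) scaling is even counted.
assert cell_height_nm(40, 7.5) == 300.0
assert cell_height_nm(28, 6) == 168
```

This is why DTCO attacks tracks and M2P together: the two multiply, so modest progress on each compounds into a large cell-area shrink.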

Figure 3. Standard Cell layouts

CFETs are currently being developed as a possible path to continue scaling beyond HNS. In a CFET an nFET and a pFET are stacked on top of each other as HNS of different conductivity types. In theory CFETs can scale over time by simply stacking more and more layers and may even allow lithography requirements to be relaxed, but there is a long list of technical challenges to overcome to realize even a 2-deck CFET. Also, due to interconnect requirements, going from a HNS to a 2-deck CFET is approximately a 1.4x to 1.6x density increase, not the 2x that might be expected. For the same process node, a 2-deck CFET would likely offer a less than 2x density advantage over an optimized FinFET, not 6x as claimed in the Seeking Alpha article.

2019 Status
In 2019 the leading logic processes in production were Intel’s 10nm process, Samsung’s 7nm process and TSMC’s 7nm optical process (7FF). Figure 4 compares the three processes.

Figure 4. 2019 Processes.

In figure 4, M2P is the metal 2 pitch as previously described, tracks are the number of tracks and cell height is M2P x tracks. CPP is the contacted poly pitch and SDB/DDB indicates whether the process has a single diffusion break or double diffusion break. The width of a standard cell is some number of CPPs depending on the cell type, and DDB adds additional space versus SDB at the cell edge. The transistor density is a weighted average of transistor density based on a mix of NAND cells and scan flip-flop cells in a 60%/40% weighting. In my opinion this is the best metric for comparing process density; it isn’t perfect, but it takes design out of the equation. A lot of people look at an Intel microprocessor designed for maximum performance and compare the transistor density to something like an Apple cell phone processor with a completely different design goal, and that simply doesn’t provide a process-to-process comparison under the same conditions.
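The weighted-density metric is a one-line blend; the cell densities in the example below are made-up placeholders purely to show the 60/40 weighting, not measured values for any process:

```python
def weighted_density(nand_mtx_mm2: float, ff_mtx_mm2: float,
                     nand_weight: float = 0.6) -> float:
    """Weighted average of NAND-cell and scan flip-flop-cell transistor
    densities (MTx/mm2), weighted 60%/40% as described in the text."""
    return nand_weight * nand_mtx_mm2 + (1 - nand_weight) * ff_mtx_mm2

# Placeholder densities, purely to show the blend:
assert abs(weighted_density(100.0, 50.0) - 80.0) < 1e-9
```

Because small NAND cells pack far more densely than flip-flops, the fixed mix keeps one vendor's design style from skewing the comparison.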

It should be noted here that Samsung has a 6nm process and TSMC has a 7FFP process that both increase the transistor density to around 120MTx/mm2. In the interest of clarity I am focusing on the major nodes.

2020 Status
At the end of 2019, Samsung and TSMC both began risk production of 5nm processes and both processes are in production in 2020.

5nm is where TSMC really stakes out a density lead: TSMC’s 5nm process has a reported 1.84x density improvement versus 7nm, whereas Samsung’s 5nm process is only a 1.33x density improvement. Figure 5 compares Intel’s 10nm process to Samsung and TSMC’s 5nm processes, since 10nm is still Intel’s densest process in 2020.

Figure 5. 2020 Processes.

The values for Samsung in figure 5 are all numbers that Samsung has confirmed. The TSMC M2P is an incredible 28nm, a number we have heard rumored in the industry. The rest of the numbers are our estimates to hit the density improvement TSMC has disclosed.

Clearly TSMC has the process density lead at the end of 2020.

2021/2022
Now the situation gets fuzzier. Intel’s 7nm process is due to start ramping in 2021 with a 2.0x shrink. Samsung and TSMC are both due to begin 3nm risk starts in 2021. Assuming Intel hits their date, they may briefly have a production density advantage, but Intel’s 14nm and 10nm processes were both several years late. With COVID-19 impacting the semiconductor industry in general and the US in particular, a 2021 production date for Intel may be even less likely.

Figure 6 compares 2021/2022 processes assuming that, within plus or minus a quarter or two, all three processes will be available; I believe this is a fair assumption. Intel has said their density will be 2.0x 10nm. TSMC, on their 2020-Q1 conference call, said 3nm will be 70% denser than 5nm, so presumably 1.7x. Samsung has said 3nm reduces the die size by 35% relative to 5nm, which equates to approximately a 1.54x density increase.
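Those two claims are quoted in different units, and the conversion is worth making explicit: "X% denser" is already a density multiplier, while "X% smaller die" has to be inverted:

```python
def gain_from_pct_denser(pct: float) -> float:
    """'X% denser' is quoted directly as a density multiplier: 1 + X/100."""
    return 1 + pct / 100

def gain_from_pct_area_shrink(pct: float) -> float:
    """'X% smaller die' converts to a density multiplier as 1 / (1 - X/100)."""
    return 1 / (1 - pct / 100)

assert gain_from_pct_denser(70) == 1.7                     # TSMC's 3nm claim
assert round(gain_from_pct_area_shrink(35), 2) == 1.54     # Samsung's 3nm claim
```

Mixing up the two conventions is a common source of error when comparing vendor density claims.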

In order to make Intel’s numbers work I am assuming an aggressive 26nm M2P with 6 tracks, an aggressive 47nm CPP for a FinFET and SDB.

Samsung has disclosed to SemiWiki a 32nm M2P for 4nm, and I am assuming they maintain that for 3nm with a 6-track cell. For CPP, with the change to a GAA HNS, they can achieve 40nm, and SDB.

In the case of TSMC, they are shrinking 1.7x off of a 5nm process that is already a 1.84x shrink from 7nm, and they are bumping against some physical limits. With them staying with FinFETs I don’t expect the CPP to be below 45nm for performance reasons, and even with SDB they will have to have a very aggressive cell height reduction. By implementing a buried power rail (BPR) they can get to a 5-track cell, but BPR is a new and difficult technology, and an M2P of 22nm is then required. Frankly, such a small M2P raises issues with lithography and line resistance, and BPR is also aggressive, so I think this process will be incredibly challenging, but TSMC has an excellent track record of execution.

Figure 6 summarizes the 2021/2022 process picture.

Figure 6. 2021/2022 Processes.

Some key observations from figure 6:

  1. The individual numbers in figure 6 are our estimates and may need to be revised as we get more information, but the overall process densities match what the companies have said and should be correct.
  2. In spite of being the first to move to HNS, Samsung’s 3nm is the least dense of the three processes. The early move to HNS may make it easier for Samsung to shrink in the future, but their 3nm node isn’t providing the density advantage that you might expect from HNS.
  3. Yes Intel is doing a 2.0x shrink and TSMC only a 1.7x shrink, but TSMC is doing a 1.84x shrink from 7nm to 5nm and then a 1.7x shrink from 5nm to 3nm in roughly the same time frame that Intel is doing a 2.0x shrink from 10nm to 7nm. A 1.7x shrink on top of a 1.84x shrink is a huge accomplishment, not a disappointment.

What’s Next
Beyond 2021/2022 I expect Intel and TSMC to both adopt HNS and Samsung to produce a second generation HNS. This will likely be followed by CFETs around 2024/2025 from all three companies. All of these confirmed numbers and projections come from the IC Knowledge – Strategic Cost and Price Model. The Strategic Cost and Price Model is not only a company specific roadmap of logic and memory technologies into the mid to late 2020s, it is also a cost and price model that produces detailed cost projections as well as material and equipment requirements.

Interested readers can see more detail on the Strategic Cost and Price Model here.

Conclusion
TSMC took the process density lead this year with their 5nm process. Depending on the exact timing of Intel’s 7nm process versus TSMC’s 3nm, Intel may briefly regain a process density lead, but TSMC will quickly pass them with their 3nm process at over 300 million transistors per square millimeter!

Also Read:

SPIE 2020 – ASML EUV and Inspection Update

SPIE 2020 – Applied Materials Material-Enabled Patterning

LithoVision – Economics in the 3D Era


Starting a Chip Company? Silicon Catalyst and Arm Are Ready to Help

by Mike Gianfagna on 04-29-2020 at 6:00 am


Anyone who has started a company knows that landing the seed round of investment is just the beginning. There are many decisions to face.  When to start building a sales team?  What parts of the company’s infrastructure to outsource? How to price and promote your product? These are just a few of the questions to be answered. If your end product is a chip, you also face a maze of tasks regarding access to process technology, packaging and test, EDA tools and semiconductor IP, none of which is particularly easy to choose or inexpensive.

This is why a press announcement that crossed the wire today caught my attention – Silicon Catalyst Collaborates with Arm to Accelerate Semiconductor Startups.

Silicon Catalyst is an incubator that focuses exclusively on chip companies. They offer a broad range of support to get your company off the ground. I believe their singular focus on chip startups and their breadth of support make them unique in the world.

Silicon Catalyst is a new member of the SemiWiki family, and I got a chance recently to speak with Peter Rodriguez, CEO at Silicon Catalyst, about the organization and the significance of its press release with Arm. Pete is no stranger to the semiconductor business, with over 35 years of executive experience. He was formerly VP & GM of Interface and Power at NXP Semiconductors. Prior to NXP, Pete was CEO of Exar Corporation, CEO of Xpedion Design Systems, Chief Marketing Officer at Virage Logic and Major Account Manager at LSI Logic. Mr. Rodriguez also retired from the US Naval Reserves with the rank of Commander. He brings a wealth of technical and business leadership to Silicon Catalyst.

The primary news in the press release is that Arm has joined Silicon Catalyst as both a Strategic and In-Kind Partner, giving startup companies being incubated by Silicon Catalyst zero-cost access to trusted IP and support from Arm.

Silicon Catalyst’s In-Kind Partner Program offers a wide range of design tools, simulation software, design services, foundry PDKs and MPW runs, test program development, tester access and semiconductor IP, now including industry-leading IP from Arm. Arm becomes the 33rd In-Kind partner for the incubator, joining the likes of TSMC, Synopsys, Mentor, Advantest, and Keysight, to name a few. Companies accepted into the incubator have two years of no-cost or significantly discounted access to these tools and services during the incubation period, resulting in a dramatic reduction in the cost of chip development.

Pete explained that since 2015, over 300 startup companies have engaged with Silicon Catalyst and the organization is closing in on 30 companies that have been admitted to the incubator program. Pete also discussed the many other programs at Silicon Catalyst and how they help chip startups. The Strategic Partner Program provides participants early access to review and help select the silicon startups seeking to participate in the Silicon Catalyst Incubator. Arm has also joined the Strategic Partner Program, making it the first company to join as both an In-Kind Partner and a Strategic Partner.

Silicon Catalyst maintains a growing network of over 150 seasoned Silicon Valley veterans who are available to advise portfolio companies. Their skills span technology, manufacturing, business development, sales, staffing, finance and legal matters. The Silicon Catalyst Angels was launched in July 2019 as a separate organization to provide access to seed and Series A funding for Silicon Catalyst portfolio companies.

I also had a chance to speak with Jim Hogan, Silicon Catalyst board member and semiconductor/EDA industry veteran. Jim has invested in many technology startups and has helped a lot of them achieve a successful exit. He knows the challenges of getting a chip startup off the ground well. Jim described the typical investment for launching a chip company to include $3M – $5M to get to proof of concept and perhaps another $20M or so to engage with customers. The Silicon Catalyst incubator can dramatically reduce these numbers, thanks to all their preferred tool and service access as well as the expert guidance of their advisor network.

In Jim’s words, it’s all about value preservation. Reducing the previously mentioned investment amounts allows the founding team to keep more of their company, making Silicon Catalyst portfolio companies a significantly more viable investment, and that’s good for everyone. Having seen many chip startup business plans, Jim was also able to put the current press release in context. He explained, “most of the startups I speak with say two things when they walk through the door – we need TSMC and we need Arm.” Jim sits on the board of five Silicon Catalyst portfolio companies, so he invested in the Silicon Catalyst model at multiple levels.

After my discussions with Pete and Jim it became clear that Silicon Catalyst has come a long way since its kickoff in 2015. Pete also explained that, while headquartered in Silicon Valley, Silicon Catalyst is expanding internationally. There is currently a joint venture in Chengdu, China and a presence in Israel through local partners with extensive semiconductor industry experience. The organization is also exploring expansion in Europe and India.

Simply put, if you want to start a chip company, Silicon Catalyst and Arm have you covered. Pete concluded our discussion by telling me the tagline for Silicon Catalyst. It’s a sentiment that is simple, direct and especially in the current times, a very important one I believe:

it’s about what’s next.

If you have a great idea and want to explore it with Silicon Catalyst, you can start the process here.

Also Read:

Silicon Catalyst Fuels Worldwide Semiconductor Innovation

Webinar: Investing in Semiconductor Startups

Silicon Catalyst Hosts an All-Star Panel December 8th to Discuss What Happens Next?


Webinar: Real-time In-Chip Monitoring to Boost multi-core AI, ML, DL Systems

by Daniel Payne on 04-28-2020 at 10:00 am


During the COVID-19 pandemic I’m using Zoom and attending more webinars to keep updated on semiconductor industry trends, and one huge trend is the importance of AI applied to SoCs. Using more cores to handle ML and DL makes sense, but then how do you keep the chips within their power and reliability limits while at the same time achieving the greatest data throughput?

I’ve read about AI chips that have billions to even trillions of transistors, and that’s a huge challenge in several areas:

  • Localized junction temperatures impact performance and reliability
  • IR drop caused by transient switching currents increases timing delays
  • Process variations are localized and affect performance

In the 1970s we placed process monitor IP into the scribe lines of each wafer in order to answer some of these questions about process variation, but at the 40nm node and smaller nodes, we really need to have IP embedded within an SoC to understand what the local junction temperature is, how the VDD level is responding to noise, and which process corner the transistors are operating under.

Moortec is a UK-based IP provider that has delivered embedded in-chip sensors and monitoring to address these challenges for SoC design across several disciplines:

  • AI
  • Data Center
  • HPC
  • Automotive
  • Consumer

Multi-core Chip with Hotspots

With an accurate in-chip voltage monitor your design engineers can implement appropriate voltage scaling approaches:

  • Static Voltage Scaling
  • Dynamic Voltage Scaling
  • Adaptive Voltage Scaling
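As a rough, hypothetical illustration of the difference (my own toy, not Moortec's algorithm): static scaling fixes VDD at design time, while an adaptive scheme closes a control loop around the in-chip monitor reading, something like:

```python
def avs_step(vdd_mv: int, slack_ps: float,
             v_min: int = 600, v_max: int = 900, step: int = 10) -> int:
    """One iteration of a toy adaptive-voltage-scaling loop: trim VDD
    while the monitored timing slack is comfortable, restore it when
    the margin erodes. All thresholds here are invented placeholders."""
    if slack_ps > 50:          # plenty of margin: save power
        return max(v_min, vdd_mv - step)
    if slack_ps < 10:          # margin eroding: raise voltage
        return min(v_max, vdd_mv + step)
    return vdd_mv              # in the dead band: hold
```

The loop only works if the slack input is trustworthy, which is where accurate in-chip voltage and process monitors come in.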

Multiple Voltage Monitors

Webinar

Register to view this webinar on Thursday, May 7th, 10AM PDT (6pm BST). There are two presenters and Daniel Nenni from SemiWiki is the host:

About Moortec

The company have been providing innovative embedded subsystem PVT IP solutions for over a decade, empowering their customers with the most advanced monitoring IP on 40nm, 28nm, 16nm, 12nm, 7nm & 5nm.

Related Blogs


AI, Safety and Low Power, Compounding Complexity

by Bernard Murphy on 04-28-2020 at 6:00 am


The nexus of complexity in SoC design these days has to be in automotive ADAS devices. Arteris IP highlighted this at the recent Linley Processor Conference, where they talked about an ADAS chip that Toshiba had built. This has multiple vision and AI accelerators, both DSP- and DNN-based. It is clearly aiming for ISO 26262 ASIL D certification, since the design separates a safety island from the processing island, pretty much the only way you can get to ASIL D in a heterogeneous mix of ASIL-level on-chip subsystems.

Equally clear, it’s aiming to run at low power – around 2.7W for the processing island (the bulk of the functionality). It’s all very well to be smart but when you have dozens of smart components scattered around the car, that adds up to a lot of power consumption. The car isn’t going to be very smart if it runs its battery flat.

These are to some extent competing objectives. I’ve talked before about AI and safety and the need for a safety island to deliver ASIL D performance around AI accelerators. I’ll come back to that. But first I want to talk about power management and safety in on-chip networks.

Low-power design is one of those messy realities that spans all levels of design, and this affects the on-chip networks, frequently NoCs in an SoC (there has to be a rap lyric in there somewhere), as much as any other aspect of the design. Down at the atomic level of a NoC, most of it is combinational, therefore as low power as you can reasonably reach in a synchronous world. Those DFFs that are needed are created by the network generator, and this can be done in a way that is friendly to low-power synthesis, letting EDA synthesis tools infer clock-gating on banked registers. This covers around 95% of the DFFs in a NoC according to Kurt Shuler (VP Marketing at Arteris IP).

NoCs are generated with pre-defined unit-level building blocks – network interface units (NIU), arbiters and the like. Each of these can have built-in additional gating control so that, for example when an NIU is inactive it can be completely gated. All of this is zero-latency control managed by a little logic in each function. And each of these building blocks supports ASIL D duplication where needed so you get power efficiency and safety.

Power management at the next level up in the NoC – the SoC level – gets a lot more interesting. For a NoC entirely contained within a power or voltage/frequency domain (for DVFS), expectations are no different than they are for any other logic entirely within that domain. They need to support voltage up/down and/or frequency up/down as demanded.

But some NoCs cross between domains. Parts of the NoC may switch off or change voltage and frequency while other parts remain active or don’t change. That requires intelligent interfacing at domain boundaries within the NoC. Now you need a NoC power controller in each domain, communicating with the SoC power controller. You also need elements at interfaces to handle handshaking between domains, so that for example a request from an on-domain to an off-domain will trigger wake-up and wait for the off-domain to be ready. Equally, appropriate level-shifting and data-buffering will be used between DVFS domains. The cool thing is that the NoC generation tooling automatically takes care of configuring all of this and tying it all together based on your higher-level system requirements.
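A toy state-machine sketch of that wake-up handshake (purely hypothetical, not the Arteris implementation) looks something like:

```python
from enum import Enum

class LinkState(Enum):
    """Power state of the far side of a NoC domain crossing."""
    OFF = "off"
    WAKING = "waking"
    READY = "ready"

def on_request(state: LinkState) -> LinkState:
    """A request into a powered-down domain triggers wake-up; the
    initiator's traffic is held until the target reports ready."""
    return LinkState.WAKING if state is LinkState.OFF else state

def on_wake_done(state: LinkState) -> LinkState:
    """The target domain's power controller signals wake-up complete."""
    return LinkState.READY if state is LinkState.WAKING else state
```

The point is that the initiator never sees the power transition directly; it just stalls at the boundary until the state machine reaches READY.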

Which brings me back to system-level safety. First, to get to a high level of safety at the SoC level, you’ll want to use duplication as needed. But duplication burns more power, so there’s a balance between safety and power. In Arteris IP NoCs, this balance is managed carefully, especially through optimizing unit-level duplication. Second, the power-down and DVFS scaling support for low power has an added benefit for safety in a safety-island-supported architecture. The safety island can initiate a power down for a full reset when, for example, an AI accelerator misbehaves.

One other interesting point Kurt told me: the Linley Processor Conference presentation described how Toshiba uses Arteris IP FlexNoC and Ncore interconnects to implement their SoC architecture, and Toshiba used temperature monitoring to throttle processing performance. Naturally they use the NoC to manage this.

Obviously managing AI, safety and low power is a delicate but achievable balance in a NoC-centric SoC, judging by this Toshiba ADAS design. You can learn more details if you attended the Linley Spring Processor Conference 2020 by downloading the proceedings HERE. Arteris IP will also host the presentation on their www.arteris.com/resources web page next month.

Also Read:

That Last Level Cache is Pretty Important

Trends in AI and Safety for Cars

Autonomous Driving Still Terra Incognita


Synopsys – Turbocharging the TCAM Portfolio with eSilicon

by Mike Gianfagna on 04-27-2020 at 10:00 am


About 90 days ago, Synopsys completed the acquisition of certain IP assets from eSilicon. The remaining entirety of eSilicon was acquired by Inphi Corporation. I was the VP of marketing at eSilicon during that acquisition so it’s very interesting to me to find out how things are going with those certain IP assets.  I got an opportunity to find out recently.

I spent some time speaking with Rahul Thukral, senior product marketing manager at Synopsys. Rahul has spent a lot of time in memory design at Mentor Graphics, Virage Logic and Synopsys. We had a spirited discussion about those certain IP assets from eSilicon as the main focus of the team acquired by Synopsys was memory design.

First of all, Rahul reported that 90 days in, the team is completely integrated into Synopsys, including a Google cloud-based design environment that was developed at eSilicon and subsequently became part of the Synopsys acquisition. I’m sure many of you have either been through an acquisition or watched one (or more) closely. To be fully integrated and productive in 90 days or less is quite an accomplishment. I see it as a strong endorsement for the integration skills of Synopsys and the solid methodology and design talent of eSilicon.

Rahul was very complimentary of the eSilicon team – a strong addition to the Synopsys memory design capability that was right on the mark.  I know and respect the eSilicon team and it was nice to hear this assessment from an independent point of view. The acquisition included several memory IP titles, including TCAMs and multi-port memory compilers, as well as interface IP with high bandwidth interface (HBI) support. I will cover the comments around ternary content-addressable memories (TCAMs) here. This is a key growth area for Synopsys, as it was for eSilicon and is a good proxy for the other memory products.

For the uninitiated, a TCAM essentially operates in an inverse manner of a regular memory.  In a regular memory, one provides an address and the memory returns the contents of that address. In a TCAM, one provides the content of interest and the TCAM returns the address(es) where that content is stored. TCAMs find widespread use in networking applications, where it’s important to quickly keep track of source and destination addresses for network packets. Rahul explained that the addition of eSilicon’s TCAM products had an “instant impact” on the Synopsys portfolio.
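A toy software model captures that inverted lookup direction, plus the "ternary" part: each entry carries a mask marking don't-care bits. (This is purely illustrative; a real hardware TCAM searches all entries in parallel in a single cycle, which is exactly what software cannot do.)

```python
class TernaryCAM:
    """Toy TCAM model: each entry stores (value, mask); a search key
    matches an entry when it agrees with the value on all masked-in bits.
    search() returns the addresses of every matching entry."""

    def __init__(self):
        self.entries = []            # address = index in this list

    def write(self, value: int, mask: int) -> None:
        self.entries.append((value, mask))

    def search(self, key: int) -> list[int]:
        return [addr for addr, (value, mask) in enumerate(self.entries)
                if (key & mask) == (value & mask)]

tcam = TernaryCAM()
tcam.write(0b10100000, 0b11110000)   # prefix rule: match any 1010xxxx
tcam.write(0b10101100, 0b11111111)   # exact-match rule
assert tcam.search(0b10101100) == [0, 1]   # hits both rules
assert tcam.search(0b10100001) == [0]      # hits only the prefix rule
```

The don't-care masking is what makes TCAMs ideal for longest-prefix routing lookups, where many rules can match one packet address.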

Prior to the acquisition, Synopsys was focusing on high-density TCAMs and eSilicon was focusing on high-speed TCAMs. Merging these two differentiating capabilities makes for a strong market position. Synopsys had a design philosophy of building software compilers to create the various instances of their memory products. eSilicon had the same philosophy, making the integration task easier. Synopsys can now offer greater than 2 GHz operation in the latest technology – a strong result. Beyond performance, power is also an important consideration. Rahul explained that some networking chips can have hundreds of TCAMs. If they all start firing at once, a phenomenon known as “power ringing” can occur. This essentially creates a nightmare signal integrity problem. The eSilicon team had a strong focus on power optimized designs, as did the Synopsys team. More good synergy.

I probed a bit with Rahul about other applications that benefit from TCAMs. It turns out automotive is also a hot market for this technology. There are multiple electronic control units (ECUs) in a typical car today. The powertrain, passenger comfort, infotainment and driver assistance are just a few examples of on-board ECUs that must all be networked together to create a unified driving experience. If this is starting to sound like a networking application, it is, and TCAMs help a lot here.

Thanks to the Synopsys focus on automotive functional safety and reliability, TCAM technology can be deployed in the automotive market. Addressing this market requires a substantial investment in certification, such as ISO 26262, and Synopsys has made that investment. The addition of built-in self-test (BIST) for TCAMs in the consumer and automotive markets is another important growth area, and one that Synopsys is also focused on.

Overall, I felt quite good after my discussion with Rahul. When we first examined the potential transaction between Synopsys and eSilicon, the compatibility and synergy of the two teams seemed quite strong on paper.  It’s nice to see it worked out that way in real life.

Also Read:

Synopsys is Changing the Game with Next Generation 64-Bit Embedded Processor IP

Security in I/O Interconnects

IP to SoC Flow Critical for ISO 26262


SiFive’s Approach to Embedding Intelligence Everywhere

SiFive’s Approach to Embedding Intelligence Everywhere
by Tom Simon on 04-27-2020 at 6:00 am

SiFive Embedding Intelligence

Before the advent of RISC-V, designers looking for embedded processors were effectively limited to a handful of proprietary processors using ISAs conceived decades ago. While the major ISAs are still being updated and enhanced, they are also constrained by decisions accumulated over many years. RISC-V was conceived with a clean, well-thought-out architecture, designed for expansion without creating inconsistencies. Because it is open source, there is a rich set of tools and products that support it.

SiFive is one of the leading exponents of RISC-V and has been producing IP based on the RISC-V ISA for years now. Their product offerings have expanded significantly, now addressing everything from edge/IoT to server applications. Their recent webinar titled “Embedding Intelligence Everywhere with SiFive 7 Series Core IP” talks about how intelligence is needed in each market.

The webinar is divided into two parts. The first, presented by Jack Kang, Senior Vice President of Customer Experience at SiFive, offers a look at how embedded intelligence is becoming prevalent everywhere from the cloud to the edge. He also offers an overview of SiFive’s embedded RISC processors and how they fit current market needs.

Jack first talks about how AI is moving from the cloud to the edge, creating the need for additional processing capability in a wide range of devices. At the edge, AR, VR, and sensor fusion are driving the need for expanded real-time processing. Jack also points out a need for increased intelligence in storage, touching on how intelligence supports caching schemes, cryptography, memory maps, and even in-cluster application processors. Likewise, making networking smarter facilitates higher bandwidth and activities such as the implementation of 5G stacks.

The cores offered by SiFive are grouped by application area. The E Cores are suitable for 32-bit embedded uses. For heavier workloads there are the S Cores, which add 64-bit capabilities. The most powerful are the U Cores, 64-bit application processors for high-end computing.

Their smallest and most efficient RISC-V processor IP is the 2 Series, the E2 and S2, which are 32-bit and 64-bit respectively. Core and memory are configurable to customer-specific needs. They also feature ultra-low-latency interrupts for servicing real-world events.

The 3 and 5 Series are their most widely deployed products. The E3 offers 32-bit performance for mid-range embedded applications. The S5 adds 64-bit support, and the U5 is the top-end offering with higher performance. These cores can be used in multicore configurations and have hard real-time capabilities.

The 7 Series cores are the topic of the second section of the webinar. Jack touches on them before turning to the U8, an extremely scalable, high-performance out-of-order core with their highest performance per watt. The U8 is also very area efficient. The combination of area and power efficiency makes it very attractive for high-end computing systems.

The second section, presented by Jahoor Vohra, Director of Field Application Engineering, is titled “SiFive 7 Series Core IP”. His presentation discusses the features of the three members of the Core IP 7 Series: E7, S7 and U7. This includes an overview of the 7 Series microarchitecture, focusing on performance, scalability and the detailed specifications of each.

The 7 Series scales up to 8+1 cores per cluster. Like other RISC-V processors, the instruction set is extensible through custom instructions. Their memory is configurable and tightly integrated for low latency. If called for, they support mixed-precision arithmetic. They also feature enhanced determinism to better support demanding real-time applications. There are also functional safety features provided by built-in fault-tolerance mechanisms. These are just a few of the highlights that Jahoor brings up.

To gain a full appreciation and understanding of the SiFive offerings across the board, and of the 7 Series in particular, I highly recommend viewing this informative webinar. I have been following RISC-V and SiFive for a number of years now. The level of adoption and progress has been extraordinary. The entire effort is supported by some brilliant, dedicated minds. The results speak for themselves.

SiFive is also giving a webinar soon on the topic of Rapid Embedded Prototyping with SiFive Software. These webinars are all part of the SiFive Connect webinar series, which aims to provide educational content in an interactive format. A full list of these webinars can be found here.


Preventing a Product Security Crisis

Preventing a Product Security Crisis
by Matthew Rosenquist on 04-26-2020 at 12:00 pm

Preventing a Product Security Crisis 1

The video conference company Zoom has skyrocketed to new heights and plummeted to new lows in the past few weeks. It is one of the handful of communications applications that is perfectly suited to a world beset by quarantine actions, yet has fallen far from grace because of poor security, privacy, and transparency. Governments, major companies, and throngs of users have either publicly criticized or completely abandoned the product. In a time of unimaginable potential growth, Zoom is sputtering to stay relevant, fend off competition, and emerge intact.

Avoiding Total Loss of Product Confidence
There are lessons to be learned, applicable to all product and service companies, to avoid such gruesome misfortune. Leadership of every organization should be taking an introspective look to understand how they can best prevent such missteps and determine how they might respond in times of such crisis.

Zoom is a teleconference platform that has proven scalable and effective at bringing groups together to collaborate remotely. It is in a competitive field where features, time-to-market, performance, and usability are crucial to success. This is true for so many products, services, and businesses. Often in such environments, management possesses a razor-sharp focus on being competitive, which means getting products and new features to market as fast as possible.

There are costs to such a narrow focus. Accuracy in marketing messages can be overlooked. Documentation quality is often sacrificed. More importantly, it is very common that security is also deprioritized as an acceptable tradeoff. This is where the shortsightedness begins.

Security is a foundation for trust. What engineers and executives easily dismiss during frantic development cycles as a distraction that can be addressed 'later' introduces fundamental weaknesses that compound over time and can be exploited.

This is where Zoom finds itself. The organization is feeling the pain and chaos of decisions made far earlier, during product development, whose consequences are now emerging with the rapid growth and adoption of its solution.

A number of issues have arisen that have customers, governments, and stockholders questioning the leadership and their confidence in the product. There was a privacy issue in which user data was harvested and sent to Facebook without consent. Default settings allowed incidents of harassment, called "Zoombombing," to the embarrassment and fury of users. Marketing claims of End-to-End (E2E) security proved inaccurate, as did the privacy policy. The architecture and code contain many vulnerabilities and do not protect the E2E privacy of sessions between parties. Then there was the choice to store sensitive information in data center assets in China without informing customers, who are very uncomfortable with such configurations. Now Zoom faces grave and very public concerns regarding trust in management's commitment to secure products, its respect for user privacy, the honesty of its marketing, and the design decisions that preserve a positive user experience.

Learning from Failures
The lesson is straightforward. All the issues Zoom is facing could and should have been addressed earlier, well before they exploded in spectacular fashion. This is the key takeaway for everyone: a lack of investment in security and privacy during the development phases can manifest in devastating consequences. Every organization should be evaluating its DevOps security program. They should be re-evaluating the role and value of security during product design, development, updates, and sustaining operations. Zoom is showcasing the severe consequences of ignoring proper risk management. They aren't the first, but the world is changing, and people's tolerance and patience for such issues is becoming less forgiving. Zoom and every other product company must adapt to meet the growing expectations for security, privacy, and safety.

How can Zoom recover?
For those interested in how Zoom should address the systemic issues it faces during this product crisis, I recommend the article "Zoom in crisis: How to respond and manage product security incidents" on HelpNetSecurity, where I break down a number of issues and steps toward resolution.


COVID-19 Cars as an Essential Service

COVID-19 Cars as an Essential Service
by Roger C. Lanctot on 04-26-2020 at 10:00 am

COVID 19 Cars as an Essential Service

Automotive News reported Friday that updated guidance from the Department of Homeland Security's Cybersecurity and Infrastructure Security Agency had identified cars as an essential service. AN reported: "The new guidelines include 'workers critical to the manufacturing, distribution, sales, rental, leasing, repair, and maintenance of vehicles and other transportation equipment, including electric vehicle charging stations, and the supply chains that enable these operations to facilitate continuity of travel-related operations for essential workers.'"

Automotive News report: https://www.autonews.com/dealers/auto-sales-listed-essential-service-updated-federal-guidance?utm_source=daily&utm_medium=email&utm_campaign=20200417&utm_content=article2-headline

The announcement had a Pyrrhic quality to it, as millions of Americans were rapidly coming to grips with the fact that they could, indeed, live without cars. In fact, they could live without moving around at all. Their very lives might depend on not moving: the more a person moved, the more likely they were to become infected with COVID-19.

The government-ese is the problem here. Cars are “essential.” Really? By now we know that oxygen, water, food, family, and friends are essential. And maybe toilet paper. But cars?

Politicians have tried to pooh-pooh the pandemic by talking about how many Americans are killed in traffic incidents and by the "common" flu. But COVID-19 has outpaced the fatality rates of both of those analogs.

COVID-19 is killing 1,800 Americans a day. That's more than heart disease (1,774) and cancer (1,641). And cars? Cars, on a typical day in the U.S., kill about 100 people. One hundred daily fatalities is pretty horrible, but COVID-19 is slaughtering 18x that daily figure.

Restoring vehicle production and sales, though, assumes demand for vehicles will be strong and is, in fact, pent up – waiting to break free. The reality may be something quite different with more than 22M Americans already having filed for unemployment and stay-at-home orders in place across much of the country.

In fact, some cities and states still stand in the path of car sales in spite of the Federal designation. Los Angeles has yet to allow retail vehicle sales. The State of Pennsylvania doesn’t even allow online sales of cars. (Pennsylvania reversed its stance against online vehicle sales Tuesday afternoon.)

All of this has contributed to a steep plunge in used car prices further threatening the viability of Ford Motor Company and General Motors – which have billions of dollars of loans and leases on their books. The decline in used car prices was a further blow to Hertz which itself is reportedly teetering on the verge of bankruptcy.

Dealers will now be in position to test the theory of cars as “essential.” In fact, dealers themselves are facing a major test of their own viability during a pandemic of undetermined longevity.

It won’t be enough for dealers to open their doors. It won’t be enough for car makers to pump out cheery advertising messages and crazy incentives. Dealers will need to get creative.

The good news is that there are a host of marketing partners pushing out new tools to engage with customers remotely, digitally, virtually. Video tools for dealers are currently the hottest feature in the service space, according to some industry veterans.

Several companies are out in front, including Xtime, MyKaarma, CITNow, Dealer-FX, UpdatePromise, and Text2Drive. Video conferencing for payment is only one part of the process and requires a strong technical infrastructure that can handle payments electronically. MyKaarma has the best solution, followed by Xtime, Text2Drive, and UpdatePromise, according to one observer.

COVID-19 has introduced more than the usual level of trepidation into a dealer visit. The average new or used car buyer will do a fair amount of research and will already know his or her price and financing plan. Test drives and handshakes are practically pointless in a post-COVID-19 environment.

Dealers wanting to truly test the "essential" quality of a car will reach out to potential customers, offer online vehicle evaluation and demonstration tools, and, most importantly, allow for an entirely online purchasing process with to-the-door delivery. Dealers are witnessing nothing less than the digitalization of the sales process. We may be only years away from the demise of the showroom and the rise of the virtual demo and close.

Doubters need look no further than virtual vehicle sales leader Carvana jumping to the fourth spot on Automotive News' ranking of used vehicle sales leaders to appreciate the power of digital. Traditional dealers actually have more tools for creating an interactive customer experience to go along with a full-service back-end operation.

COVID-19 has forced all human beings to question what is essential. By now, we all know that cars are definitely not essential. The truly essential things today are those that the government can’t seem to give us as part of some “stimulus” package.

We need family and friends, food, empathy, and maybe a small helping of truth. Only truth will really free us of these COVID-19 bonds. Until then, we’ll have to settle for some innovative digital car retailing to rescue our economy in these dark times.


LRCX Supply constrained by Covid Crisis

LRCX Supply constrained by Covid Crisis
by Robert Maire on 04-26-2020 at 8:00 am

LRCX Lam Research 2020 COVID

Lam Q1 revenue soft by roughly 15% due to supply side
Demand remains solid for Q2 but beyond that, dubious
No guide, but Q2 revenues could be ≥ Q1
NAND solid, China big @ 32%, Foundry remains great

Lam reported a solid quarter but light on revenues…
It was no surprise that Lam reported revenues of $2.5B versus their original guide of $2.8B ± $200M, along with solid earnings of $3.98 per share.
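A quick bit of arithmetic on those figures (all numbers taken from the guide and result quoted above) shows just how light the quarter was: revenue came in below even the bottom of the guided range.

```python
# Figures from the article, in dollars.
guide_mid = 2_800_000_000   # original guide midpoint: $2.8B
band = 200_000_000          # +/- $200M guidance band
reported = 2_500_000_000    # reported revenue: $2.5B

guide_low = guide_mid - band    # bottom of the guided range: $2.6B
guide_high = guide_mid + band   # top of the guided range: $3.0B

# Reported revenue landed below even the low end of the guide.
shortfall_vs_low = guide_low - reported
print(shortfall_vs_low)  # 100000000 -> $100M below the bottom of the range
```

So the miss was not just against the midpoint: the quarter fell $100M short of the most conservative end of the original guidance.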

The clear message is that while demand was solid, and outstripped Lam’s ability to supply, Lam was hobbled by sub supplier issues that limited production.

As we have previously underscored, the supply chain for complex semiconductor equipment spans the globe, especially into Asia, and is susceptible to disruption, as many parts are single-sourced or have a limited number of manufacturers.

Given that COVID-19 really had its worst impact well into the quarter, there was not enough time to fix or mitigate the supply issues before the end of the quarter.

While the company did not give "official" guidance for Q2, they did suggest that Q2 revenue could potentially be better than Q1 as some of the limiting issues get worked out.

The company repeated many times that this was not a demand issue: they saw no change in customer orders, and the shortfall was entirely due to Lam being unable to produce and ship 100% because of COVID-19-related issues.

No change in demand (yet)… demand will be down, just a question of when and how much…
While there was no near-term indication of any changes in demand or orders from customers, it's clear that there will be some sort of weakness driven by the overall economic decline, especially on the consumer side… the company all but stated that outright.

We have suggested in our prior notes that there is the short-term logistics impact of COVID-19 (clearly on display in both ASML's and Lam's results) and the longer-term, as-yet-unknown, demand impact.

While the short-term impact stops parts from getting to Lam immediately, it will take months and quarters before layoffs and other negative economic impacts trickle down through electronics makers, then through chip makers, and finally to chip tool makers.

Yes, we will keep spending on technology advances…but capacity related purchases will be vulnerable.

Taking prudent steps…
As we heard from ASML, Lam is also being financially prudent by stopping buybacks, even though they have tons of cash and a depressed stock price. They have pulled cash from their credit line and continue to put downward pressure on expenses. All correct and conservative, as the future is quite uncertain right now.

China a third of revs…NAND continues comeback…
China, at 32% of revenues, was roughly a third of the business and bigger than any other segment. Roughly half or more of the China business was for indigenous companies.

We remain concerned about the overhang of a potential blockade of US semiconductor tool exports to China… especially in light of an administration looking to punish China or deflect attention to other matters.

While we feel very good about CPU demand boosting Intel and AMD due to "work at home," we remain more concerned about NAND demand and pricing, as we could get back into an oversupply, especially if consumer spending slows.

We remain most concerned about the fall rollout of Apple's iPhone 12 and the associated next generation of both Apple CPUs and 5G modem devices.

Right now both we and Lam management have a hard time guessing what demand will look like going forward. Lam management demurred on the call when talking about future demand, but we might be a bit more direct in assuming it's down.

Trickle down of demand will take a while….
As we have previously mentioned, fab capacity planning is very long term by its nature and has all the maneuverability of a supertanker. Many of the fab expansion plans taking place right now are “bounce back” reactions from the previous down cycle and less of a “steady state” indication of demand.

While we have heard of some inventory build in the channel, we are not yet "stuffed" with inventory that would cause chip makers to hit the brakes hard. We probably won't see the trickle-down to equipment makers until at least Q3, and more realistically Q4, when chip makers get a better sense of the full economic impact and start to adjust 2021 build plans.

The stock roller coaster continues….
After Lam’s stock was down big yesterday, it was up twice as big today only to fall off after hours with less than strong beat results. The volatility will obviously continue with larger issues and macro events driving the overall tone of the market.

Whether chip equipment companies or chip companies report good or bad earnings seems to matter little, as most bets are off due to COVID-19. While the stock is cheap compared to prior expectations, the uncertainty of demand will likely haunt things for several quarters. The company continues to do a great job of execution but unfortunately can't control the larger picture that it is a small part of.