Podcast EP94: Wally Rhines Comments on the latest SEMI Electronic Design Market Data Report
by Daniel Nenni on 07-15-2022 at 10:00 am

Dan is joined by Dr. Walden Rhines, former CEO of Mentor Graphics, which is now Siemens EDA, and current CEO of Cornami. Wally is also the Executive Sponsor of the SEMI Electronic Design Market Data Report, which is the topic of this podcast.

Wally reviews the latest report, including the backstory of how the report is generated and how it can be used. Spoiler alert: it’s another positive growth quarter overall.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: Shai Cohen of proteanTecs
by Daniel Nenni on 07-15-2022 at 6:00 am

Shai Cohen is an entrepreneur and industry veteran with vast experience in building technology companies from the ground up. He is a co-founder and CEO of proteanTecs, which develops revolutionary Universal Chip Telemetry™ for electronic systems throughout their entire lifecycle. Prior to founding proteanTecs, Shai co-founded Mellanox (acquired by NVIDIA), a global leader in end-to-end InfiniBand and Ethernet interconnect solutions for servers and storage. He served as Mellanox’s Chief Operations Officer from 2011, and before that as VP of Operations and Engineering from 1999. While at Mellanox, Shai oversaw all internal operations and production, and co-led the company’s research and development activities. He served as a member of the Mellanox Board of Directors from 2015 to 2018. From 1989 to 1999, Shai worked at Intel Corporation, where he was a senior staff member in the Pentium processors department and a circuit design manager in the cache controllers group. Shai holds a B.Sc. cum laude in Electrical Engineering from the Technion – Israel Institute of Technology.

Can you tell us a little about proteanTecs?
proteanTecs was founded with a mission to enable the electronics industry to continue to scale. We’ve developed deep data analytics for advanced electronics in the Datacenter, Automotive, Communications and Mobile markets.

The company provides solutions for health and performance monitoring, in production and during lifetime operation, based on Universal Chip Telemetry™ (UCT). By applying machine learning to novel data created by on-chip UCT agents, our customers gain visibility and actionable insights on the cloud or edge – leading to new levels of performance, quality and reliability at scale.

How do you provide deep data insights on chips and systems?
The technology comprises several key pillars. First and foremost is the deep data generation. We’ve developed Universal Chip Telemetry (or UCT), which provides on-chip monitoring based on agents that are built for analytics. These UCT agents operate in both test and mission modes and provide extremely high coverage of key parameters at every stage. They are strategically placed during design, using automated insertion tools, after a thorough analysis of the design and process technology.

Measurements from the agents are extracted and uploaded to a software platform, for data fusion and domain-infused machine learning inference. At the end of the day, our customers get advanced analytics, with actionable insights and alerts, on a cloud-based platform, for continuous health and performance monitoring. We also provide applications that are deployed at the edge: whether on the tester during production, or on-board when the system is in the field.
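As a rough mental model of that flow – and it is only that; the class names, fields, and threshold below are invented for illustration, not proteanTecs’ actual API – the agent-to-alert pipeline looks something like this:

```python
# Hypothetical sketch of the agent -> analytics flow described above. All names,
# fields, and the threshold are invented for illustration; none of this is
# proteanTecs' API or model.
from dataclasses import dataclass

@dataclass
class AgentReading:
    chip_id: str
    agent_id: str
    margin_mv: float  # e.g., a measured timing/voltage margin in millivolts

def health_score(readings: list[AgentReading]) -> float:
    # Stand-in for "data fusion + ML inference": here, simply the worst margin.
    return min(r.margin_mv for r in readings)

def check_chip(readings: list[AgentReading], alert_threshold_mv: float = 50.0) -> None:
    score = health_score(readings)
    if score < alert_threshold_mv:
        print(f"ALERT: {readings[0].chip_id} margin {score:.1f} mV below threshold")

check_chip([AgentReading("chip-7", "agent-42", 38.5),
            AgentReading("chip-7", "agent-43", 61.0)])
```

The real platform replaces the toy scoring function with trained models and runs either in the cloud or at the edge, as described above.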

How essential are electronics analytics and predictability?
We are addressing application markets that are mission-critical, uptime-critical and safety-critical. Datacenters, automotive and communications – all of these markets are transitioning to as-a-service business models. They have zero tolerance for unplanned downtime or errors, and require extremely high performance, low power and increasing functionality.

So, manufacturers are introducing highly advanced technologies to enable this. We’re talking about sophisticated architectures, complex designs, shrinking process technologies, heterogeneous packaging, and more.

All of these create new challenges for the industry. Quality and reliability are harder to achieve, especially without giving up performance or competitiveness. Costs are rising, and above all there is a lack of visibility throughout the lifecycle. Add to this a very fragmented value chain, with data silos in and between the different stages, and what we’re seeing are new problems.

We see very long development cycles, a huge dependency on product warranty, and if something does go wrong – an extremely high rate of “No Problem Found” with inconclusive and long root cause analysis. And at the end of the day, we also see problems and reliability issues in the field.

What types of challenges can this solve?
We’re hearing more and more from hyperscalers and OEMs about new issues they encounter in the field, all stemming from undetected or latent manufacturing defects, or from issues that only manifest in the field and are hard to predict and prevent.

In the case of advanced electronics, system performance becomes a sensitive matter. Add to this the fact that certain parameters will change after extended use, especially given application stress and environmental effects, and you have the recipe for issues that can be as difficult to pinpoint as they are costly to endure.

In some cases, these errors happen in a way that cannot immediately be detected or flagged.  Sometimes a calculation will give the wrong result.  Other times an instruction doesn’t behave exactly as it should. In certain cases, the error is inconsistent, making it even more difficult to find.

These service providers want their systems to be able to recognize these errors and report them. It would be even better if they could be predicted. Imagine that you could see the actual performance margins of the electronics while the application is running, and act on it.

What are the benefits of your platform and how is it different from what others are doing in the industry?
We’re introducing a new approach. proteanTecs has developed a way to provide end-to-end visibility, based on deep data analytics. With this new technology, manufacturers and service providers gain a new understanding of design, production, system, applications, and environmental issues, throughout the lifecycle of the system.

Since the same agent-based technology is applied at every stage, starting from chip characterization, qualification and volume production, through to system integration and optimization, and then to in-field operation – it creates a baseline of common datasets throughout the industry. This provides backward-and-forward correlations, insights and predictions, and for the first time creates a common ‘data’ language for the value chain.

What are the use cases for proteanTecs’ technology?
We’re giving chips the ability to report on their own health and performance so that at every stage, from production to the field, users gain significant insights and benefits.

During production, chip and system vendors can reduce Defect Parts Per Million (DPPM) by 10x, optimize power-performance per application, improve performance yields, optimize and track reliability margins and significantly shorten time to market.

Once deployed in the field, service providers can be alerted to faults before they become failures, significantly lowering maintenance costs, while optimizing system performance and extending product lifetime.

Who are some of your current customers/users?
We serve leading electronics vendors across multiple industries, including Datacenter, Cloud Computing, Automotive, AI and Communications. Our customers include first-tier vendors building chips for AI, switching, servers, storage, HPC, communications, and automotive ECUs.

How much funding have you raised in total to date?
Nearly $200M. We are backed by some of the leading investors in the electronics and SaaS industries. I am happy to share that we’ve recently closed two extensions to our Growth Equity round, with the addition of Porsche-SE, MediaTek and Advantest to our investor portfolio – each one being a market leader in their own industry vertical.

Are there any previous or upcoming milestones you’d like to talk about?
We recently expanded into mobile and have launched solutions for supply chain security and real-time power-performance management.

Also read:

CEO Interview: Vaysh Kewada of Salience Labs

CEO Interview: Chuck Gershman of Owl AI

CEO Interviews: Dr Ali El Kaafarani of PQShield


Alchip Technologies Offers 3nm ASIC Design Services
by Kalar Rajendiran on 07-14-2022 at 10:00 am

Alchip Design Technology Roadmap

Throughout its history, the ASIC industry has had its ups and downs. With feast and famine cycles, the ASIC business model is not for the faint of heart. Some companies tread boldly while others dread the cycles and stay away from this business model. Those who are consistently successful overcome the many challenges thrown at them, which requires focused dedication to the model and ongoing strategic investments to stay on top.

ASIC Business Model Challenges

Supply Chain

In a sense, it was easier when ASIC companies were vertically integrated with their own foundries. Not only did they gain early access to the latest process technology details, but all their customers’ wafer volume aggregated to the same foundry as well. This is no longer the case, with ASIC companies having to tap third-party foundries. And this point extends to other parts of the supply chain too.

Customers

In addition to the PPA benefits of an ASIC, customers go the ASIC route for advantageous pricing compared to choosing an ASSP. It is interesting to highlight that the term ASIC itself is a misnomer: the product is really a customer-specific product targeted toward an application. Naturally, every customer is likely to aggressively push the ASIC provider in non-overlapping ways on both technical and commercial aspects.

Alternate Solutions

While technically many products could benefit from an ASIC implementation, depending on the end market and the commercial terms for an ASIC, customers may choose an alternate non-ASIC solution. For example, customers may be willing to go the ASSP, FPGA, or GPU route to save on the upfront NRE investment and/or design cycle time and time to market. This happens on a regular basis, and certainly when the ASIC market goes through the famine part of the feast-famine cycle.

Strategic Investments

Significant design capabilities and infrastructure investments are expected of ASIC providers to support HPC, cloud computing, edge computing, AI, automotive and other applications. With the slowing of Moore’s Law benefits at advanced process nodes, large monolithic chips are giving way to chiplet-based implementations. With chiplets, one can have the best of both worlds: leading-edge process for some chiplets and mainstream/trailing-edge process for others. An ASIC provider will be faced with supporting heterogeneous chiplet integration. Investments in developing capabilities and methodologies to support 2.5D/3D packaging and high-speed die-to-die interfaces are critical to keep up with the trends, as is involvement in standardization efforts such as the Universal Chiplet Interconnect Express (UCIe).

Design Technology and Infrastructure

Whether for an ASIC or an ASSP, PPA, cost, and time to market are the baseline requirements. Accordingly, design service organizations hone their design technology, methodology and infrastructure on an ongoing basis. The added challenge for ASIC design service organizations is that different customers stress their infrastructure differently. For example, a clock methodology that yields optimal results on one customer’s chip may yield sub-optimal results on another’s, impacting performance. The P&R methodology may run into different issues depending on the chip, thereby impacting die size. And so on. All of these affect time to market for the customer, and revenue and profitability for both the customer and the ASIC provider.

Consistently successful ASIC providers have top-notch infrastructure and methodologies that can accommodate varied demands from a multitude of customers.

Success Requires Focused Dedication

To overcome the challenges described above, a successful ASIC company needs:

  • Robust yet flexible design methodology
  • Flexible engagement model (both commercial and technical)
  • Best-in-class IP portfolio (access to third-party IP and in-house IP/customization)
  • Heterogeneous chiplet integration capability
  • Advanced packaging and test capabilities

To deliver all of the above in a viable manner, an ASIC company needs to pick a focus in terms of the markets it serves.

Alchip Treads Boldly

Alchip picked the high-performance markets as their dedicated focus many years ago and stayed the course. They have made strategic investments to stay with the trends and developed design technology and infrastructure to serve their customers. Their dedication has yielded everything identified in the list above. Alchip has consistently stayed on top of supporting the latest process nodes from TSMC, the leading foundry. Not only have they developed the capability to support 2.5D/3D packaging in general, they have also been qualified to support TSMC’s CoWoS packaging technology. They developed the APLink family of die-to-die interface IP to support chiplet integration well before the UCIe effort began. And now they have joined the UCIe consortium as a contributing member to drive the evolution of the chiplet interface standard.

Alchip Profile and Scorecard

The following slide provides a succinct profile of Alchip. An 81% slice of revenue coming from HPC markets is proof of their dedicated high-performance market focus.

Recently, Alchip announced that its high-performance computing ASIC services are now accepting 3nm designs, targeting a first test chip for Q1 2023. The new service targets TSMC’s latest N3E process technology. With this announcement, Alchip becomes the first dedicated high-performance ASIC company to announce total readiness of its design and production ecosystem for a 3nm process. The press announcement mentioned that other assets in place include a complete library of best-in-class third-party IP covering DDR5, GDDR6, HBM2E, HBM3, PCIe5, and 112G SerDes from Tier 1 providers.

You can learn more at www.alchip.com. You will find a compendium of Alchip related articles and press releases on SemiWiki here.

Also read:

The ASIC Business is Surging!

Alchip Reveals How to Extend Moore’s Law at TSMC OIP Ecosystem Forum

Alchip is Painting a Bright Future for the ASIC Market

 


Arm Aims at Mobile Gaming
by Bernard Murphy on 07-14-2022 at 6:00 am

Clearly unfazed by the collapse of the proposed merger with Nvidia, Arm just announced products in support of, what else, mobile gaming. Nvidia turf. Of course Nvidia’s gaming strength is in tethered platforms or laptops. However, understand that 50% of video gaming revenue in 2020 came from smartphone games and that growth is accelerating. Arm is clearly happy to grab for themselves what Nvidia maybe hoped to corral in the merger. A question came up in a recent briefing, “Won’t streaming make all but simple local graphics irrelevant on the phone?” Paul Williamson (SVP and GM at Arm) disagreed, and I’m 100% with him. Even with 5G (or 6G) bandwidths, power demand in streaming communication will kill effective gaming time. And latency overhead from phone to cloud back to phone will kill an interactive gaming experience with immersive pose-responsive 3D. Gaming demand on Android platforms (Apple has their own hardware) is ripe for the picking.

Graphics platforms

The announcement includes a new flagship GPU that Arm is calling Immortalis-G715. This springs from Mali, with optimizations for 3D and with hardware-based ray tracing. Ray tracing is a powerful addition to immersive experiences, supporting realistic reflections and lighting which changes as you move through the scene. It’s also computationally very expensive, hence the need for hardware support. Given the lack of support in mobile gaming to this point, Arm seems to be making a forward bet that game developers will come to embrace how this can enhance user experiences.

Mali-G715 includes variable rate shading, improving resolution around the user’s gaze and reducing resolution outside that area, to improve performance and reduce power. And Mali-G615 upgrades the earlier 610 release. Overall, this GPU lineup adds 15% performance over last year at 15% better efficiency.

Armv9 updates

High performance GPUs must be matched by high performance CPU clusters to deliver an end-to-end gaming experience. This release introduces the Cortex-X3, the highest performance CPU to date in the Arm portfolio, delivering 25% more performance than the latest Android smartphones and a 34% performance advantage over the latest laptops. The Cortex-A715 offers only a 5% performance advantage over the earlier 710, but at 20% improved energy efficiency, making it a very effective “LITTLE” match to the Cortex-X3 “big”. Another nice plus for mobile power/performance management.

Beyond smartphones

Another interesting question in the briefing Q&A probed Arm plans outside established handset and compute server markets. What about Arm graphics engines in the datacenter or Arm PCs? Paul brushed off the datacenter graphics question as not an area of focus for Arm today but was more open to discussing the other topics.

MediaTek is at least one company that has announced plans to build Arm-based chips for Windows. Microsoft announced earlier this year their Project Volterra, an Arm-based desktop PC with additional goodies. Paul said that developers will now be able to develop on Arm, for Arm, in Windows, which he sees as a step change in the ecosystem. He added that with Cortex-X it is now possible to bring the performance/power advantages of Arm-based architectures to the PC world. Despite our earlier reservations, they pulled it off in the server space, so why not here too? Market motivation for PCs may be a little different – perhaps a “green” advantage could be a major driver.

Finally, what about untethered xR devices? Here Paul was a bit more circumspect. Certainly, power and performance needs should play well to this GPU and CPU lineup. The issue he sees is more around the great diversity of use cases; he sees the market as still nascent. Products can build on these new platforms, helping to drive clarity on where the high-volume use cases, and their needs, may lie. Until then, we’ll all be watching with interest.

You can learn more about the release HERE.

Also read:

Synopsys Tutorial on Dependable System Design

Arm Shifts Up With SOAFEE

Arm Announces Neoverse Update, Immediately Following V9


New Mixed-Signal Simulation Features from Siemens EDA at DAC
by Daniel Payne on 07-13-2022 at 10:00 am

Symphony Pro for mixed-signal verification

It’s the second day of DAC, and the announcements are coming in at a fast pace, so stay tuned to SemiWiki for all of the latest details. As a long-time SPICE user and industry follower, I’ve witnessed the progression as EDA vendors have connected their SPICE simulators to digital simulators, opening up a bigger world of Analog Mixed-Signal (AMS) verification. Engineers designing chips for automotive, imaging, IoT, 5G, HPC and storage devices all need AMS verification tools. Siemens EDA has a rich history in both the SPICE and digital simulator worlds, so it’s no surprise that their AMS tool would also be offered and updated, especially as design challenges and standards emerge. I was able to view a presentation from Sumit Vishwakarma of Siemens EDA to get an update at DAC on their new mixed-signal verification features.

Accellera has created a UVM-AMS working group, and Tom Fitzpatrick from Siemens EDA is the chair of the group, so know that they are on top of emerging standards. As the working group prepares and proposes standards, EDA vendors will start to implement them, giving the design community common ground and ensuring interoperability between vendors.

Symphony

Siemens EDA has been offering the co-simulation of two simulators for a while now, dubbed Symphony, and it’s well-suited for AMS applications:

Symphony Pro

What’s new for DAC this year is that Symphony Pro has some major new features:

  • Expanded support for the Universal Verification Methodology (UVM-AMS)
  • Expanded support for the Unified Power Format (UPF)
  • Visualizer MS environment – better debug for AMS designs

Here’s a visual on the improvements just launched in Symphony Pro:

The new thing with Symphony Pro is that the mixed-signal (MS) info is now saved in a new database, plus both the analog and digital waveforms can be viewed together in Visualizer MS. Here’s a diagram showing the new MS design database (yellow), and the  MS analog and digital waveforms (red and blue):

The fun part is using the new Visualizer MS, because it pulls together all of the new features of Symphony Pro in an integrated environment:

In one place you now have both the SPICE world and digital world combined, for faster, more efficient analysis, debug and verification. This is the expanded Field of Use. Logic cones allow an engineer to quickly debug to find the source of any waveform, either digital or analog. With the Visualizer MS users can now enjoy:

  • Mixed-signal hierarchy browser
  • Source code viewer, schematic viewer, and UPF support
  • Unified mixed-signal waveform viewer
  • Results annotation
  • Trace connectivity
  • Integration with existing verification and debug infrastructure

Customer Feedback

STMicroelectronics has said, “We look forward to using Symphony Pro as our sign-off solution for present and future mixed-signal verification projects.”

Jayanth Shreedhara, senior CAD manager at Silicon Labs said that Symphony Pro is, “enhancing our verification productivity from days to hours and dramatically improving our coverage closure.”

Summary

The Symphony technology works well with Siemens’ own Questa digital simulator, plus all of the other major EDA vendor simulators, so it’s kind of like Switzerland, playing the neutrality card, but for simulation. The Symphony product lives on, while Symphony Pro is an expanded version of the tool aimed at AMS designs. Right now, Symphony Pro only works with Siemens EDA simulators, so it’s not yet at the simulator-agnostic stage.

If you’re at DAC this week, plan to stop by the Siemens EDA booth and ask for Sumit Vishwakarma, or just contact your local team to get deeper insight.

Related Blogs


Silicon Catalyst Angels Turns Three – The Remarkable Backstory of This Semiconductor Focused Investment Group
by Mike Gianfagna on 07-13-2022 at 8:00 am

The Silicon Catalyst Angels investment group recently announced the completion of three years of operation. There are great statistics associated with the organization and its financial results, including an over-4X increase in members since inception and an impressive list of investments. One of the noteworthy attributes of the Silicon Catalyst organization is its ability to build a collaborative ecosystem to nurture the startups in the incubator. Silicon Catalyst Angels has a similar DNA, which is unique in the investment community. Read on to better understand the remarkable backstory of this semiconductor-focused investment group.

The Angel group is a separate corporate entity from Silicon Catalyst, with membership by accredited investors who have deep strategic and operational experience from their many years in the semiconductor industry. You can learn more about Silicon Catalyst on SemiWiki. Recently, I had the opportunity to explore the history and accomplishments of Silicon Catalyst Angels with some of its leadership. I was able to speak with Pete Rodriguez, CEO of Silicon Catalyst, and the board of directors at Silicon Catalyst Angels, a rather impressive team. What follows is a summary of our discussion.

The Beginning

Pete explained that when he joined Silicon Catalyst in 2017 the organization was focused on helping to craft a strong business plan for the early-stage companies in the incubator. Another goal was to provide In-Kind Partner services to remove the immediate use of scarce funds for EDA tools and TSMC shuttles. As is always the case, the need for working capital to cover the operational costs of startups is front and center – true then and even more so in today’s markets.

Early in the history of Silicon Catalyst, there were 20 investment groups (institutional and corporate VCs) in their ecosystem. Among them was Sand Hill Angels. Laura Swan, one of the partners at Silicon Catalyst, was also part of Sand Hill Angels, as were several other partners at Silicon Catalyst and its advisor network. Building on the procedures developed by the established Silicon Valley investment groups Sand Hill Angels and Band of Angels, in early 2019 the board of managers at Silicon Catalyst decided to create an angel investment group, based on a plan developed by Richard Curtin, one of the managing partners at Silicon Catalyst. The Silicon Catalyst Angels (SCA) investment group was then launched in July 2019, as a separate corporate entity, to enable accredited investors to help fund semiconductor startups.

The SCA board of directors includes Amos Ben-Meir as president, a long-time member of Sand Hill Angels, bringing deep investment experience, with over 200 angel investments under his belt. Additional board members include Raul Camposano and Michael Joehren, along with Laura Swan, as VP of operations. The deal flow was initially targeted to hear pitches from the companies in the Silicon Catalyst Incubator but has now expanded to selectively invite other semiconductor startups not currently in the Incubator. These organizations pitch for seed funding from Silicon Catalyst Angels members. With low membership investment commitments and low annual dues, the Silicon Catalyst Angels investment group was off and running.

Building Momentum

The initial challenge was to attract new members to the investment group. The feeling was that many of the technology investors in the Silicon Catalyst ecosystem didn’t get involved with other angel groups because the fees and investment requirements were too high and, most importantly, they would rarely see a semiconductor company as a candidate – the area this community knew best. The laser focus of the Silicon Catalyst Angels group on semiconductor investments seems to have paid off. While most of the investments have been in Silicon Catalyst incubator companies, startups outside the organization have approached Silicon Catalyst Angels because of its reputation for being semiconductor savvy. This becomes another avenue for the startups to explore joining the incubator, as well as to pursue seed funding.

As any savvy investor understands, whether in semis or elsewhere, due diligence is key. With the extensive semiconductor industry expertise of the members, the Silicon Catalyst Angel group uniquely offers a level of drill-down to assist in de-risking potential investments.

The syndication of investments by angel groups is a potent strategy that is rare elsewhere, but common for Silicon Catalyst Angels. At present, this group has made multiple rounds of investments in 12 companies, with an above-industry-average IRR. The investments from the syndication of these SCA deals are an even greater dollar amount – heavily leveraging the investor ecosystem to the benefit of the startup.

Laura Swan, VP of business operations at Silicon Catalyst Angels, provided additional perspective from her time at Sand Hill Angels, prior to joining Silicon Catalyst. As an “outsider” at that time, she saw several reasons to join a deal associated with Silicon Catalyst and its angel investment group:

  • Laser focus on semiconductors, a complex and difficult subject to master for an investment team that typically looks at software deals
  • Silicon Catalyst’s advisor network further reduces risk
  • The Silicon Catalyst In-Kind Partner program delivers substantial monetary value, reducing the amount of investment capital needed and increasing investment leverage

All members of Silicon Catalyst Angels also get access to the Angel Capital Association, a highly respected and valuable resource to help educate angel investors and to ensure that industry best practices are well understood by its members.

Thanks to the chip shortage and responses at the political level, such as the CHIPS Act, semiconductors have become more mainstream and more widely known. It turns out Silicon Catalyst Angels is at the epicenter of this renewed interest.

What’s Next?

We also touched on the type of deals being looked at by Silicon Catalyst Angels members. To borrow from the Silicon Catalyst tagline, it’s about what’s next. And Silicon Catalyst Angels sees the very newest ideas that suggest what’s next. As an R&D solutions architect at Renesas Electronics, Silicon Catalyst Angels board member Michael Joehren was able to offer an industry view. The group is seeing far more measurement technology that leverages MEMS advances. Generally, the focus is on the blending of biotech and healthcare with a dose of high-performance bandwidth for information delivery.

Michael explained that while there is continued interest in things like glucose meters, semiconductor-assisted measurement is expanding. Chemical analysis for air quality is a good example.  There are many new standards in this area and vast networks of sensors are the only practical way to achieve compliance. Raul Camposano, another board member, explained that materials research is focused on other areas such as battery design. He also offered that the ability to literally print molecules takes massive compute capability that drives a lot of new work as well, with interesting investment opportunities at the intersection of life sciences and semiconductors.

I concluded my discussion with a “crystal ball” question: what impact will the current financial climate have on the future of semis and the associated technology? Many perspectives were offered, but the general feeling was that the semiconductor industry is cyclic in nature and we are just experiencing another cycle – though now there are geopolitical aspects to consider as well. At the end of the day, the growing combination of semiconductors and life sciences will continue to drive innovation, growth and the opportunity for financial and social gain.

So, there’s a bit of the remarkable backstory of this semiconductor-focused investment group. There are exciting times ahead. If you’re interested in learning more about membership, or you’re a startup looking to pitch, contact Silicon Catalyst Angels.

 


3D Device Technology Development
by Tom Dillinger on 07-13-2022 at 6:00 am

CFET cross section

The VLSI Symposium on Technology and Circuits provides a deep dive on recent technical advances, as well as a view into the research efforts that will be transitioning to production in the near future.  In a short course presentation at the Symposium, Marko Radosavljevic, from the Components Research group at Intel, provided an update on the development status of 3D device fabrication, in a talk entitled “Advanced Logic Scaling Using Monolithic 3D Integration”.

Although there are significant challenges yet to be addressed, Marko provided a compelling perspective that 3D device topologies will be the successor to the emerging gate-all-around (nanosheet/nanoribbon) device.  This article summarizes the highlights of Marko’s presentation.

Introduction

Marko provided a brief recap of recent process technology developments that have led to the current FinFET devices, and to the upcoming GAA topology.  The first figure below lists these device scaling features, while the next figure illustrates a cross-sectional view of the FinFET and GAA device stack.  (Four vertical nanosheets are illustrated, for the adjacent nFET and pFET devices.)

The GAA topology improves upon the device leakage current control compared to the “tri-gate” surface of the FinFET.  (Additional process engineering steps are typically integrated to reduce the substrate surface leakage current for the device gate material between the bottom of the lowest nanosheet and the substrate.)

Also, as depicted in the figure below, the GAA lithography and fabrication offer some flexibility in the width of the nanosheets in the stack.  Unlike the quantized width of the FinFET device (w=(2*h)+t), designers will have greater flexibility in optimizing circuits for specific PPA goals.
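To make the contrast concrete, here is a rough back-of-the-envelope comparison. The symbols and the nanosheet perimeter approximation are illustrative simplifications, not figures from the talk:

```latex
\text{FinFET:}\quad W_{\mathrm{eff}} = n_{\mathrm{fin}}\,(2h_{\mathrm{fin}} + t_{\mathrm{fin}}),
\qquad n_{\mathrm{fin}} \in \{1, 2, 3, \dots\}

\text{GAA:}\quad W_{\mathrm{eff}} \approx n_{\mathrm{sheet}} \cdot 2\,(w_{\mathrm{sheet}} + t_{\mathrm{sheet}}),
\qquad w_{\mathrm{sheet}}\ \text{set by lithography}
```

For example, with an assumed fin height of 50nm and fin thickness of 6nm, FinFET drive strength is only available in 106nm increments – one whole fin at a time – whereas a nanosheet stack can tune its effective width almost continuously by drawing wider or narrower sheets.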

The figure above also highlights some of the GAA process challenges, specifically the steps which are unique compared to FinFET fabrication:

    • an initial Si/SiGe epitaxial stack
    • partial recessed etch of the sacrificial SiGe, exposing the ends of the Si layers for epitaxial growth of the source/drain nodes

FinFETs also use selective epitaxy to expand the S/D nodes – yet, the fins are already exposed on either side of the gate.  The GAA device requires a very precise lateral etch of the interspersed SiGe layers to expose the Si surfaces prior to S/D epitaxy.

    • removal of the remainder of the sacrificial SiGe to “release” the nanosheet surfaces (supported by the S/D epi)
    • precise deposition of the gate oxide and surrounding gate metal on all nanosheet surfaces

Note in the figure above that multiple metal gate compositions will be deposited to provide different workfunction surface potentials, for different device Vt thresholds.

3D Devices

With that background, Marko shared the graphic below, indicating that the next process roadmap device evolution would be to 3D stacked nanoribbons, leveraging the process development experience gained in the lateral pFET and nFET device fabrication.  The 3D stacked devices are typically denoted as a “CFET” (complementary FET) structure.

The figure below provides illustrations of how vertical device stacking could have a significant area scaling factor compared to lateral nanosheet layout, both for a logic cell and an SRAM bitcell (a 1-1-1 device configuration for the transfer gate-pullup-pulldown in the 6T cell).

The figure below expands upon the logic inverter layout above, to show the devices in cross-section.  Note the presence of buried power rails (BPR) providing VDD and VSS to the devices.  Also, note the significant aspect ratios required for contact etch and metal fill.

CFET R&D Initiatives

Actually, there are two very distinct approaches being evaluated for fabrication of CFET devices – “sequential” and “monolithic” (or self-aligned).

    • sequential 3D stacking

The figure below illustrates the sequential process flow.  The bottom devices are fabricated first, followed by the bonding of a (thinned) substrate for fabrication of the top devices.  An oxide dielectric layer is deposited and polished on the starting substrate for the bonding process and to serve as the electrical isolation between the devices.  The presence of the bottom devices constrains the thermal budget available for top device fabrication.

Of particular interest to researchers is that this approach offers the opportunity to utilize different substrate materials (and potentially different device topologies) for the two device types.  For example, the figure below shows a (top) pFET fabricated using a nanosheet device in a Ge substrate with a (bottom) nFET using a FinFET structure.

In the example above, the pFETs in the Ge nanosheets would be fabricated using a starting stack of Ge/SiGe layers, with SiGe again serving as the sacrificial support for source/drain growth and nanosheet release.  This technology option would leverage the higher hole mobility in Ge compared to Si.

The bonding dielectric thickness separating the two device layers is a key process optimization parameter – a thin layer reduces parasitic interconnect resistances and capacitances, yet needs to be defect-free.

    • self-aligned monolithic 3D stacking

The figures below illustrate a cross-section of a monolithic self-aligned CFET structure, and a high-level process flow description. (The SiGe layer in the middle is sacrificial.)

 

Two key process steps unique to the monolithic vertical device structure that are highlighted in the figure above are the distinct nFET and pFET S/D epitaxy growth and the gate workfunction metal deposition.

The figure below illustrates S/D epitaxial growth process for the two device types.  The top device nanoribbons receive a blocking layer prior to the bottom device S/D epitaxial growth.  Then, this blocking layer is removed, the ends of the top nanoribbons are revealed, and the top device S/D epitaxy is grown.  The figure also includes a confirmation that the p-epi and n-epi regions did not receive dopants from the other epitaxial growth step.

The figure below depicts the sequence for gate metal deposition. Metal initially deposited on both device types is subsequently removed from the top device, followed by deposition of a different workfunction gate metal for the second (top) nFET.

Experimental data illustrating the range of multiple Vt device characteristics for monolithic nFETs and pFETs is shown below.

Although CFET device technology promises continued improvements in PPA over the upcoming nanoribbon process nodes, a key consideration will be the ultimate cost of the CFET device topology.  Marko presented the following cost estimate comparisons, part of a collaboration with IC Knowledge LLC.  The category breakdowns are:  lithography, deposition, etch, CMP, metrology, and other.  Note that the CFET examples include a BPR distribution, opening up additional cell tracks for signal routing.   The major contributors to the sequential CFET cost difference are the wafer bonding and separate top device processing.

In total, the PPAC benefits of CFET fabrication look compelling, even though the total CFET process cost is higher. (A more challenging tradeoff is whether the flexibility afforded by sequential CFET device fabrication with different substrates will warrant the additional cost.)

Summary

Although process development challenges remain to be solved, a CFET device process roadmap appears to be a natural extension to the nanoribbon devices soon to achieve production status.

At the recent VLSI Symposium on Technology and Circuits, Intel presented both their R&D results and experimental data from other researchers, demonstrating a compelling PPAC benefit. The FinFET era will have lasted a little more than a decade, through seven process node generations, as depicted below.

To date, roadmaps for nanoribbon devices depict (at least) two nodes.

The benefits of CFET devices, and the leverage of nanoribbon fabrication (and modeling and EDA infrastructure) expertise, may result in a shorter lifetime for nanoribbons.

-chipguy

Also Read:

Intel Foundry Services Puts PDKs in the Cloud

Intel 4 Deep Dive

An Update on In-Line Wafer Inspection Technology


What’s New With Calibre at DAC This Year?
by Daniel Payne on 07-12-2022 at 9:00 am

When I worked at EDA vendors and attended DAC, one of the most popular questions asked in the booth and suites was simply, “What’s new this year?” It’s a fair question, and yet many semiconductor professionals are so focused on their present project, using their familiar methodology, that they simply aren’t aware of all of the industry changes, and specifically EDA tool updates. I was made aware of several changes to the popular Calibre tool family – specifically Calibre RealTime and Calibre Recon – from Siemens EDA, announced this week at DAC, so here’s my take on them.

What’s Changed

In the past two decades of semiconductor design we’ve moved beyond the simple EDA requirements of getting a cell, block and chip to be DRC clean, LVS clean and DFM ready with dummy fill. In leading-edge process nodes the types of analysis and number of foundry rules have just exploded, and IC masks started using Double Patterning (DP), Multi-Patterning (MP) and finally EUV.

Complexity Changes over Time

Calibre RealTime Custom

The earliest Design Rule Check (DRC) and Layout Versus Schematic (LVS) tools were designed to be run in batch mode, so engineers submitted their jobs, went to lunch, then in the afternoon viewed the errors and communicated with the layout designers on what and where to fix each violation. Sounds too labor intensive and time consuming.

In recent years the new idea of running these DRC and LVS checks live, while the layout designer works on the physical design, came into vogue, bringing along a more efficient methodology that saves time through near-instant feedback instead of waiting for a batch job to finish in a queue. The big question with running DRC in real time has always been, “Are these checks approved by the foundry, and are they complete, or just a reduced subset of the rule checks?”

The new productivity feature with Calibre RealTime Custom is that it now automatically tracks DRC across multiple regions, so that multiple edits can be fixed, tracked and checked simultaneously. This makes a team approach to reaching DRC clean in real time possible, and the benefit is saving precious time. Oh, and the checks are signoff-quality too.

Calibre RealTime Digital

Design For Manufacturing (DFM) requires that the IC surface be more planar, instead of having hills and valleys, so adding dummy fill was a starting point for automation. Advanced nodes down to 3nm require more than dummy fill, and now Calibre RealTime Digital supports in-design fill using Calibre Yield Enhancer SmartFill, so designers enjoy foundry-signoff fill from within the design cockpit. It’s a faster way to be DFM ready.

Calibre nmDRC-Recon

Alex Tan blogged earlier about Calibre Recon here, and it’s a way to reduce the number of DRC checks and violations, in order to pinpoint the biggest layout issues first. The new feature with Calibre nmDRC-Recon when using Calibre RealTime Digital is that you can “gray-box” out some of your cells and blocks that are still works in progress, while still checking DRC for all the connections to adjacent blocks and even the upper-level metal layers. Fast feedback, earlier.

Calibre nmLVS-Recon

Finding and fixing interconnect shorts using Calibre LVS was blogged about by Tom Dillinger earlier. What’s new with Calibre nmLVS-Recon is that you can run short isolation (SI) mode several times per day, instead of waiting overnight for results. Designers continue to use the same design inputs and foundry rule decks as before, so it’s simple to learn and get running with earlier results.

Summary

Calibre has quite a broad family of tools, where each one is optimized for a specific physical verification or reliability task. What started out as only a batch-oriented EDA tool has now blossomed into interactive versions that provide earlier feedback and faster times to a clean IC design and layout. New features have been added and announced at DAC this year, so the automation benefits just keep growing to tackle all of the new process node complexity increases. The Calibre tools also work inside of your favorite IC layout, design and physical implementation tools, while providing a consistent UI experience, so go ahead and mix-and-match vendors.

If you attend DAC this year in San Francisco, then make your way over to the Siemens EDA booth, it’s #2521, located on the second floor. Ask the experts in the booth, “So, what’s new this year?”

Related Blogs


Intelligently Optimizing Constrained Random
by Bernard Murphy on 07-12-2022 at 6:00 am

Potential coverage problems

“Who guards the guardians?” This is a question from Roman times which occurred to me as relevant to this topic. We use constrained random to get better coverage in simulation. But what ensures that our constrained random testbenches are not wanting, maybe over constrained or deficient in other ways? If we are improving with a faulty methodology our coverage may seem strong while in fact being incomplete. Synopsys VCS supports an ML-based capability called Intelligent Coverage Optimization (ICO) to answer precisely this question.

What’s wrong with my CR testbench?

Constrained random methods require constraints that we define. Constraints are a practical but coarse and very low-level way to attempt to map the boundaries of intended functionality. Brilliant though we may be, we are imperfect, which can lead to coverage problems. In the figure above, the green circle represents the testing space we intend to generate. The yellow circle graphs the space CR can generate given the constraints we have defined. The misalignment between the two indicates multiple testbench issues. Some part of the test space we should cover, we can’t reach (missing stimuli). In some regions we will generate tests that do not correspond to intended behavior (illegal stimuli). Even within the overlap, some parts of the space we may cover too heavily (over bias), and some parts may be under-generated or not generated at all (under bias).

Our attempts to better cover design testing themselves need scrutiny, to expose these issues early on and rectify them wherever possible. That is the purpose of ICO – to help you build better testbenches for coverage optimization.

How ICO works

This capability is built into the constraints solver in VCS and uses machine learning (specifically reinforcement learning) to improve stimulus quality for better coverage. In the course of this optimization it will also expose testbench bugs. There’s a lot of detail in the link below on controlling this analysis and reviewing results. My main takeaways are that the analysis does not require recompilation and can be reviewed post-run in text or HTML tables, or in Verdi.
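Synopsys does not publish ICO’s internals, but the general reinforcement idea – use coverage feedback to bias the next randomization away from over-hit bins – can be sketched in toy form. Everything below is an invented illustration, not the VCS algorithm:

```python
# Toy illustration of coverage-feedback-driven stimulus reweighting. This is
# NOT VCS's ICO algorithm -- just the general reinforcement idea: damp the
# sampling weight of coverage bins we keep hitting, so rarely hit bins become
# relatively more likely on subsequent randomizations.
import random
from collections import Counter

BINS = 16                      # pretend functional coverage has 16 bins
weights = [1.0] * BINS         # start with a uniform sampling distribution
hits = Counter()

def draw_stimulus() -> int:
    # Stand-in for a constraint solve that lands in one coverage bin.
    return random.choices(range(BINS), weights=weights, k=1)[0]

for _ in range(2000):
    bin_hit = draw_stimulus()
    hits[bin_hit] += 1
    # "Reinforcement" step: reduce this bin's weight (floored to stay reachable).
    weights[bin_hit] = max(0.05, weights[bin_hit] * 0.9)

print(f"bins covered: {len(hits)}/{BINS}, "
      f"max/min hit ratio: {max(hits.values()) / min(hits.values()):.2f}")
```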

Malay Ganai (Synopsys scientist), who presented the technology, shared a mobile customer’s experience with ICO, comparing non-ICO and ICO analysis directly. The user found 30% more testbench bugs with the ICO analysis and, importantly, found an RTL deadlock which could not be found with non-ICO analysis. They were also able to reduce regression time from 10 days to 6 days for the same functional coverage. Other users report finding critical bugs earlier (thanks to better stimulus coverage), finding more bugs faster and reducing regression times significantly.

AMD experience

AMD presented a more detailed summary of their findings using ICO, comparing also with non-ICO runs. They confirmed that they consistently find ICO covers more tests with the same number of seeds. In one striking example, ICO-based analysis was able to reach the same coverage as non-ICO in 15 regressions versus 23. That’s quite a convincing differentiating value with ICO.

AMD also gave a talk on improving testbench quality. This comes down to analysis diagnostics for skewed distributions, together with root cause analysis. They used ICO features to resolve under-constraints and over-constraints, and to fix constraint inconsistencies. An example might be declaring a constraint variable as int when it should be a more limited bit-width, allowing far too wide a range in randomization, thus affecting coverage and runtime.
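To see why that int declaration hurts, here is a minimal Python toy – not SystemVerilog, and not tied to any Synopsys tooling – contrasting randomization over a full 32-bit range against the 4-bit field the design actually uses:

```python
# Toy demonstration of the over-wide constraint bug described above: the design
# only cares about a 4-bit field (16 legal values), but the testbench
# randomizes a full 32-bit int, so almost every draw lands outside the
# interesting space and coverage crawls.
import random

N_DRAWS = 1000
legal = set(range(16))  # the values a 4-bit declaration would produce

wide_hits = {v for v in (random.getrandbits(32) for _ in range(N_DRAWS)) if v in legal}
narrow_hits = {random.getrandbits(4) for _ in range(N_DRAWS)}

print(f"32-bit randomization: {len(wide_hits)}/16 legal values hit")    # almost always 0
print(f" 4-bit randomization: {len(narrow_hits)}/16 legal values hit")  # 16
```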

ICO guards the testbench guardians. The testbench measures and helps optimize coverage for the design while ICO measures and helps optimize coverage for the testbench.

You can watch the SNUG presentations for both talks HERE. Register with a SolvNet Plus ID and search for ‘Early Use-cases of VCS ICO (Intelligent Coverage Optimization)’ by Malay Ganai and ‘Accelerate Coverage Closure and Expose Testbench Bugs using VCS ML-Driven Intelligent Coverage Optimization Technology’ by AMD.


Memory Security Relies on Ultra High-Performance AES-XTS Encryption/Decryption
by Kalar Rajendiran on 07-11-2022 at 10:00 am

A recent SemiWiki post covered the topic of protecting high-speed interfaces in data centers using security IP. That post was based on a presentation made by Dana Neustadter at the IP-SoC Silicon Valley 2022 conference. Dana’s talk was an overview of various interfaces and Synopsys’ security IP for protecting those interfaces. As a senior product marketing manager for Security IP, Dana has now written a technical bulletin that dives into the details of memory security. This post is based on a review of that bulletin.

Technology – A Double-Edged Sword

Many of us have seen (or heard of) the Steven Spielberg movie “Catch Me If You Can” starring Leonardo DiCaprio. The movie is based on the real life of Frank Abagnale, who turned into a con artist at a very young age. While the movie may have taken some Hollywood liberties, it is true that Frank committed many crimes until he was caught and served time. His crimes ranged from simple things such as writing fraudulent checks to impersonating an airline pilot and even a resident doctor. After serving time, he reformed and has been serving the FBI for many decades, helping prevent white-collar crimes and catch criminals.

A few years ago, Frank Abagnale appeared on the Google Tech Talks series and gave a talk to a technology audience. An audience member asked Frank if he felt he would be able to successfully commit all the crimes in the current technology era. To the audience’s surprise, Frank said it is much easier now than it was four decades ago. Frank went on to explain that while technology offers more things, faster, cheaper and easier to use, these aspects also make it easier for a smart criminal.

The technology world’s challenge is to offer security solutions that work seamlessly with advanced products without compromising on performance, ease of use and flexibility among other requirements. Factors such as laws and regulations, changing nature of security threats, a growing attack surface and standards evolution are all placing the spotlight on security solutions.

Encryption at the Heart of Memory Security

Preventing security breaches involves many steps, but encryption is always at the heart of the solution. If data is encrypted, even if someone gets their hands on the data, they cannot do malicious things using the encrypted data. While security solutions need to be hard to break, it is also critical for the encryption/decryption process to be quick and easy. Naturally, encryption algorithms have been getting a lot of attention over the decades.

Characteristics of a Good Memory Security Solution

A good memory security solution needs to support not only the latest interfaces but also utilize minimal additional chip/board real estate and execute with very low latency. AES-XTS, or XTS-AES as it is sometimes called, is the de-facto cryptographic algorithm for protecting the confidentiality of data-at-rest and data-in-use on storage devices.

It is a standards-based symmetric algorithm defined by NIST SP800-38E and IEEE Std 1619-2018 specifications. By design, it allows for pipelined architectures that can scale in performance to Terabits per second (Tbps) bandwidth. And its Ciphertext Stealing (CTS) mode provides support for data units with sizes that are not divisible by the 16-byte block size of the underlying AES cipher.
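To make the mode concrete, here is a minimal software sketch using the open-source Python cryptography package. It is purely illustrative – the IP described here implements AES-XTS in hardware pipelines at Tbps rates – but it shows the essential XTS contract: a 256- or 512-bit key plus a per-data-unit tweak:

```python
# Minimal AES-XTS sketch using the "cryptography" package (pip install cryptography).
# Illustrative only; inline memory encryption hardware does this in dedicated
# pipelines, not in software.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(64)  # 512-bit XTS key: two concatenated 256-bit AES keys

def xts_tweak(data_unit: int) -> bytes:
    # The tweak is the 128-bit index of the data unit (e.g., a sector or a
    # block of memory), so identical plaintext encrypts differently per unit.
    return data_unit.to_bytes(16, "little")

def encrypt_unit(plaintext: bytes, unit: int) -> bytes:
    enc = Cipher(algorithms.AES(key), modes.XTS(xts_tweak(unit))).encryptor()
    return enc.update(plaintext) + enc.finalize()

def decrypt_unit(ciphertext: bytes, unit: int) -> bytes:
    dec = Cipher(algorithms.AES(key), modes.XTS(xts_tweak(unit))).decryptor()
    return dec.update(ciphertext) + dec.finalize()

data = os.urandom(64)  # one 64-byte data unit (any length >= 16; CTS covers the rest)
ct = encrypt_unit(data, unit=42)
assert decrypt_unit(ct, unit=42) == data  # same unit number round-trips
assert decrypt_unit(ct, unit=43) != data  # XTS is unauthenticated: a wrong tweak
                                          # silently yields scrambled data
```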

While it is one thing to have a standardized algorithm, it is yet another thing to deploy an optimized implementation of the same algorithm. The implementation needs to not only use minimal area and work at low latency but also support all key sizes and seamless context switching for a high number of changing contexts. And in particular for deployments in North America, the implementation should be certifiable to at least FIPS 140-3 Level 2, if not Level 3 for more security sensitive applications.

Synopsys’s Ultra High-Performance AES-XTS IP for HPC

Synopsys Ultra High-Performance AES-XTS Cryptographic IP core (see block diagram below) possesses the above characteristics while providing the flexibility needed to adjust to SoC designs’ specific use cases.

Some key benefits of integrating Synopsys’ standards-compliant AES-XTS crypto cores include:

  • High performance, low latency IP with efficient support for varied data traffic
  • Scalable throughput from 128 to 4096 bits/cycle, achieving bandwidths beyond 4 Tbps (see the worked example after this list)
  • Efficient encryption and decryption with 256-bit and 512-bit AES-XTS key sizes
  • Latency as low as 4 cycles
  • One tweak per cycle pre-computation
  • Seamless message interleaving, key setup, and key refresh for up to 64K cryptographic contexts
  • Multi-clock domain support
  • Dedicated secure key port
  • Area, latency, performance, and maximum frequency optimization options
  • FIPS 140-3 certification ready
  • Path for seamless full-duplex inline memory encryption integration with memory interface controllers
  • Support for the latest memory interface generations, DDR4/LPDDR4 and DDR5/LPDDR5
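As a sanity check on the throughput bullet above, bandwidth is simply datapath width times clock frequency. The 1 GHz figure below is an assumed, illustrative clock rate, not a Synopsys specification:

```latex
\text{Bandwidth} = \frac{\text{bits}}{\text{cycle}} \times f_{\mathrm{clk}}
\qquad\Rightarrow\qquad
4096\ \tfrac{\text{bits}}{\text{cycle}} \times 1\ \text{GHz} = 4.096\ \text{Tbps}
```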

Click here to access the whole technical bulletin. Synopsys offers a number of highly configurable security IP solutions. For more details, refer to the security IP product page.