
Podcast EP182: The Alphacore/Quantum Leap Solutions Collaboration Explained, with Ken Potts and Mike Ingster

by Daniel Nenni on 09-15-2023 at 10:00 am

Dan is joined by Mike Ingster from Quantum Leap Solutions (QLS) and Ken Potts from Alphacore. Mike and Ken explain how QLS and Alphacore collaborate to provide industry-leading IP and system solutions to their mutual customers. The markets served by both QLS and Alphacore are discussed and the synergies are explained in this informative podcast.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: Koen Verhaege, CEO of Sofics

by Daniel Nenni on 09-15-2023 at 6:00 am


Koen Verhaege, CEO of Sofics (“Solutions for ICs”), has developed his career first as an engineer, later as a business leader and entrepreneur, working on IP development and valorisation. Koen’s technical accomplishments, publications and patents are in the field of on-chip ESD protection design.

Today, Koen leverages his problem-solving skills in shaping corporate strategy, evolving business models, and forging strategic deals. His unwavering commitment revolves around delivering Distinct Recurring Value in every facet of Sofics’ operations.

Tell us about your company?
We are and aspire to remain a premium physical design IP provider delivering device and circuit solutions to IC manufacturers and IC design companies by leveraging our extensive on-chip EOS/ESD/EMC IP portfolio.

Our motto is to consistently deliver Distinct Recurring Value in all our engagements, upholding the highest standards of excellence, quality, integrity, and social responsibility. Our strategy involves staying at the forefront of next-gen technologies and IC challenges, attracting high-value customers, and retaining top talent.

What problems are you solving?
We offer on-chip robustness solutions that outperform standard options, or achieve normal robustness at lower cost or with fewer constraints, while ensuring first-time-right silicon. Our library includes ESD cells and specialty I/Os and PHYs for all silicon processes from 0.18µm down to 3nm today.

What application areas are your strongest?
We excel in addressing over-voltage and over-current hazards for a wide range of applications, as well as high-speed, high-frequency, and automotive designs. We cater both to those who design their own interfaces and to those seeking circuit-ready solutions.

What keeps your customers up at night?
Customers worry about meeting conflicting reliability and normal operation specifications, the availability of foundry or 3rd party IP, and the risks of EOS/ESD/EMC failures or IP infringement causing them delays and impacting ROI and market share.

What does the competitive landscape look like and how do you differentiate?
We focus on providing Distinct Recurring Value and as such we will not engage in direct competition with low-cost service providers.

Our extensive patent portfolio in robust device solutions and interface circuits sets us apart. We lead in technology innovation, distinct recurring value solutions, and we support our solutions across a wide range of processes and foundries.

Half of our engineering time is reserved for and dedicated to research and development: constant innovation is another area where we are unique. This ensures our solutions are ready when customer needs arise.

We are an IP and not a service company – but don’t be mistaken: we deliver the best service to our customers.

What new features/technology are you working on?
We focus on leading-edge, high-value opportunities. This requires access to technology and to challenges. Our strong relationships with leading foundries, like TSMC and Samsung, grant us early access to new technology. Our customer base, which includes more than 100 fabless companies, secures access to the engineering challenges in future products.

Today, we see opportunities in automotive integrated circuits and legacy compatible interfaces in advanced CMOS and FinFET technologies.

How do customers normally engage with your company?
Customers discover us through our personal business network and via our digital presence (LinkedIn, blog, website). We have developed a structured onboarding process with fixed-price customization and delivery of proven solutions via license agreements, reducing upfront risk for customers while ensuring Distinct Recurring Value for both parties.

How do you make a difference for engineers in this time of resource shortages?
We prioritize Distinct Recurring Value for our employees, offering opportunities to work with advanced technologies and challenges in the IC field. Our engineers are constantly gathering building blocks for a great career, and at Sofics we pave the path for that career.

Our flexible work arrangements and our energy-efficient, employee-designed office provide an ideal hybrid working environment.

Also Read:

CEO Interview: Harry Peterson of Siloxit

Breker’s Maheen Hamid Believes Shared Vision Unifying Factor for Business Success

CEO Interview: Rob Gwynne of QPT


Deeper RISC-V pipeline plows through vector-scalar loops

by Don Dingee on 09-14-2023 at 10:00 am

Atrevido 423 + V16 Vector Unit with its deeper RISC-V pipeline technology, Gazillion

Many modern processor performance benchmarks rely on as many as three levels of cache staying continuously fed. Yet, new data-intensive applications like multithreaded generative AI and 4K image processing often break conventional caching, leaving the expensive execution units behind them stalled. A while back, Semidynamics introduced us to their new highly customizable RISC-V core, Atrevido, with its Gazillion memory retrieval technology designed to solve more big data problems with a different approach to parallel fetching. We recently chatted with CEO and Founder Roger Espasa for more insight into what the deeper RISC-V pipeline and customizable core can do for customers.

Minimize taking your foot off the vector accelerator

We start with a deeper dive into the vector capability. It’s easy to think of cache misses as causing an outright pipeline stall, where all operations must wait until data moves refill the pipeline. A better-fitting metaphor for a long data-intensive pipeline, such as in Atrevido, may be a Formula 1 racecar. Wild hairpin corners may still require braking, but gentler turns around most circuits present an opportunity to stay on the accelerator, backing off as little as possible.

Few applications use vector math exclusively; scalar instructions sprinkled in the loop can cause a finely-tuned vector pipeline to sputter without proper handling. “Our obsession is to keep a deeper RISC-V pipeline busy at all times,” says Espasa. “So, we do whatever the memory pipeline needs, and in some cases, that may be a little bit more scalar performance.”

The Atrevido 423 core adds a 4-wide decode, rename, and issue/retire architecture designed to speed up mostly vector math with some scalar math mixed in. “The out-of-order pipeline coupled with 128 simultaneous fetches really helps get scalar instructions out of the way fast –  4-wide helps with that extra last bit of performance,” continues Espasa. “We can get back to the top of the loop, find more vector loads and start pulling those in while the scalar stuff at the tail end finishes.”
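The benefit of overlapping scalar tails with the next iteration's vector work can be sketched with a toy cycle-count model (the cycle figures below are invented for illustration; they are not Semidynamics data):

```python
# Toy model: a loop of vector chunks, each followed by a few scalar "tail"
# instructions. Cycle counts are invented purely for illustration.
VECTOR_CYCLES = 8   # cycles of vector work per loop iteration
SCALAR_CYCLES = 3   # scalar bookkeeping at the tail of each iteration
ITERATIONS = 100

# In-order: the scalar tail serializes with vector work every iteration.
in_order = ITERATIONS * (VECTOR_CYCLES + SCALAR_CYCLES)

# Out-of-order with wide decode: the scalar tail of iteration i overlaps the
# vector loads of iteration i+1, so only the final tail remains exposed.
out_of_order = ITERATIONS * VECTOR_CYCLES + SCALAR_CYCLES

print(in_order, out_of_order)   # 1100 vs 803
speedup = in_order / out_of_order
print(f"speedup from hiding scalar tails: {speedup:.2f}x")
```

Even a short scalar tail costs real throughput when it serializes every iteration, which is the case Espasa's 4-wide out-of-order front end is built to avoid.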

It’s worth noting everything happens without managing the ordering in software; the code just issues instruction primitives, and execution occurs when the data arrives. Espasa points out that one of the strengths of the RISC-V community is that his firm doesn’t need to work on a compiler; plenty of experts are working on that side, and the code is standard.

Vector units may appear a lot smaller than they are

After seeing that vector unit in the diagram, we couldn’t resist asking Espasa one question: how big is the Atrevido vector unit in terms of area? Die size is a your-mileage-may-vary question with so much customizability and different process nodes. And when they say customizability, they mean it. Instead of one configuration – say, ELEN=64 and eight Vcores for a 512-bit DLEN engine standard in some other high-end CPU architectures – customers can pick their vector scale. The vector register length is also customizable from 1x up to 8x.
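The configuration arithmetic behind those choices is simple: the vector datapath width (DLEN) is the element width (ELEN) times the number of Vcores. A small sketch using the article's example figures (the helper function is ours, not a Semidynamics API):

```python
def vector_datapath_bits(elen: int, vcores: int) -> int:
    """Total vector datapath width (DLEN) = element width (ELEN) x Vcores."""
    return elen * vcores

# The article's reference point: ELEN=64 with eight Vcores gives a 512-bit engine.
assert vector_datapath_bits(64, 8) == 512

# Customer-chosen scales, per the configurability Espasa describes:
for vcores in (2, 4, 8, 16):
    print(f"ELEN=64, Vcores={vcores} -> DLEN={vector_datapath_bits(64, vcores)} bits")
```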

“We don’t disclose die area publicly, but our larger vector unit configurations are taking up something like 2/3rds of the area,” says Espasa. “We’ve started calling them Vcores because it’s easier to transition customer thinking from CUDA cores in GPUs.” He then interjects some customers are asking for more than one vector unit connected to each Atrevido core (!). The message remains the same: Semidynamics can configure and size elements of a RISC-V Atrevido to meet the customer’s performance requirements more efficiently than tossing high-end CPUs or GPUs at big data scenarios.

Some emerging use cases for a deeper RISC-V pipeline

We also asked Espasa what has happened that maybe he didn’t expect with early customer engagements around the Atrevido core. His response indicates a use case taking shape: lots of threads running on simpler models.

“We continuously get requests for new data types, and our answer is always yes, we can add that with some engineering time,” Espasa points out. int4 and fp8 additions say a lot about the type of application they are seeing: simpler, less training-intensive AI inference models, but hundreds or thousands of concurrent threads. Consider something like a generative AI query server where users hit it asynchronously with requests. One stream is no big deal, but 100 can overwhelm a conventional caching scheme. Gazillion fetches help achieve a deeper RISC-V pipeline scale not seen in other architectures.
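To see why int4 support matters for these workloads, here is a generic symmetric-quantization sketch showing the int4 data type's 16 representable values in [-8, 7] (our illustration, not Semidynamics' implementation):

```python
def quantize_int4(values, scale):
    """Symmetric int4 quantization: clamp codes to the range [-8, 7]."""
    out = []
    for v in values:
        q = round(v / scale)
        out.append(max(-8, min(7, q)))
    return out

def dequantize(qvalues, scale):
    """Lossy reconstruction from int4 codes."""
    return [q * scale for q in qvalues]

weights = [0.31, -0.07, 0.52, -0.48]
scale = 0.07                  # chosen so the largest weight maps near +7
q = quantize_int4(weights, scale)
print(q)                      # 4-bit codes instead of 32-bit floats
print(dequantize(q, scale))   # approximate reconstruction
```

Each weight now occupies 4 bits instead of 32, which is why simpler inference models tolerate it and why thousands of concurrent threads become feasible in the same memory budget.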

There’s also the near-far imaging problem – having to blast through high frame rates of 4K images looking for small-pixel fluctuations that may turn into targets of interest. Most AI inference engines are good once regions of interest take shape, but having to process the entire field of the image slows things down. When we mentioned one of the popular AI inference IP providers and their 24-core engine, Espasa blushed a bit. “Let’s just say we work with customers to adapt Atrevido to what they need rather than telling them what it has to look like.”

It’s a recurring theme in the Semidynamics story: customization within the boundaries of the RISC-V specification takes customers where they need to go with differentiated, efficient solutions. And the same basic Atrevido architecture can go from edge devices to HPC data centers with deeper RISC-V pipeline scalability choices, saving power or adding performance. Find out more about the recent Semidynamics news at:

https://semidynamics.com/newsroom

Also Read:

Deeper RISC-V pipeline plows through vector-scalar loops

RISC-V 64 bit IP for High Performance

Configurable RISC-V core sidesteps cache misses with 128 fetches


Successful Inter-Op Verification of Enterprise Flash Controller with ONFI 5.1 PHY IP

by Kalar Rajendiran on 09-14-2023 at 6:00 am


In an era defined by digital transformation and data-intensive applications, the solid-state device (SSD) market has emerged as a critical player in reshaping storage solutions. While there are several types of non-volatile memories, each with its own unique characteristics and use cases, Flash memory is increasingly overtaking other types of solid-state devices. This is due to its unique combination of characteristics and advantages that align with the evolving needs of modern computing and storage applications. While Flash memories gained their popularity in consumer applications, they are making significant inroads into enterprise applications. In addition to the obvious benefits over hard disk drives (HDD), flash memories can be scaled up to accommodate larger capacities, better than alternate non-volatile solutions. This scalability allows for high-density storage solutions and is vital for data centers, cloud storage, and other enterprise-level applications. Flash memory, particularly NAND Flash, is being increasingly adopted in enterprise storage systems to provide high performance and scalability for mission-critical applications.

Enterprise Flash Controller (EFC) and ONFI PHY

Enterprise Flash Controller IP refers to the core component responsible for managing the data flow between a host system and Enterprise NAND flash memory storage. Enterprise NAND flash devices are a type of NAND flash memory-based storage solution designed specifically for enterprise-level applications. These devices are optimized to meet the high-performance, reliability, and endurance demands of data center environments, servers, and mission-critical enterprise applications. An enterprise-grade flash controller is optimized for high-speed data transfer, error correction, wear leveling, and other crucial functions, ensuring the seamless operation of flash storage in demanding applications.

The ONFI PHY IP (Open NAND Flash Interface Physical Layer Intellectual Property) is a critical component of NAND flash memory systems. It refers to the implementation of the physical layer interface as defined by the ONFI specification, which governs the communication between a NAND flash memory controller and the NAND flash memory devices. The ONFI specification standardizes how data is transferred to and from NAND flash memory chips, ensuring compatibility and interoperability between different manufacturers’ products.

The ONFI 5.1 PHY specification is the latest revision of the ONFI PHY and extends NV-DDR3 and NV-LPDDR4 I/O speeds up to 3600 MT/s. To support the faster data rates, ONFI 5.1 introduces Write Duty Cycle Adjustment (WDCA), per-pin VrefQ adjustment, equalization, and unmatched DQS options for NAND vendors. ONFI 5.1 also adds ESD specifications, adjusts the tDQSRE and tDQSRH specifications, and relaxes the data input/output preamble timings for NV-DDR2/3 to tWPRE2/tRPRE2.
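The headline rate translates into per-channel bandwidth with simple arithmetic: MT/s counts transfers on the 8-bit DQ bus, and double data rate means the clock runs at half the transfer rate. A back-of-envelope sketch:

```python
# ONFI 5.1 headline rate: 3600 mega-transfers per second on an 8-bit DQ bus.
transfers_per_sec = 3600e6
bus_width_bytes = 1   # 8-bit data bus moves one byte per transfer

bandwidth_bytes = transfers_per_sec * bus_width_bytes
print(f"peak per-channel bandwidth: {bandwidth_bytes / 1e9:.1f} GB/s")  # 3.6 GB/s

# Double data rate: data toggles on both clock edges, so the clock runs at
# half the transfer rate.
clock_mhz = 3600 / 2
print(f"interface clock: {clock_mhz:.0f} MHz")  # 1800 MHz
```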

Empowering the Future

Enterprise flash controllers and ONFI 5.1 PHY IP are an ideal match for several fast-growing markets that prioritize high-speed data storage and transfer solutions.

Mobiveil’s EFC Design IP

Mobiveil is the first company to develop an extremely configurable EFC. Mobiveil’s EFC Design IP offers seamless access to external NAND flash memory, enabling high-speed transactions that capitalize on the pipeline performance of modern enterprise NAND flash devices. Among its many features, the Mobiveil EFC supports the ONFI 5.1 specification with the NV-LPDDR4 mode of operation, supports various datapath widths, and offers robust support for volume addressing, suspend and resume functions, multi-plane and asynchronous plane read commands, and more. Its configurable features adapt to diverse device needs, while its independent and pipelined interfaces streamline command, data, and report phases. Its architecture allows flexible control of ONFI 5.1 and Toggle devices, ensuring compatibility and efficient addressing schemes. With its versatile architecture, the EFC Design IP delivers high performance while allowing software-defined control over device command sequences.

InPsytech’s ONFI 5.1 PHY IP

InPsytech is an IP company focused on high-speed source-synchronous DDR architectures, SerDes interfaces, special I/Os, and high-speed, low-power standard cell offerings. Currently, InPsytech’s ONFI 5.1 PHY IP has undergone silicon validation across processes from N6/N7 to N28 and has already been delivered to customers. Volume production of customer products incorporating the ONFI PHY IP is expected to commence in 2H2023.

Successful Inter-Op Verification

Mobiveil and InPsytech recently announced the successful Inter-Op verification of Mobiveil’s EFC IP with InPsytech’s ONFI 5.1 PHY IP. Inter-op verification, short for interoperability verification, is the process of ensuring that two or more different components, systems, or technologies can work seamlessly together as intended. In this case, it involves testing the compatibility and interaction between Mobiveil’s EFC Design IP (which manages data transfers to and from NAND flash memory) and InPsytech’s ONFI 5.1 PHY IP (which handles the physical layer communication to NAND flash memory chips). The successful Inter-Op verification signifies that an integrated solution can effectively address the demands of the enterprise flash storage market while delivering the promised performance, reliability and compatibility.

Summary

While this post is about an SSD-focused solution, Mobiveil also recently announced a partnership with Winbond to deliver a HyperRAM controller IP solution for SoC designs. HyperRAM offers much higher density than embedded SRAM and lower power compared to typical DRAMs. Since Mobiveil already offers a PSRAM controller solution, it was easy to adapt to HyperRAM. An earlier post on SemiWiki covered Mobiveil’s PSRAM controller solution in partnership with AP Memory.

Mobiveil is a fast-growing technology company that specializes in the development of silicon intellectual property, platforms, and solutions for various burgeoning markets. Its strategy is to grow with those markets by offering its customers valuable IP that is easy to integrate into SoCs. It offers a wide range of IP solutions for various market and application needs.

To learn more, visit www.mobiveil.com.


WEBINAR: Understanding TSN and its use cases for Aviation, Aerospace and Defence

by Daniel Nenni on 09-13-2023 at 10:00 am


This webinar will introduce Time-Sensitive Networking (TSN) and unveil how TSN can provide value in aviation, aerospace and defence.

TSN is a new set of standard extensions based on the IEEE 802.1 and IEEE 802.3 Ethernet standards. It is designed to provide deterministic guarantees on Quality of Service (QoS) metrics and reliability in a switched Ethernet network. TSN is a broad concept with many features within four key areas: Time Synchronization, Reliability, Latency, and Resource Management. TSN is applicable in multiple industries, all characterized by a need for real-time applications and determinism. One of several benefits of TSN is that the profiles and standards are open, making it possible for equipment from different manufacturers to interoperate.

To ensure an optimum feature set and configuration for each industry, IEEE working groups are defining profiles for each one. The figure below shows the six profiles currently defined; all six are available as drafts of varying maturity.

Figure 1: Illustration of the TSN profiles

This webinar will focus on TSN in aerospace, aviation and defence, as described in IEEE P802.1DP.

Benefits of TSN

The TSN features make it possible to use Ethernet for applications where conventional Ethernet is not feasible due to its limitations in real-time communication and reliability. TSN makes it possible to use the same Ethernet network for both high-priority, time-critical messages and conventional best-effort Ethernet traffic.

This enables many different use cases in numerous industries. Below is a list of some of the benefits that TSN provides.

  • Time synchronization within a network
  • Efficiency, easier management, and cost-effectiveness
  • Reduced and deterministic latency
  • Improved reliability and possibility of redundancy
  • Scalability and simple expansion of the network
  • Possibility of converging networks, while ensuring vital process data is handled in a reliable and deterministic manner despite other traffic on the same network
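One concrete mechanism behind the latency and convergence benefits above is the time-aware shaper of IEEE 802.1Qbv, which opens and closes per-traffic-class gates on a repeating schedule. A minimal sketch (the window values below are invented for illustration):

```python
# Toy IEEE 802.1Qbv-style gate schedule: each entry opens a set of traffic-class
# gates for a window within a repeating cycle. Values are illustrative only.
CYCLE_US = 1000  # the schedule repeats every 1 ms

# (start_us, end_us, open_classes): time-critical class 7 gets an exclusive window.
schedule = [
    (0,   200,  {7}),         # reserved window: only time-critical traffic
    (200, 1000, {0, 1, 2}),   # rest of the cycle: best-effort classes
]

def gate_open(traffic_class: int, t_us: float) -> bool:
    """Is the gate for this traffic class open at time t (modulo the cycle)?"""
    t = t_us % CYCLE_US
    return any(start <= t < end and traffic_class in classes
               for start, end, classes in schedule)

print(gate_open(7, 50))    # True: inside the reserved window
print(gate_open(0, 50))    # False: best-effort blocked, so class 7 sees no queueing
print(gate_open(0, 500))   # True: best-effort runs in its own window
```

Because best-effort frames can never occupy the wire during the reserved window, the time-critical class gets a deterministic latency bound even on a converged network.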

Join Kim Schjøtler Sørensen, our Ethernet IPs Product Manager, for a webinar where he will simplify TSN’s core concepts and present its applications in aerospace, aviation and defence.

Don’t miss out on this opportunity to unravel the potential of TSN technology within your business.

Register Now: https://www.comcores.com/webinar-time-sensitive-networking-tsn-usecases-aviation-aerospace-defence/

US & Europe: Tuesday, 03 October 2023, 11 AM EST, 8 AM PST, 5 PM CET

Asia & Europe: Wednesday, 04 October 2023, 3 PM China, 4 PM Japan & Korea, 9 AM CET

Also Read:

JESD204D: Expert insights into what we Expect and how to Prepare for the upcoming Standard

WEBINAR: O-RAN Fronthaul Transport Security using MACsec

WEBINAR: Unlock your Chips’ Full Data Transfer Potential with Interlaken


Scaling LLMs with FPGA acceleration for generative AI

by Don Dingee on 09-13-2023 at 6:00 am

Crucial to FPGA acceleration of generative AI is the 2D NoC in the Achronix Speedster 7t

Large language model (LLM) processing dominates many AI discussions today. The broad, rapid adoption of any application often brings an urgent need for scalability. GPU devotees are discovering that where one GPU may execute an LLM well, interconnecting many GPUs often doesn’t scale as hoped since latency starts piling up with noticeable effects on user experience. Achronix’s Bill Jenkins, Director of AI Product Marketing, has a better solution for scaling LLMs with FPGA acceleration for generative AI.

Expanding from conversational AI into transformer models

Most of us are familiar with conversational AI as a tool on our smartphones, TVs, or streaming devices, providing voice-based search for simple questions and usually returning the best result or a short list. Requests head to the cloud, where data and search indexing live, and results come back within a few seconds, usually faster than typing. Behind the scenes of these queries are up to three steps, depending on the application: automatic speech recognition (ASR), natural language processing (NLP), and speech synthesis (SS).

Generative AI builds on this concept with more compute-intensive transformer models with billions of parameters. Complex, multi-level prompts can return thousands of words seemingly written from research across various short-form and long-form sources. Accelerating ASR, NLP, and text synthesis – using an LLM like ChatGPT – becomes crucial if response times are to stay bounded within reasonable limits.

A good LLM delivering robust results quickly can draw hundreds of simultaneous users, complicating a hardware solution. One popular approach, cloud-based GPU implementation with on-demand elastic resource expansion, avoids long vendor lead times, allocation that can lock out smaller customers, and the high capital cost of procuring high-end GPUs. But operating expenses can eat up the apparent advantages of rented GPUs at scale. “Spending millions of dollars in the cloud for GPU-based generative AI processing and still ending up with latency and inefficiency is not for everybody,” observes Jenkins.

FPGA acceleration for generative AI throughput and latency

The solution for LLMs is not bigger GPUs or more of them because the generative AI latency problem isn’t due to execution unit constraints. “When an AI model fits in a single high-end GPU, it will win in a contest versus an FPGA,” says Jenkins. But as models get larger, requiring multiple GPUs to increase throughput, the scale tips in favor of Achronix Speedster 7t FPGAs due to their custom-designed 2D network-on-chip (NoC) running at 2 GHz and built all the way out to the PCIe interfaces. Jenkins indicates they are seeing as much as 20 Tbps of bandwidth across the chip and up to 80 TOPS, essentially wiping out floor planning issues.

Achronix has been evangelizing that one FPGA accelerator card can replace up to 15 high-end GPUs for speech-to-text applications, reducing latency by 90%. Jenkins decided to study GPT-20B (an LLM named for its 20 billion parameters) to see how the architectures compare in accelerating generative AI applications. We’ll cut to the punchline: at 32 devices, Achronix FPGAs deliver 5 to 6 times better throughput and similarly reduced latency. The contrast is striking at INT8 precision, which also reduces power consumption in an FPGA implementation.
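The shape of that scaling argument can be illustrated with a toy model in which each added device costs some interconnect efficiency (all numbers below are invented; they are not figures from the Achronix study):

```python
# Toy scaling model: throughput = devices * per-device rate * scaling efficiency,
# where efficiency decays per added device as interconnect latency piles up.
# All numbers are invented to illustrate the shape of the argument.
def throughput(devices: int, per_device: float, decay: float) -> float:
    return devices * per_device * (decay ** (devices - 1))

GPU_PER_DEVICE = 1.0    # normalize: one GPU beats one FPGA on a small model
FPGA_PER_DEVICE = 0.8
gpu  = throughput(32, GPU_PER_DEVICE, 0.935)   # latency compounds across GPUs
fpga = throughput(32, FPGA_PER_DEVICE, 0.995)  # 2 GHz NoC keeps scaling near-linear

print(f"GPU cluster:  {gpu:.1f}")
print(f"FPGA cluster: {fpga:.1f}")
print(f"ratio: {fpga / gpu:.1f}x")
```

With these made-up decay rates, the single-device GPU advantage inverts well before 32 devices, matching the qualitative shape of the 5x-6x result Jenkins describes.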

“Generative AI developers can choose Achronix FPGAs they can actually get their hands on quickly, getting 5x-6x more performance for the same device count, or using fewer parts and saving space and power,” Jenkins emphasizes. He continues to say that familiarity with high-level libraries has kept many developers on GPUs, but they may not realize how inefficient a GPU-based architecture is until they run into these larger generative AI models. Jenkins worked on the team that developed OpenCL, so he understands programming libraries. He shares that AI compilers and FPGA IP libraries have advanced so developers don’t need intimate knowledge of FPGA hardware details or hand-coding to get the performance advantages.

LLMs are not getting smaller, and high-end GPUs are not getting cheaper (although vendors are working on the lead time problems). As models develop and grow, FPGA acceleration for generative AI will be a more acute need. Achronix stands ready to help teams understand where GPUs become inefficient in generative AI applications, how to deploy FPGAs for scalability in real-world scenarios, and how to keep capital and operating expenses in check.

Learn more about the GPT-20B study in the Achronix blog post:
FPGA-Accelerated Large Language Models Used for ChatGPT

Also Read:

400 GbE SmartNIC IP sets up FPGA-based traffic management

eFPGA Enabled Chiplets!

The Rise of the Chiplet


Soitec is Engineering the Future of the Semiconductor Industry

by Mike Gianfagna on 09-12-2023 at 10:00 am


The crystalline structure of silicon delivers the incredible capabilities that have fueled the exponential increases defined by Moore’s Law. It turns out that silicon in its purest form falls short at times – power handling and speed are examples. In these cases, adding other materials to the silicon can enhance its capabilities for demanding requirements. Called compound semiconductors, these enhanced materials unlock many of the high-performance applications that are emerging today. But adding an epitaxial layer of new material to silicon is very difficult, even unpredictable at times. An innovative company has changed all that. Read on to see how Soitec is engineering the future of the semiconductor industry.

Soitec – A Brief History

The semiconductor supply chain is a highly complex, multi-national web of organizations and capabilities. If we trace that supply chain back to its roots, we find the raw material used to manufacture semiconductor devices. This is where Soitec lives. Born out of Grenoble’s CEA-Leti (Atomic Energy Commission/Electronics and IT Technology Laboratory) in the 1990s, Soitec has become a critical source of engineered substrate materials for the entire semiconductor industry.

With state-of-the-art manufacturing facilities in France, Belgium, Singapore, and China, Soitec has become a global leader in engineered substrates. Using its unique Smart Cut™ process, Soitec can reliably and cost-effectively insert an insulating oxide layer between two layers of silicon, creating silicon-on-insulator (SOI) wafers. One of these layers contains the differentiating materials that deliver the required improvements in system performance.

Depending on the materials used, these engineered substrates can deliver enabling performance for RF, power and optical communications as examples. Using its Smart Cut process, Soitec has an ambitious plan for heterogeneous material combinations to deliver an anything-on-anything roadmap.

The possibilities for such a roadmap have broad implications for the entire semiconductor industry. Let’s look at the impact silicon carbide (SiC) compound semiconductors have on the automotive market.

Connecting the Automotive Ecosystem with SiC Manufacturing

This was the title of a presentation Soitec gave at the recent Semicon West event in San Francisco. The presentation focused on the powertrain for EVs and the impact silicon carbide material can have there. Powertrain elements examined included:

  • Electric Motor (and e-transmissions)
  • Battery Pack (modules, cells, battery management)
  • Power Electronics (E-drive/inverter (DC/AC), DC/DC converter, on-board charger (AC/DC))

These elements can add up to over $10,000 of system cost. The use of silicon carbide can have a big impact on these elements. When compared to traditional silicon material based on insulated-gate bipolar transistors (IGBTs), the following substantial improvements are possible:

  • ~50% faster charging time
  • ~5–10% increased range
  • ~$500–$1,000 reduced system cost

So, the question becomes: what is the best path to these improvements? It turns out silicon carbide compound semiconductor material is costly, energy-intensive, and time-consuming to produce. To manufacture a boule of SiC, which will yield 40-50 wafers, there are many process steps that must be carefully controlled. The whole process can take about two weeks at a temperature of 2,500°C, roughly half that of the sun’s surface. Soitec presented the diagram below to summarize the requirements.

The presentation then gave a glimpse into how real Soitec’s anything-on-anything roadmap is. Using the fundamentals of its Smart Cut™ process, Soitec has created a SmartSiC™ engineered substrate. The Smart Cut™ process (think of it as an atomic scalpel) extracts an ultra-thin single-crystalline SiC layer from a so-called donor wafer and bonds it to an ultra-low-resistivity polycrystalline silicon carbide wafer. The donor wafer can then be reused 10 times, said Emmanuel Sabonnadière, vice president, automotive and industrial at Soitec, which makes this new engineered substrate unrivalled.

The benefits of this process are substantial and include:

  • 40,000 tons of CO2 reduction for each 1 million wafers
  • 200mm scalability to accelerate SiC adoption through 10x donor reusability
  • A new generation of SiC devices enabled by an RDS(on) improvement of up to 20%
  • ~8x improved conductivity compared to conventional single-crystal SiC
  • Reduced Capex & Opex

The figure below shows the details of the process.

SmartSiC Process

Strategic partnerships are being set up across the automotive supply chain to deliver on the substantial benefits of this approach.

Comments From the Presenter

Emmanuel Sabonnadière

Emmanuel Sabonnadière, Vice President Division Automotive & Industrial at Soitec, was the presenter at Semicon West. I had the opportunity to chat with him for a bit on the work being done at Soitec and its implications.

He began by explaining that the automotive division at Soitec has grown by 80% over the past year. Impressive. Emmanuel clearly has a passion for the impact that silicon carbide can have on system cost and performance. His history dates back to his time as CEO of CEA-Leti, where a lot of the early innovation occurred.

He discussed the extreme efficiency of Soitec’s process: the single-crystalline silicon carbide layer is complex and challenging to produce, but the donor wafer it comes from can be reused many times to create engineered substrates.

Emmanuel also described the substantial investment being made by Soitec to build out the manufacturing infrastructure needed to broadly deploy its capabilities in the fast-growing EV market. An opening to celebrate first production is planned for the end of September 2023.

To Learn More

Soitec has developed a short, under-two-minute video that puts all the benefits of the SmartSiC process in perspective. I highly recommend having a look; you can find the Soitec video here. It will help you understand how Soitec is engineering the future of the semiconductor industry.


Chiplets and IP and the Trust Problem

by Bernard Murphy on 09-12-2023 at 6:00 am


Perforce recently hosted a webinar on “IP Lifecycle Management for Chiplet-Based SoCs”, presented by Simon Butler, the GM for the Methodics IPLM BU. The central theme was trust, for IPs as much as chiplets. How can an IP/chiplet consumer trust that what they receive has not been compromised somewhere in the value chain from initial construction to deployment in an OEM product?

What is the trust scope?

This feels like a big problem to tackle. On a quick search I see multiple proposed solutions to address different classes of attack:

  • Late-stage added hardware trojans, against which a physical inspection certification authority has been proposed,
  • Known-good-die tagging with a PUF, where the correct tag is not reproducible in a fake die,
  • Zero-trust chiplets, which assume they are operating in an insecure environment; good for them, but this doesn’t necessarily fix the total system,
  • In the pre-silicon part of the chain, mechanisms to fingerprint an IP component along with metadata for validation on receipt.

The Perforce approach to trust management

The last of these options is the area that Simon aims to address. This centers around the bill of materials (BOM) for the SoC. Each IP, and the SoC itself, can be characterized by multiple factors: version number, design configuration scripts, tool versions and configuration scripts, and embedded software. This last item can similarly be broken down into top-level code, libraries, packages, and so on, each with its own version number.

Simon advises that for each item in the BOM, version numbers should be automatically updated where appropriate throughout the development lifecycle. These version updates are important to support traceability – who made what change, when, and why. Metadata should be stored with the IP information to track open bugs in each release, the release in which they were fixed, and test results for the IP. I wonder if here they could also include a fingerprint for the simulation input and output? The results themselves would be too bulky to store, but a fingerprint (like a hash over the testbench and the sim output) would be a tricky thing to fake.
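The fingerprint idea above can be sketched in a few lines. This is my own illustration, not a documented IPLM feature: length-prefixing each artifact before hashing ensures that the digest changes if either the testbench or the simulation log is altered, and that no two different artifact splits can collide on concatenation.

```python
import hashlib

def fingerprint(*artifacts: bytes) -> str:
    """Combine testbench sources and simulation output into one SHA-256 tag."""
    h = hashlib.sha256()
    for blob in artifacts:
        h.update(len(blob).to_bytes(8, "big"))  # length prefix avoids ambiguity
        h.update(blob)
    return h.hexdigest()

# Toy artifacts standing in for a real testbench and its simulation log
testbench = b'module tb; initial $display("ok"); endmodule'
sim_log = b"ok\nfinish at 10ns"

tag = fingerprint(testbench, sim_log)
assert len(tag) == 64                                      # SHA-256 hex digest
assert tag == fingerprint(testbench, sim_log)              # deterministic
assert tag != fingerprint(testbench, sim_log + b" edited") # tamper-evident
```

Storing only the 64-character tag alongside the release metadata keeps the BOM small while still letting a consumer detect a swapped-in simulation result.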

Taken altogether, each component representation and the BOM should be immutable, ensuring traceability of any changes in the BOM, and should therefore also be easily checkable, so that if a change was introduced after the IP or soft SoC was shipped, that fact would become apparent immediately.

Blockchain as a ledger management system for provenance

Of course, if I am an experienced bad actor, I can learn all the ways you generated your metadata and fingerprints and update all the checks after I have inserted my malware. Simon’s suggestion to get around that problem is to use blockchain-managed signatures for important metadata. Here, blockchain should be integrated into the component management platform so that ledger entries can be made and signed on each release. This is a much more difficult thing to compromise. In fact, I wonder if blockchain couldn’t become part of the larger chiplet trust solution? Interesting idea.
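The core property a blockchain ledger adds is that each release entry is hashed together with the hash of the entry before it, so rewriting history requires recomputing every subsequent entry. A minimal hash-chain sketch (my own toy model, without the signing and distributed-consensus machinery a real deployment would use):

```python
import hashlib
import json

def entry_hash(payload: dict) -> str:
    # Canonical JSON so the hash is independent of key ordering
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def append(ledger: list, release: dict) -> None:
    """Append a release record, chained to the previous entry's hash."""
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    entry = {"release": release, "prev": prev}
    entry["hash"] = entry_hash({"release": release, "prev": prev})
    ledger.append(entry)

def verify(ledger: list) -> bool:
    """Re-walk the chain; any edited entry breaks every hash after it."""
    prev = "0" * 64
    for e in ledger:
        if e["prev"] != prev:
            return False
        if e["hash"] != entry_hash({"release": e["release"], "prev": e["prev"]}):
            return False
        prev = e["hash"]
    return True

ledger = []
append(ledger, {"ip": "cpu_ip", "version": "2.4.1"})
append(ledger, {"ip": "cpu_ip", "version": "2.4.2"})
assert verify(ledger)

ledger[0]["release"]["version"] = "9.9.9"  # tamper with an early release record
assert not verify(ledger)                  # the chain immediately fails to verify
```

Even this toy version shows why a ledger is harder to compromise than standalone fingerprints: the attacker must forge every later entry, and with signed entries that also means forging every later signature.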

Methodics IP Lifecycle Management (IPLM) capabilities

Methodics provides a comprehensive range of IPLM capabilities, including:

  • A fully traceable ecosystem enforcing version immutability through design evolution,
  • Release management,
  • IP discovery and reuse features, including automatic cataloging of all IP and metadata,
  • Workspace management across design organizations,
  • IP-centric planning support, enabling different teams to understand characteristics and challenges flagged by other teams, in planning and during development.

You can register to view the webinar HERE.


Synopsys Expands Synopsys.ai EDA Suite with Full-Stack Big Data Analytics Solution

by Kalar Rajendiran on 09-11-2023 at 10:00 am


More than two years ago, Synopsys launched its AI-driven design space optimization (DSO.ai) capability. It is part of the company’s Synopsys.ai EDA suite, an outcome of its overarching AI initiative. Since then, DSO.ai has boosted designer productivity and has been leveraged for 270 production tape-outs. DSO.ai uses machine learning (ML) techniques to explore the design space and identify optimal solutions that meet the designer’s PPA targets. The DSO.ai capability was just the tip of the iceberg in terms of AI-driven technology from Synopsys. Since then, the company has been expanding its AI-driven tool offerings.

At its annual Synopsys Users Group (SNUG) conference back in March 2023, the company announced additional optimization capabilities. These capabilities include verification space optimization (VSO.ai), test space optimization (TSO.ai), analog design migration automation and lithography models development acceleration. Proof of rapid adoption of these tools and capabilities is the fact that Synopsys’ AI-driven revenue already makes up about 10% ($0.5 billion) of the company’s annual revenue.

I sat down this week with Shankar Krishnamoorthy, Synopsys’ GM of the EDA Group to learn about the company’s next expansion of its Synopsys.ai EDA suite with a full-stack big data analytics solution. The newly announced capabilities are made possible by applying AI/ML driven analysis to aggregated data.

“AI and data are two sides of the same coin, and our announcement today is to really augment that Synopsys.ai vision with an end-to-end EDA data analytics platform that we are introducing,” said Krishnamoorthy. “…There’s a tremendous opportunity to run AI/ML pipelines on this data to help customers build very useful applications.”

Aggregating Data is Key

While the DSO.ai, VSO.ai and TSO.ai capabilities are optimizer capabilities, this week’s announcement is about scalable data analytics. Design tools, testing tools and manufacturing tools all generate large amounts of data. By aggregating these data into big data stores and performing AI/ML driven analysis, the full-stack big data analytics solutions help customers build customized applications for various use cases. The big opportunity with data analytics is that once the data is aggregated, we can start building models to predict what can happen in the future, which is predictive analytics. We can also take it one step further to prescribe what needs to be changed to achieve improvements, which is prescriptive analytics. Both predictive and prescriptive analytics are key benefits of the end-to-end EDA analytics solution. Whether performing root cause analysis on issues, identifying anomalies, or optimizing workflows, customers benefit from improved results and increased productivity.

Generative AI

With AI, the data on which foundation models are trained determines the quality of the end applications to be enabled. That is why the data aspect of the Synopsys.ai initiative is so critical, and equally important is a data platform to aggregate relevant data. Synopsys is providing this platform to help customers aggregate their data, whether it be design data, silicon/product engineering data or fab data. Customers can then build interesting GenAI models that allow them to drive a higher level of automation and increase efficiencies even more.

Design.da:

Relevant data generated from DSO.ai, VSO.ai and TSO.ai are aggregated and AI/ML techniques applied to enable customers to build interesting applications to improve productivity and efficiencies. The result is accelerated design closure, optimized PPA and fast time to market.

Silicon.da:

Product engineering data already exist at fabless semiconductor companies (FSCs). The Silicon.da capability allows FSCs to aggregate the data, perform analytics, and build models for looking at wafer test data and product test data. Customers benefit from rapid root cause analysis of failed dies and products.

Fab.da:

On the foundry side, process control data have largely gone unexploited in the past for lack of big data analytics capability. Digitizing the fab floor is a priority right now for all foundries. Fab.da capabilities address this priority by analyzing the process control data and helping build models for achieving efficiencies. By applying AI/ML techniques to analyze the data generated by the various tools at the fab, the root cause of deviations and excursions can be quickly identified. The various fab tools and processes can be improved not only to increase yield but also to meet other objectives, such as reducing CO2 emissions.

Summary

No matter whether one is a foundry, an FSC or an integrated device manufacturer (IDM), customers are always interested in improving efficiencies and time to market. Depending on the customer type, one or more of the newly announced capabilities will be of appeal and value. An IDM will benefit from using all three (Design.da, Silicon.da and Fab.da) of Synopsys’ newly announced capabilities. An FSC will benefit from using the Design.da and Silicon.da capabilities. And a foundry will benefit from using the Fab.da capability.

You can read the full press release here. To learn more details, visit the data analytics page.

Also Read:

ISO 21434 for Cybersecurity-Aware SoC Development

Key MAC Considerations for the Road to 1.6T Ethernet Success

AMD Puts Synopsys AI Verification Tools to the Test


Stochastic Model for Acid Diffusion in DUV Chemically Amplified Resists

by Fred Chen on 09-11-2023 at 8:00 am


Recent articles have focused much effort on studying the stochastic behavior of secondary electron exposure of EUV resists [1-4]. Here, we consider the implications of extending similar treatments to DUV lithography.

Basic Model Setup

As before, the model uses pixel-by-pixel calculations of absorbed photon dose, followed by a quantum yield of acids (previously, secondary electrons for EUV [1-2]) per pixel, with both the absorbed photon number and the acid generation being subject to Poisson statistics. Gaussian blur is then applied per pixel; however, unlike conventional treatments, the blur scale parameter (often known as sigma) is itself a number randomly chosen from a range or distribution. Smoothing can finally be applied to give more visually realistic images.
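The model setup above can be sketched numerically. This is my own minimal reconstruction, not the author's actual code: the photon density per mJ/cm2, the line-space aerial image, and the blur distribution parameters are illustrative assumptions chosen to match the 80 nm pitch example discussed later.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_blur(img, sigma):
    """Separable Gaussian convolution with a truncated kernel (sigma in pixels)."""
    r = max(1, int(3 * sigma))
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    img = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, img)
    img = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, img)
    return img

# Aerial image: 80 nm pitch lines on a 1 nm pixel grid, peak ~3 mJ/cm2 absorbed
nm = np.arange(160)
aerial = 3.0 * (0.5 + 0.5 * np.cos(2 * np.pi * nm / 80))
dose_2d = np.tile(aerial, (160, 1))

photons_per_mj = 9.7  # ~photons per nm^2 pixel per mJ/cm2 at 193 nm (approximate)
photons = rng.poisson(dose_2d * photons_per_mj)  # absorbed photons: shot noise
acids = rng.binomial(photons, 0.33)              # acid quantum yield of 0.33
sigma = max(0.5, rng.normal(10.0, 1.0))          # blur scale itself drawn randomly
deprotect = gaussian_blur(acids.astype(float), sigma)  # acid diffusion blur
assert deprotect.shape == (160, 160) and deprotect.min() >= 0
```

Each run of this sketch produces one stochastic realization of the deprotection image; repeating it many times is what exposes the rare, large-sigma worst cases the article examines.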

Acid Diffusion Length

Experimentally, it was found that secondary electron blur increased with dose and itself followed an exponential or normal distribution [1-2,5]. Likewise, acid diffusion lengths should also be considered to follow a similar distribution. From the literature, we note that (1) the acid diffusion length is not dependent on dose [6], and (2) it is, as expected, dependent on bake temperature and time [7-8]. Generally, the diffusion length is given as 2*sqrt(Dt), where D is the diffusion coefficient and t is the time elapsed (during bake). So, the range or distribution of acid diffusion lengths corresponds to that of the diffusion coefficient. From the values in the references [6,8], we can estimate a standard deviation of ~1 nm. The target value of the acid diffusion length should, of course, be sufficiently smaller than the target critical dimension (CD), e.g., 40 nm.
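As a quick sanity check of the 2*sqrt(Dt) relation, here are illustrative numbers only: D and t below are assumptions chosen so the result lands at the ~10 nm target value used in the example that follows, not values taken from the references.

```python
import math

D = 0.5   # diffusion coefficient, nm^2/s (assumed for illustration)
t = 50.0  # post-exposure bake time, s (assumed for illustration)

# diffusion length = 2 * sqrt(D * t)
diffusion_length = 2 * math.sqrt(D * t)
print(diffusion_length)  # → 10.0 nm
```

Since the bake time is fixed by the process recipe, wafer-to-wafer variation in the diffusion length maps directly onto variation in D, which is why the distribution of diffusion lengths mirrors that of the diffusion coefficient.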

ArF Immersion Example (80 nm pitch)

Following [6], we may take a target diffusion length of 10 nm with a standard deviation of 1 nm, so that +/-7 standard deviations span +/-7 nm, giving a range of 3-17 nm. The absorbed dose is taken to be 10% of the nominal dose of 30 mJ/cm2. The acid quantum yield is assumed to be 0.33. The worst case would be at +7 standard deviations, i.e., a 17 nm diffusion length, with a probability of 1.28e-12. We examine the typical and worst cases below.
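The quoted 1.28e-12 probability is the one-sided Gaussian tail beyond +7 standard deviations, which can be checked with the complementary error function from the Python standard library:

```python
import math

# One-sided tail probability of a standard normal beyond +7 sigma:
# P(Z > 7) = 0.5 * erfc(7 / sqrt(2))
p = 0.5 * math.erfc(7 / math.sqrt(2))
print(f"{p:.3e}")  # → 1.280e-12
```

Rare as that sounds, at ~10^12 resist pixels or features per wafer-scale exposure field population, even a 1e-12 tail event is worth examining, which is why the +7 sigma case is simulated.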

Figure 1. Typical acid deprotected image for 10nm acid diffusion length. 3 mJ/cm2 absorbed over 80 nm line pitch. Left: half-pitch target. Center: wider exposed feature target. Right: narrower exposed feature target.

Figure 2. +7 standard deviation deprotected image with 17 nm acid diffusion length. 3 mJ/cm2 absorbed over 80 nm pitch. Left: half-pitch target. Center: wider exposed feature target. Right: narrower exposed feature target.

The trend seems to be that narrower exposed features are most sensitive to stochastic defects, and edge roughness would be most commonly observed. Obvious means to address these issues would be higher doses and more absorptive resists. Increasing the absorbed dose to 8 mJ/cm2 (e.g., 20% absorbed from a 40 mJ/cm2 dose) gives us the following.

Figure 3. Typical acid deprotected image for 10nm acid diffusion length. 8 mJ/cm2 absorbed over 80 nm line pitch. Left: half-pitch target. Center: wider exposed feature target. Right: narrower exposed feature target.

Figure 4. +7 standard deviation deprotected image with 17 nm acid diffusion length. 8 mJ/cm2 absorbed over 80 nm pitch. Left: half-pitch target. Center: wider exposed feature target. Right: narrower exposed feature target.

Clearly, the higher dose helps to smooth out the roughness, but the narrow exposed feature is still vulnerable to becoming defective at a low rate. With brightfield attenuated phase-shift masks becoming a standard for improving NILS [9], narrow exposed features can be practically avoided anyway.

References

[1] F. Chen, Modeling EUV Stochastic Defects with Secondary Electron Blur, https://www.linkedin.com/pulse/modeling-euv-stochastic-defects-secondary-electron-blur-chen

[2] F. Chen, Secondary Electron Blur as the Origin of EUV Stochastic Defects, https://www.linkedin.com/pulse/secondary-electron-blur-randomness-origin-euv-stochastic-chen

[3] H. Fukuda, Localized and cascading secondary electron generation as causes of stochastic defects in extreme ultraviolet projection lithography, J. Microlith./Nanolith. MEMS MOEMS 18, 013503 (2019), https://www.spiedigitallibrary.org/journals/journal-of-micro-nanolithography-mems-and-moems/volume-18/issue-1/013503/Localized-and-cascading-secondary-electron-generation-as-causes-of-stochastic/10.1117/1.JMM.18.1.013503.full

[4] H. Fukuda, Cascade and cluster of correlated reactions as causes of stochastic defects in extreme ultraviolet lithography, J. Microlith./Nanolith. MEMS MOEMS 19, 024601 (2020), https://www.spiedigitallibrary.org/journals/journal-of-micro-nanolithography-mems-and-moems/volume-19/issue-2/024601/Cascade-and-cluster-of-correlated-reactions-as-causes-of-stochastic/10.1117/1.JMM.19.2.024601.full

[5] F. Chen, EUV Stochastic Defects from Secondary Electron Blur Increasing With Dose, https://www.youtube.com/watch?v=Q169SHHRvXE

[6] M. Yoshii et al., Influence of resist blur on resolution of hyper-NA immersion lithography beyond 45-nm half-pitch, J. Microlith./Nanolith. MEMS MOEMS 8, 013003 (2009).

[7] D. Van Steenwinckel et al., Lithographic importance of acid diffusion in chemically amplified resists, Proc. SPIE 5753, 269 (2005).

[8] M. D. Stewart et al., Acid catalyst mobility in resist resins, JVST B 20, 2946 (2002).

[9] F. Chen, “Phase-Shifting Masks for NILS Improvement – A Handicap for EUV?”, https://www.linkedin.com/pulse/phase-shifting-masks-nils-improvement-handicap-euv-frederick-chen

Also Read:

Advancing Semiconductor Processes with Novel Extreme UV Photoresist Materials

Modeling EUV Stochastic Defects with Secondary Electron Blur

Enhanced Stochastic Imaging in High-NA EUV Lithography