

Unveiling the Future of Conversational AI: Why You Must Attend This LinkedIn Live Webinar
by Daniel Nenni on 10-16-2023 at 8:00 am


In the ever-evolving world of Conversational AI and Automatic Speech Recognition (ASR), an upcoming LinkedIn Live webinar is set to redefine the speech-to-text industry. Achronix Semiconductor Corporation is teaming up with Myrtle.ai to bring you a webinar on October 24, 2023, at 8:30am PST.

Moderated by EE Times’ Sr. Reporter, Sally Ward-Foxton, the webinar will explore a revolutionary ASR solution that promises to change the acceleration game in Conversational AI. Achronix, a leader in high-performance FPGAs and embedded FPGA (eFPGA) IP, and Myrtle.ai, a company known for optimizing low-latency machine learning (ML) inference for real-time applications, are teaming up to present a technology that’s highly relevant in today’s tech landscape.

At the core of this event lies a real-time streaming speech-to-text solution based on Achronix’s Speedster7t FPGA. Imagine the power to convert spoken language into text in over 1,000 concurrent real-time streams with remarkable accuracy and speed. This isn’t just about innovative technology; it’s about the practical applications and the potential impact on your business. If you’re part of a team that relies on fast, accurate speech-to-text conversion, this webinar is tailor-made for you.

One of the key takeaways from this event is understanding how this ASR solution can significantly reduce operational expenses (OpEx) and capital expenses (CapEx) while maintaining top-tier performance. Bill Jenkins, the Director of AI Product Marketing at Achronix, highlights that it can reduce costs by up to 90% compared to traditional CPU/GPU-based server solutions. In times where efficiency and cost-effectiveness are paramount, this is knowledge that can transform your decision-making processes.

Beyond the impressive cost savings, the webinar is your opportunity to explore the fascinating capabilities of FPGAs. The Achronix Speedster7t FPGA has unique features like a 2D network on chip (NoC) and ML processor (MLP) arrays. These features have been leveraged to create an ASR product significantly more optimized than anything available today. The extremely low latency of these FPGAs makes them ideal for real-time workloads, and this event will unveil how this low-latency technology can supercharge your business’s operations.

Moreover, this ASR solution is not just about performance; it’s also about flexibility. It’s compatible with major deep learning frameworks like PyTorch and offers re-trainability for multiple languages and specialties. If your business has specific needs or requirements, this solution can be customized to suit your objectives, making it a perfect fit for a wide range of industry-specific applications.

So why should you attend this webinar? In addition to unveiling the technology itself, it’s an opportunity to hear from experts in the Conversational AI (CAI) field. Bill Jenkins, an expert in Achronix-FPGA-powered ASR solutions, and Julian Mack, a Senior Machine Learning Scientist at Myrtle.ai, will guide you through this groundbreaking ASR solution, while Sally Ward-Foxton moderates the conversation and shares her take on the current CAI landscape.

It’s a unique opportunity to discover the technology that will reshape how industries process speech data. Mark your calendars and attend this webinar on October 24th at 8:30am PST; it’s your ticket to a future where technology meets innovation.

Also Read:

Scaling LLMs with FPGA acceleration for generative AI

400 GbE SmartNIC IP sets up FPGA-based traffic management

eFPGA Enabled Chiplets!

The Rise of the Chiplet


FD-SOI, the technology shaping the future of automotive radars

FD-SOI, the technology shaping the future of automotive radars
by admin on 10-16-2023 at 6:00 am


By Philippe Flatresse, Bich-Yen Nguyen, Rainer Lutz of SOITEC

    I.          Introduction

Automotive radar is a key enabler for the development of advanced driver assistance systems (ADAS) and autonomous vehicles. The use of radar allows vehicles to sense their environment and make decisions based on that information, enhancing safety and driving performance. Automotive radars are very robust against disturbing atmospheric and environmental factors, able to instantaneously measure distance, angle and velocity and to produce detailed images of the surroundings.

Initially developed for premium vehicles, radar has gained considerable momentum over the past two decades. The first mass-produced 77 GHz radar was implemented in a Mercedes-Benz S-Class in 1998. Eight years later it was followed by a more advanced system combining a 77 GHz long range radar (LRR) with two 24 GHz short range radar (SRR) sensors to address urban traffic [1]. In 2011, the democratization of automotive radar clearly began with the adoption of standard series products in mid-range vehicles.

Ten years later, the worldwide automotive radar market is primarily driven by rising demand for advanced driver assistance systems and further accelerated by the requirement for active safety systems mandated by government laws or new car assessment programs such as NCAP. The global automotive radar market has grown in response to the increased use of radar equipment per vehicle. Several carmakers have announced models with up to 10 radar sensors per vehicle starting in 2025, which will enable the creation of a radar-based 360° surround view necessary for advanced driver assistance and semi-autonomous operation. As a result, the automotive industry is currently experiencing a high demand for high-precision, multi-functional radar systems, which has led to increased research and development activities in the field of automotive radar systems.

To meet the demands of the next generation of radar systems, the move to advanced CMOS technology is considered a necessary transition. Adopting CMOS technology allows for a significant increase in integration density, making it possible to create a radar transceiver that is entirely integrated onto a single chip, known as a radar system on chip (SoC). This type of design typically includes the millimeter wave frontend, analog baseband, and digital processing all on the same chip. It may also include MCUs, DSPs, memory, and machine learning engines, allowing the radar to operate independently with very few external components, thus reducing the overall BOM cost. The main nodes of choice today are 40/45nm, 28nm, and 22nm, with some designs even moving to 16nm.

One of the most promising silicon technologies for automotive radar, already identified by several module makers, is fully depleted silicon-on-insulator (FD-SOI) [2, 3]. FD-SOI technology enables the integration of high-frequency radar components on a single chip. The technology not only improves the performance of the radar system but also allows for low-power operation, which is critical for automotive applications where power efficiency is a concern.

    II.          Automotive Radar trends

The use of radar technology in vehicles is expected to continue growing, driven both by an increase in the number of cars adopting radar and by the amount of radar content per vehicle. These trends are driven by the growing adoption of advanced driver assistance systems (ADAS) and will be further sustained and strengthened in the long term by the development of highly automated or autonomous driving. According to several market reports [4,5], the global automotive radar market accounted for USD 6 billion in 2021 and is estimated to reach USD 22 billion by 2030, growing at a CAGR of 20% from 2022 to 2030. In other words, sales of automotive radars (SRR, MRR, and LRR) for level 2 and above are projected to grow significantly in the coming years, from 100 million units in 2021 to 400 million in 2030, a 4X increase in less than one decade. It is worth mentioning that 50% of automotive radars will be manufactured in CMOS technologies by 2025.

The technology of automotive radar sensors has evolved over the years since its first introduction in 2000. Previously, automotive radars were mainly used in the 24 GHz frequency band for short-range detection and in the 76-77 GHz range for longer-range or more complex applications. It is important to note that the European Telecommunications Standards Institute (ETSI) and the Federal Communications Commission (FCC) have allocated a specific frequency band between 76 GHz and 81 GHz for automotive radar applications. However, due to difficulties in designing efficient and cost-effective integrated circuits at such high frequencies, a temporary frequency band around 24 GHz was also made available while manufacturers developed 77 GHz radar transceivers. With advancements in technology, 77 GHz radar products are now well-developed, and the temporary 24 GHz frequency band has no longer been available since 2022. Nowadays, the trend has shifted towards using the 76-81 GHz frequency band for the development of new sensors. While research into even higher frequency bands above 100 GHz is ongoing, the integration of these technologies into vehicles and the challenges related to semiconductor technology performance are still being studied.

III.          Automotive Radar challenges

Automotive radar technology is a crucial component in the development of advanced driver assistance systems (ADAS) and autonomous vehicles. However, the development and implementation of automotive radar systems comes with several challenges, including the need to save lives, energy, and costs (Fig.3).

Save Lives: Automotive radar technology improves vehicle safety by providing advanced warning of potential hazards on the road. To save lives, automotive radar must have high resolution to accurately detect and identify objects, low latency to provide timely warnings to the driver, and real-time classification capabilities to distinguish between different types of objects, while coping with environmental conditions (such as fog, rain and dust) that can affect radar accuracy. These features allow the radar to detect and track objects such as other vehicles, pedestrians, and animals, and provide the driver with the information needed to make safe decisions on the road. Improving the resolution, latency, and classification capabilities of automotive radar technology can help to reduce accidents and save lives.

Save Energy: Automotive radar technology can also play a role in saving energy by optimizing the size, weight, and power (SWaP) of the system. This can be achieved by using efficient processing techniques and reducing the size and weight of the radar’s package. Optimizing SWaP is particularly important in electric vehicles (EVs) and hybrid electric vehicles (HEVs) where energy storage is limited. Reducing the size and weight of the radar system can also help to reduce the overall vehicle weight and improve fuel efficiency.

Save Costs: Transitioning to CMOS technology is one way to reduce the cost of automotive radar systems. CMOS is a widely used manufacturing process for integrated circuits that allows for the integration of multiple functions onto a single chip. Single-chip solutions can further reduce costs by integrating all the necessary components and functions of the radar system onto one die, which helps to reduce the size and weight of the radar system as well as simplify the manufacturing process. Reliability and yield are also important factors to consider when mass-producing automotive radar systems.

The success of advanced driver assistance systems (ADAS) and autonomous vehicles depends on the ability to address the challenges related to safety, energy efficiency and cost-effectiveness. A critical aspect in this regard is the choice of silicon technology, with a clear shift nowadays towards the use of CMOS technology. The technology that will dominate this field will be the one that can provide high resolution, low latency, and accurate classification, reduce energy consumption, simplify the manufacturing process and reduce costs by offering fully integrated radar systems.

IV.          Automotive radar key metrics

The radar sensor is tasked with identifying and spatially locating obstacles. These can include other vehicles, bicyclists, pedestrians, animals, and even fixed obstacles. The key metrics when analyzing the performance of a radar sensor are how far it can detect objects (range), how finely it can separate closely spaced objects (range resolution), how finely it can resolve object velocity (velocity resolution) and how accurately it can determine object position and trajectory (angle resolution). The table below gives the technical requirements of an automotive radar sensor.

Table 1: Typical performance parameters of advanced LRR radar [6]

The latest radars achieve long range and high levels of accuracy and resolution. High accuracy and resolution enable not just object detection but also object classification. However, the price you pay for more accuracy and resolution is more data: as accuracy and resolution increase, the volume of data goes up accordingly, requiring more computing power. The choice of architecture and the use of efficient CMOS technology are therefore crucial to manage the large volume of data generated by high-accuracy, high-resolution radar systems while keeping power consumption low, and are essential for the future of radar technology.
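For reference, the range and velocity resolution of an FMCW radar follow directly from standard relations (range resolution from sweep bandwidth, velocity resolution from observation time). The short sketch below works through the arithmetic; the chirp parameters are illustrative assumptions, not values taken from Table 1 or from the paper.

```python
# Minimal sketch of standard FMCW radar resolution arithmetic.
# The chirp parameters below are illustrative assumptions, not values from the paper.

C = 3.0e8  # speed of light, m/s

def range_resolution(bandwidth_hz: float) -> float:
    """Range resolution: dR = c / (2 * B)."""
    return C / (2.0 * bandwidth_hz)

def velocity_resolution(carrier_hz: float, frame_time_s: float) -> float:
    """Velocity resolution: dv = wavelength / (2 * T_frame)."""
    wavelength = C / carrier_hz
    return wavelength / (2.0 * frame_time_s)

if __name__ == "__main__":
    # Assumed example: 77 GHz carrier, 1 GHz sweep bandwidth, 20 ms observation time.
    print(f"Range resolution:    {range_resolution(1e9) * 100:.1f} cm")        # ~15 cm
    print(f"Velocity resolution: {velocity_resolution(77e9, 20e-3):.2f} m/s")  # ~0.10 m/s
```

Wider sweep bandwidth directly sharpens range resolution, which is one reason the move from 24 GHz to the 76-81 GHz band matters.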

Let’s now look at the requirements to fulfill the key metrics of automotive radars:

  • Range: Achieving long range requires a high-power transmitter, which allows the radar to detect objects at a greater distance. A high sampling rate is also necessary to accurately determine the location and velocity of objects. Solutions to achieve these requirements include stacking multiple devices to increase the output power, and using low-power, highly linear ADCs to efficiently process the radar signals.
  • Range and velocity resolution: Resolution refers to the ability of the radar to differentiate between objects that are close together. One approach to improving resolution is to move to higher frequency bands, as the wavelength of the radar signal decreases with increasing frequency, allowing for finer resolution. Achieving this requires higher fT/fmax technologies in the radar transceiver, which allow higher-frequency signal components and wider sweeps to be used, resulting in improved resolution.
  • Angle resolution: To improve angle resolution, one of the requirements is to limit thermal issues, as high temperatures can affect the performance of the radar’s electronic components. One solution is to improve the power amplifier (PA) and digital efficiency of the radar system: by increasing the efficiency of the PA and decreasing the digital power, less heat is generated, which helps to reduce thermal issues.

This section highlights the importance of having top-notch silicon technology with cutting-edge analog and millimeter wave RF capabilities in order to tackle the technological challenges posed by future automotive radars. Such technology must be able to handle a large amount of data while maintaining low power consumption.

V.          FD-SOI technology, do more with less energy

The current major technology shift in the radar segment is the adoption of CMOS technologies. The CMOS technologies currently available for automotive use, such as 40nm, 28nm, 22nm and 16nm, provide a high level of integration for digital circuits and exhibit very good performance in RF applications. Using CMOS technology to design automotive radar systems offers a number of advantages over traditional analog radar transceivers. One of the main benefits is that CMOS allows the integration of multiple components, such as the radar transceiver, signal processing circuits and control logic, into a single chip. This improves the resolution and integration density of the radar system, allowing for more accurate and reliable detection of objects. Additionally, CMOS technology is generally less expensive than traditional analog radar transceivers, which helps to lower the overall cost of the radar system. By having the radar system on a single chip, the SoC can be more compact and power efficient, which is important in automotive applications where space and power consumption both matter.

One way to evaluate the suitability of CMOS technology for automotive radar operating at 77 GHz is to look at the speed of the transistors. Table 2 below compares the fT and fmax of various state-of-the-art CMOS technologies. These values indicate how fast the transistors can operate and therefore how well they can handle high-frequency signals, making them a good indicator of the feasibility of using CMOS technology in automotive radar. The transit frequencies achieved today allow CMOS technologies to penetrate the automotive radar market traditionally dominated by BiCMOS processes, which still represent more than two thirds of the overall radar market. As shown in Table 2, among the CMOS processes currently used for radars, the 22nm FD-SOI technology clearly outperforms both FinFET and bulk technologies and is on par with state-of-the-art SiGe technologies. This technology is seen by several radar chip makers, such as Bosch and Arbe, as the state-of-the-art CMOS technology for radars, offering transistors with fT > 350 GHz and fmax > 390 GHz and several additional unique benefits that are described in the next sections.

Table 2: Transit frequency comparison of various silicon technologies used in radar applications [6], [7]

a.     Unique features of FD-SOI technology

FD-SOI is a well-known semiconductor technology allowing for improved performance and lower power consumption compared to traditional bulk silicon technology. FD-SOI technology allows for more flexibility in the design and manufacturing process, making it a popular choice for a wide range of applications, including automotive radar systems. FD-SOI transistors offer several unique features, such as the ability to operate at low voltage, to compensate for PVTA variations, to be quasi-insensitive to radiation and to exhibit very high intrinsic transistor speed, which makes FD-SOI an ideal choice over other RF-CMOS technology alternatives (Figure 8).

  i.     FD-SOI and Ultra Low Voltage

Thanks to its intrinsically low variability and body-bias techniques, FD-SOI is able to operate at very low supply voltages, down to 0.4V or below, making it an ideal technology for applications where power consumption is a critical concern. Lowering the supply voltage reduces dynamic power consumption quadratically, offering a unique advantage over other technologies, as it allows for more efficient power usage in applications where power is more of a challenge than performance.
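As a rough illustration of why low-voltage operation matters, dynamic switching power scales as C·V²·f. The minimal sketch below shows the saving from dropping VDD from 0.8 V to 0.5 V at constant frequency; all numbers are illustrative assumptions, not data from the paper.

```python
# Minimal sketch: dynamic power P = alpha * C * V^2 * f.
# All numbers below are illustrative assumptions, not data from the paper.

def dynamic_power(alpha: float, c_farads: float, vdd: float, freq_hz: float) -> float:
    """Dynamic switching power of a CMOS block."""
    return alpha * c_farads * vdd**2 * freq_hz

if __name__ == "__main__":
    alpha, c, f = 0.1, 1e-9, 500e6   # activity factor, switched capacitance, clock
    p_nominal = dynamic_power(alpha, c, 0.8, f)
    p_low_vdd = dynamic_power(alpha, c, 0.5, f)
    print(f"0.8 V: {p_nominal * 1e3:.1f} mW")
    print(f"0.5 V: {p_low_vdd * 1e3:.1f} mW "
          f"({100 * (1 - p_low_vdd / p_nominal):.0f}% lower)")
```

The quadratic dependence is why even a few hundred millivolts of supply reduction translates into a large cut in switching power.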

 ii.     FD-SOI and Soft Error Rate

FD-SOI is known for its high resistance to high-energy particles, which can cause soft errors in electronic devices. This is because in FD-SOI the active device region is separated from the substrate by a thin insulating layer called the buried oxide (BOX). The buried oxide layer reduces the susceptibility of the device to charges generated in the substrate, making it less likely to experience soft errors. This feature makes FD-SOI a suitable technology for safety-critical applications such as advanced driver assistance systems (ADAS) and autonomous driving (AD) systems.

iii.     FD-SOI and Body Biasing

Another key advantage of FD-SOI is body biasing, which allows control of the device threshold voltage after fabrication. Body bias is a very powerful knob in automotive applications and has already been widely deployed for PVTA compensation in many consumer and automotive products. By implementing body biasing in a product, significant reductions in process, voltage, temperature and aging variation can be attained, simplifying the task of product engineers in ensuring product specifications at a 1-ppm level [10].
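To first order, forward body bias in FD-SOI shifts the threshold voltage roughly linearly with the back-gate voltage, which is what makes post-fabrication compensation possible. The sketch below is a simplified first-order model; the nominal Vt, body factor and drift values are assumptions for illustration, not characterized figures from any specific process.

```python
# First-order sketch of threshold-voltage tuning with back-gate (body) bias in FD-SOI.
# All coefficients below are illustrative assumptions, not process-characterized values.

VT_NOMINAL = 0.35    # nominal threshold voltage at zero body bias, V (assumed)
BODY_FACTOR = 0.080  # Vt shift per volt of forward body bias, V/V (assumed)

def vt(vt0: float, body_bias_v: float) -> float:
    """Threshold voltage under forward body bias, first order."""
    return vt0 - BODY_FACTOR * body_bias_v

def bias_to_restore(vt_drifted: float) -> float:
    """Forward body bias needed to pull a drifted Vt back to nominal."""
    return (vt_drifted - VT_NOMINAL) / BODY_FACTOR

if __name__ == "__main__":
    # Assumed example: aging (or a slow process corner) has raised Vt by 30 mV.
    vt_aged = VT_NOMINAL + 0.030
    fbb = bias_to_restore(vt_aged)
    print(f"Aged Vt: {vt_aged:.3f} V -> apply FBB of {fbb:.2f} V "
          f"-> corrected Vt: {vt(vt_aged, fbb):.3f} V")
```

An adaptive body-bias loop essentially runs this correction continuously, using on-chip monitors instead of fixed numbers.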

Body biasing is a must-have for the next radar generation, as a key technique for improving digital performance, analog performance and reliability.

On the digital side, new adaptive body biasing (ABB) techniques have recently been developed, allowing a design to maintain a targeted operating frequency over a wide range of operating conditions such as temperature, manufacturing variability and supply voltage [12]. The architecture reduces the energy consumption of processors in 22nm FD-SOI technology by up to 30% and increases the operating frequency by up to 450% compared to operation without body biasing [20].

Body bias also has several benefits in analog circuits. One of the most significant is improved accuracy, which can be achieved by fine-tuning the performance of the circuit. Additionally, body bias can reduce power consumption by controlling the operating point and reducing leakage currents. Furthermore, body bias increases the voltage headroom of the circuit, allowing it to operate over a wider range of supply voltages. Body bias also improves the noise immunity of the circuit by reducing the threshold voltage and increasing the drain current. Lastly, body bias can be used to optimize the performance of the circuit for specific applications, such as low power, high speed, or high linearity. These benefits make body bias a valuable knob for trading off between performance characteristics such as power consumption, speed, and linearity of analog circuits [17].

On the reliability side, by controlling the operating point of transistors, body bias can reduce stress on the devices, leading to improved reliability. Additionally, body bias can compensate for temperature-related changes in the threshold voltage, making the circuit more temperature stable. Body bias can also reduce the threshold voltage and increase the drain current, improving the immunity to single event effects such as cosmic rays and alpha particles. Dynamic reliability drift compensation is also a promising area of research that holds the potential to produce fully resilient automotive systems [11]. Furthermore, body bias can reduce the degradation of the threshold voltage over time, leading to improved aging performance. Finally, body bias can compensate for variations in the threshold voltage due to manufacturing processes, making the circuit more robust. These benefits make body bias a valuable tool for improving the reliability of analog circuits.

iv.     FD-SOI and Analog/RF

As speed, noise, power, leakage, and variability targets become more and more difficult to meet, FD-SOI technology offers a solution by providing improved matching, gain, and parasitics in transistors, thus simplifying the design of analog and RF blocks. Combining as many analog/RF functions as possible into a single RF-CMOS silicon platform is becoming more vital for cost and power efficiency, but RF-CMOS platforms struggle as frequency increases, particularly in the mmWave spectrum. FinFET architectures have even more limitations, so SiGe-bipolar platforms are often used in this frequency range. FD-SOI, being a planar technology, does not have the limitations of 3D devices, with fT/fmax in the range of 350 GHz to 410 GHz reported, enabling full utilization of the mmWave spectrum and making FD-SOI RF-CMOS platforms a promising option for applications such as automotive radars.

v.     FD-SOI and performance booster

The Smart Cut technique has been used to transfer a biaxially strained silicon film, grown pseudomorphically on a fully relaxed SiGe buffer layer on a bulk Si donor wafer, to form a strained silicon-on-insulator (SSOI) wafer. SSOI is a natural extension of SOI technology, combining the advantages of FD-SOI with the carrier mobility enhancement of tensile-strained silicon. Biaxial tensile-strained Si and compressive SiGe channels, the latter formed by partially relaxing the tensile strain followed by local Ge condensation [18], boost the performance of n-channel and p-channel transistors respectively [19], benefiting both logic and RF as shown in Figure 15. The saturation current (Ion) of the n-channel FD-SOI device gains 28% with a 20% tensile-strained Si channel (Fig. 15.a), and a 16% Ion gain is obtained for 35% c-SiGe formed by Ge condensation without relaxing the tensile strain by Ar or Si implant prior to condensation (Fig. 15.b). Using partial tensile strain relaxation prior to forming the cSiGe channel yields a higher gain, greater than 20% with 25% cSiGe. Figures 15.c and 15.d show further gains when the channel width (W) is segmented into multiple narrow fingers, converting the biaxial strain into uniaxial strain, which can improve performance by up to 50% for the same strain level. Figure 16 shows the transconductance gain as a function of gate length; longer gate lengths are less limited by parasitic resistance, so their performance gain comes closer to the mobility gain. The geometry impact of the strained channel material on performance gain should therefore be considered to maximize the benefit of strained materials.


b.     FD-SOI benefits for radar system-on-chips

A system-on-chip (SoC) that integrates the analog front-end and digital signal processing is a logical choice for the next generation of radar technology, as it allows for efficient and extensive monitoring of multiple parameters and real-time evaluation during sensor operation, which is mandatory for eligibility in safety-critical applications. As shown in the block diagram in Figure 7, a fully integrated mmWave radar includes transmit (TX) and receive (RX) radio frequency (RF) components; analog and mixed-signal components such as clocking and analog-to-digital converters (ADCs); and digital components such as microcontrollers (MCUs) and digital signal processors (DSPs). Traditionally, radar systems were implemented with discrete components, which increased power consumption and overall system cost.

FD-SOI appears as an ideal technology and a natural evolution for automotive radars. FD-SOI combines the high mobility of an undoped channel, the smallest total capacitance for the same design rules, and low-power digital capabilities with the option of silicon-based mobility boosters. Together these greatly enhance performance for both digital and RF/mmWave functions, providing an ideal platform for developing fully integrated radar devices.

i.          Unrivalled energy efficiency

Power efficiency is a critical consideration for every automotive sensor application. Whether the vehicle is powered by fossil fuel, electricity, or a combination of the two, energy consumption and thermal management constraints compound the power-efficiency challenge. Radar sensor processing requires the utmost attention to performance-per-watt metrics, and in this domain FD-SOI exhibits a clear advantage over competing approaches. FD-SOI offers significant energy-efficiency improvements for MCUs and DSPs over other CMOS technologies: at a given technology node, FD-SOI can consume up to 40% less power or operate 30% faster than equivalent transistors. It allows performance to be boosted at a constant power level, making it ideal for low-power or thermally constrained applications. Additionally, sensors with a 20% smaller form factor can be designed in FD-SOI, leading to reduced costs for the overall system and package. Overall, FD-SOI technology is a cost-effective and power-efficient solution for sensor applications, making it well suited for low-power mmWave radar systems. The benefits can lead to improved imaging radar resolution, more features in the same footprint and reduced BoM cost.

ii.          State of the Art CMOS Power Amplifier

Power amplifiers (PAs) play a critical role in automotive radar systems, as they are responsible for amplifying the radar signal before it is transmitted by the antenna. When designing a PA for radar systems, several key factors must be taken into consideration to ensure optimal performance.

  • High efficiency in saturation: The PA should be able to operate in saturation mode, which is commonly used in Frequency Modulated Continuous Wave (FMCW) radar systems, to achieve high efficiency while still providing a strong radar signal. For Pulse Modulated Continuous Wave (PMCW) radar systems, the PA should be able to achieve high efficiency even when operating with some back off, as linearity is required in the PA to ensure accurate radar measurements.
  • High output power: The PA must be able to provide a high output power to ensure a strong radar signal and a long detection range.
  • Stability of performance over temperature: The PA should be able to maintain stable performance over a wide range of temperatures, as automotive radar systems are subjected to harsh environments.

FD-SOI technology brings several advantages when designing power amplifiers (PAs) for radar systems. One of the most significant benefits of FD-SOI technology is its high efficiency. Intrinsically, a PA in FD-SOI provides high output power due to the high breakdown voltage of the fully depleted transistors. In CMOS technologies, PA design is based on a cascode architecture to increase power-handling capability; the cascode stage acts as a buffer and provides additional gain, which improves the overall performance of the PA. One of the main limitations of the cascode architecture in bulk or FinFET technologies is the high drain-to-substrate voltage it sees due to the biasing of the P-substrate. In FD-SOI, each transistor is fully isolated, or “floating,” meaning there is no direct contact between the substrate and the active devices, as shown in Figure 9. This eliminates the need for substrate biasing, which reduces the drain-to-substrate voltage and improves the overall performance of the PA. Device stacking in FD-SOI can reach up to 50% higher PAE compared to other technologies, as it allows the power to be distributed among multiple devices, reducing losses due to heat. In the context of radar, higher performance can be achieved for both long-range radar (LRR) and short-range radar (SRR) using FD-SOI. Another important factor to consider is the stability of performance over temperature. When assisted with back-gate bias, the high thermal stability of FD-SOI technology can easily be maintained, enabling tight output power control over a wide range of temperatures [13]; a ±1 dB variation over a 145°C temperature change is reported in [14], a crucial result for automotive radar systems that are subjected to harsh environments. Finally, in terms of reliability, FD-SOI PAs also exhibit better reliability figures due to the absence of a lateral bipolar and the higher breakdown voltage, allowing a larger swing. Overall, FD-SOI technology provides an optimal solution for automotive radar power amplifiers, offering high efficiency, high linearity, high output power, high reliability, and stability over a wide range of temperatures.
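For context, power-added efficiency (PAE) is the standard figure of merit referenced above: PAE = (Pout - Pin)/Pdc. The short sketch below computes it from output power, input power and DC power; the example values are illustrative assumptions, not measured FD-SOI data.

```python
# Minimal sketch: power-added efficiency (PAE) of a power amplifier.
# PAE = (Pout - Pin) / Pdc. Example numbers are illustrative assumptions.

def dbm_to_watts(dbm: float) -> float:
    """Convert dBm to watts."""
    return 10 ** (dbm / 10.0) / 1000.0

def pae(pout_dbm: float, pin_dbm: float, pdc_watts: float) -> float:
    """Power-added efficiency as a fraction."""
    return (dbm_to_watts(pout_dbm) - dbm_to_watts(pin_dbm)) / pdc_watts

if __name__ == "__main__":
    # Assumed example: 17 dBm out, 5 dBm in, 230 mW DC consumption.
    print(f"PAE = {pae(17.0, 5.0, 0.230) * 100:.1f} %")
```

Because Pdc sits in the denominator, any technique that reduces DC dissipation for the same output swing, such as stacking isolated FD-SOI devices, shows up directly as higher PAE.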

 iii.          Low Power ADCs

Automotive radar systems prioritize performance at mmWave frequencies over the specific requirements of the ADC (Analog-to-Digital Converter). However, the ADC still plays an important role in the overall system performance. The ADC should have low power consumption, particularly at high sampling rates, such as in Pulse-Modulated Continuous-Wave (PMCW) radar systems. Additionally, the ADC should have high linearity to accurately convert the analog signal to digital, and should be able to compensate for variations in process, voltage, and temperature (PVT). These requirements are important to ensure that the radar system operates correctly and accurately in real-world automotive environments.

The automotive radar ADC in FD-SOI (Fully-Depleted Silicon-on-Insulator) technology has several advantages for low power consumption. The smaller switches in FD-SOI have lower Ron (resistance) and lower parasitics, which results in higher SNDR (Signal-to-Noise-and-Distortion Ratio) and better switch linearity. Additionally, the FD-SOI technology allows for the use of lower power supply voltages, which further reduces power consumption.

In [15], comparing ADC performance in 28nm bulk and 28nm FD-SOI technology, the FD-SOI design operates from a 1.1V supply as opposed to 1.0V/1.8V supplies for the bulk design, which leads to a significant reduction in power consumption from 76.4mW down to 19.8mW. The FD-SOI design also achieves a higher SNDR of 60.7 dB at the Nyquist frequency, compared to 57.2 dB for the bulk technology. Overall, the use of FD-SOI technology in automotive radar ADCs results in lower power consumption and improved performance.
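One way to compare such ADCs quantitatively is the Walden figure of merit, FoM = P / (2^ENOB · fs). The sketch below applies it to the numbers quoted above, assuming the 600 MS/s sampling rate stated in the title of [15] applies to both designs; that shared rate is an assumption made here for illustration.

```python
# Minimal sketch: Walden figure of merit for ADC comparison.
# FoM = P / (2**ENOB * fs), with ENOB = (SNDR - 1.76) / 6.02.
# The 600 MS/s rate is taken from the title of [15] and assumed for both designs.

def enob(sndr_db: float) -> float:
    """Effective number of bits from SNDR."""
    return (sndr_db - 1.76) / 6.02

def walden_fom_fj(power_w: float, sndr_db: float, fs_hz: float) -> float:
    """Walden FoM in femtojoules per conversion step."""
    return power_w / (2 ** enob(sndr_db) * fs_hz) * 1e15

if __name__ == "__main__":
    fs = 600e6
    print(f"28nm bulk:   {walden_fom_fj(76.4e-3, 57.2, fs):.0f} fJ/conv-step")
    print(f"28nm FD-SOI: {walden_fom_fj(19.8e-3, 60.7, fs):.0f} fJ/conv-step")
```

Under that assumption the FD-SOI design improves the energy per conversion step by several times, which is what the raw power and SNDR numbers already suggest.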

c.     FD-SOI ideally positioned for radar applications

FD-SOI technology is ideally positioned for radar applications due to its advantages in system cost, power efficiency, and radar performance. Compared to alternative technologies such as SiGe or planar bulk CMOS, as shown in Figure 13, FD-SOI offers higher integration capability, a better receiver noise figure and also built-in aging and temperature compensation solutions that can easily be implemented using body-bias techniques. On top of that, the technology exceeds the mmWave automotive radar requirements in terms of output power and power amplifier efficiency. In a few words, FD-SOI is well suited for cost-sensitive automotive radar applications requiring significant processing and RF mmWave capability at lower power consumption.

 VI.          FD-SOI and High Resistivity substrate

The CMOS mmWave era opens the door to revolutionary applications in the fields of ADAS and AD. FD-SOI technology is entering this field as a versatile and flexible solution, a digital technology with state-of-the-art RF/mmWave capabilities. The technology provides significant benefits in terms of the resolution, velocity, power consumption and cost required by the radar market.

A clear trend is the increasing differentiation between standard cost-effective sensors for driver assistance applications and high-performance sensors for autonomous driving. Future radar applications, such as 4D imaging, impose new levels of performance on both RF and logic devices. To fulfill all the requirements, additional improvements at the substrate level are mandatory. The high-resistivity option is seen as a new lever to further enhance the performance of RF devices (Fig.4). High-resistivity SOI substrates have been widely adopted in the smartphone market, being currently present in 100% of them. It is clearly a valuable option for next-generation automotive radar: combining FD-SOI with a high-resistivity substrate yields best-in-class passive losses, increasing efficiency and reducing floorplan area. The high-resistivity option in FD-SOI is seen as a major booster to reach ultimate mmWave performance, enabling a higher level of SoC integration with unrivalled RF-mmWave characteristics. It is important to highlight that SOITEC has designed a new substrate that avoids device-integration changes for existing 28nm and 22nm and future 18nm and 12/10nm technology nodes (Fig.5) [16].

VII.          Conclusion

In conclusion, FD-SOI is playing a crucial role in shaping the future of automotive radar. Its intrinsic key features make it a valuable technology for the automotive radar industry. FD-SOI enables the development of single-chip, high-performance and cost-effective radars, making it a crucial enabler of vehicle safety and autonomous driving. Its ability to increase computing power while maintaining energy efficiency and intelligence opens the door to disruptive innovations and SWaP optimizations. FD-SOI technology can definitively help drive a safe transition to CMOS technologies in the automotive radar industry.

VIII.          Graphs

Figure 1: Automotive Radar market per frequency
Figure 2: Automotive Radar RFIC volume by technology
Figure 4: High resistivity substrate
Figure 5: FD-SOI substrate roadmap
Figure 6: FD-SOI substrate and radar architecture
Figure 7: Single Chip radar architecture
Figure 8: FD-SOI key features
Figure 9: PA in FD-SOI
Figure 10: ADC in FD-SOI
Figure 11: Body Bias to improve digital, analog and reliability
Figure 12: FD-SOI 50% faster than Bulk, 10 times faster with ABB
Figure 13: FD-SOI ideally positioned for radar applications
Figure 14: FD-SOI value for automotive radar
Figure 15: Ion versus Ioff of (a) SOI vs. sSOI for n-FET, (b) SOI vs. cSiGeOI p-FET, (c) 35% cSiGeOI p-FET with different W, (d) 35% cSiGeOI p-FET Ion and µ gain versus W [Ref.a]
Figure 16: Peak transconductance enhancement of SSOI versus SOI

IX.          References

  • [1] Holger H. Meinel and Juergen Dickman, “Automotive Radar: From its origin to future directions”, Microwave Journal, 2013
  • [2] ee-News Automotive, “Arbe moves to production of 4D radar chipset”, Business News, 2022
  • [3] David Manners, “Bosch to use FD-SOI for automotive radar SoCs”, Electronics Weekly, 2021
  • [XX] K. Ramasubramanian, “Moving from legacy 24 GHz to state-of-the-art 77 GHz radar.” Texas Instrument, 2017.
  • [4] Acumen Research and Consulting, “Automotive Radar Market Report and Region Forecast, 2022 – 2030”, 2023
  • [5] Yole Development, “Status of the Radar Industry: Players, Applications and Technology Trends”, Market and Technology Report, 2020
  • [6] Christian Waldschmidt et al., “Automotive Radar—From First Efforts to Future Systems”, IEEE Journal of Microwaves, 2021
  • [7] Philipp Ritter, “Toward a fully integrated automotive radar system-on-chip in 22 nm FD-SOI CMOS” , International Journal of Microwave and Wireless Technologies, 2021
  • [8] Nobuyuki Sugii, “Ultralow-Power SOTB CMOS Technology Operating Down to 0.4 V”, Journal of Low Power Electronics and Applications
  • [9] Nobuyuki Sugii, “Ultralow-Power SOTB CMOS Technology Operating Down to 0.4 V”, Journal of Low Power Electronics and Applications
  • [10] P. Flatresse, “Process and design solutions for exploiting FD-SOI technology towards energy efficient SOCs”, International Symposium on Low Power Electronics and Design (ISLPED), 2014.
  • [11] P. Flatresse, “RBB & FBB in FD-SOI”, SOI Forum, Shanghai, 2017
  • [12] A. Bonzo et al. “A 0.021 mm² PVT-Aware Digital-Flow-Compatible Adaptive Back-Biasing Regulator with Scalable Drivers Achieving 450% Frequency Boosting and 30% Power Reduction in 22nm FD-SOI Technology.”, ISSCC 2021
  • [13] Florent Torres , Magali De Matos , Andreia Cathelin , Eric Kerhervé “A 31GHz 2-Stage Reconfigurable Balanced Power Amplifier with 32.6dB Power Gain, 25.5% PAEmax and 17.9dBm Psat in 28nm FD-SOI CMOS,” RFIC 2018.
  • [14] Venkat Ramasubramanian, “22FDX: An Optimal Technology for Automotive and mmWave Designs”, Solid State Technology Webinar, Dec. 13, 2018
  • [15] Ashish Kumar, Chandrajit Debnath, Pratap Narayan Singh, Vivek Bhatia, Shivani Chaudhary, Vigyan Jain, Stephane Le Tual, Rakesh Malik, “A 0.065mm2 19.8mW single channel calibration-free 12b 600MS/s ADC in 28nm UTBB FDSOI using FBB”, ESSCIRC Conference 2016.
  • [16] Bertrand et al, “Development Of High Resistivity FD-SOI Substrates for mmWave Applications” ECS Transactions, 2022
  • [17] Ragonese, E. Design Techniques for Low-Voltage RF/mm-Wave Circuits in Nanometer CMOS Technologies. Appl. Sci. 2022, 12, 2013.
  • [18] C, Sun et al, “Enabling UTBB Strained SOI Platform for Co-integration of Logic and RF: Implant-Induced Strain Relaxation and Comb-like Device Architecture”, VLSI 2020
  • [19] B. De Salvo et al, “A mobility enhancement strategy for sub 14nm power-efficient FDSOI technologies” IEDM 2014
  • [20] Y. Mousry et al, “A 0.021mm2 PVT-Aware Digital-Flow-Compatible Adaptive Back-Biasing Regulator with Scalable Drivers Achieving 450% Frequency Boosting and 30% Power Reduction in 22nm FDSOI Technology”, ISSCC 2021
  • [21] Ned Cahoon, Alvin Joseph, Chaojiang Li, Anirban Bandyopadhyay “WS-01 – Recent advances in SiGe BiCMOS: technologies, modelling and circuits for 5G, radar and imaging”, European Microwave Week (EuMW) 2019
  • [22] Skyworks white paper “5G Millimeter Wave Frequencies And Mobile Networks -A Technology Whitepaper on Key Features and Challenges” 2019
  • [23] Farzad Inanlou, Sudipto Bose “mmWave Foundry of Choice: Accelerated and Simple Automotive Radar Design” 2020
  • [24] S. Li, M. Cui, X. Xu, L. Szilagyi, C. Carta, W. Finger, F. Ellinger, “An 80 GHz Power Amplifier with 17.4 dBm Output Power and 18 % PAE in 22 nm FD-SOI CMOS for Binary-Phase Modulated Radars,” Asia-Pacific Microwave Conference, Dec. 2020, Hong Kong
  • [25] L. Gao, E. Wagner and G. M. Rebeiz, “Design of E-and W-Band Low-Noise Amplifiers in 22-nm CMOS FD-SOI,” in IEEE Transactions on Microwave Theory and Techniques, vol. 68, no. 1, pp. 132-143, Jan. 2020.
  • [26] M. Sadegh Dadash, S. Bonen, U. Alakusu, D. Harame and S. P. Voinigescu, “DC-170 GHz Characterization of 22nm FDSOI Technology for Radar Sensor Applications”, 13th European Microwave Integrated Circuits Conference (EuMIC), pp. 158-161, 2018.
  • [27] Vadim Budnyaev and Valeriy Vertegel, “A SiGe 3-stage LNA for automotive radar applications from 76GHz to 81GHz”, ITM Web of Conferences 30, 2019

X.          Acronyms

  • ABB: Adaptive Body Biasing
  • ADAS: Advanced Driver-Assistance System
  • ADC: Analog-to-Digital Converters
  • BOM: bill of materials
  • LRR: Long Range Radar
  • MCU: Microcontroller unit
  • DSP: Digital Signal Processor
  • MRR: Medium Range Radar
  • NCAP: New Car Assessment Programs
  • PPAC: Power, Performance, Area and Cost
  • PVTA: Process, Voltage, Temperature, Aging
  • SoC: System on Chip
  • SRR: Short Range Radar
  • SWaP: Size, Weight and Power

Also Read:

Soitec is Engineering the Future of the Semiconductor Industry

Semiconductors and Mobile Communications: 5G and Beyond



Podcast EP187: The Drivers and Dynamics of the Worldwide Semiconductor Supply Chain with Supplyframe’s Richard Barnett
by Daniel Nenni on 10-13-2023 at 10:00 am

Dan is joined by Richard Barnett, chief marketing officer and SaaS sales leader at Supplyframe. With more than 25 years of leadership experience in strategic marketing, sales and product management, Richard is recognized as a thought leader on supply chain and strategic sourcing transformation as well as digital marketing engagement with design engineers.

Richard discusses the current state of the worldwide semiconductor supply chain and the drivers for change with Dan. He reviews the forces at play that shape the supply chain, their complex interrelationships and the impact and risks associated with shifting supply to different regions.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.



CEO Interview: Islam Nashaat of Master Micro
by Daniel Nenni on 10-13-2023 at 6:00 am


Eng. Islam Nashaat received his B.Sc. and M.Sc. degrees from Ain Shams University, Cairo, Egypt, in 2010 and 2017, respectively. He joined Si-Vision as an Analog Physical Design Engineer in 2010, initiated the company’s CAD team in 2013, and became CAD and Physical Design Team Lead in 2016 after the company’s flagship product was acquired by Synopsys. In 2020, he joined Goodix Egypt as Physical Design Manager. He co-founded Master Micro in 2020 and joined it as full-time CEO in 2021. During his professional career he participated in and managed the delivery of tens of silicon-verified IC chips and IPs. In addition, he developed and managed the development of many automation scripts covering the analog front-end and back-end flows, with several publications.

Tell us about your company

Master Micro is a disruptive EDA startup in the field of analog/mixed-signal design automation. Founded in 2020 by Eng. Islam Nashaat (CEO) and Dr. Hesham Omran (CTO), our mission is to revolutionize the full-custom chip design methodology to keep up with the rapid advancements in technology.

Despite being a relatively young company, we have made significant strides. We successfully launched our first product, the Analog Designer’s Toolbox (ADT) in 2022, which is the culmination of several years of research. Since then, we have conducted product demonstrations for numerous companies worldwide. Many of these companies have become paying customers, while others are currently evaluating our offerings. Achieving this level of interest and adoption within such a short time frame is truly inspiring and motivates us to continue pushing boundaries in the EDA industry. In 2023, we launched our second product, the Sizing Assistant (SA), which is seamlessly integrated in the schematic editors to make the device sizing process fast, intuitive, and optimized.

What problems are you solving?

The rapid advancement of technology has led to increasingly complex and costly chip designs. This coincides with an expected shortage of talent in the semiconductor industry in the coming years, presenting a significant challenge. The existing analog design process, which is outdated and iterative, is struggling to keep up with the complexities of new technologies. Its lack of a systematic design approach results in ad-hoc methodologies that heavily rely on experts and lead to suboptimal designs.

That’s where our role comes into play. We are dedicated to developing the next generation of analog design automation tools that address the challenges of full-custom design productivity and the scarcity of analog design expertise. By leveraging innovative circuit-solving techniques and a designer-oriented, user-friendly interface, we aim to make the analog design process fast, optimized, and intuitive.

What application areas are your strongest?

With a combined experience of over 50 years in front-end and back-end analog/mixed-signal design, software, and CAD engineering, our team possesses a deep understanding of the intricacies of the field. Currently, our primary focus lies in the analog design front-end flow, specifically emphasizing the crucial analog building blocks utilized in any analog subsystem.

We are proud to offer two innovative products that cater to the needs of analog designers. The first is the Analog Designer’s Toolbox (ADT), a powerful tool that revolutionizes circuit-level design, visualization, and optimization. ADT enables users to paint the design space using millions of correct-by-construction design points generated within seconds, resulting in an up to 100x increase in design productivity.

Our second product, the Sizing Assistant (SA), operates at the device level. SA gives designers the power to define the properties of transistors using their electrical parameters (e.g., gm/ID, fT, mismatch, Ron, etc.), and then receive valid device sizing interactively. This tool streamlines the device-level sizing process and facilitates efficient decision-making. Both of our products are designed to integrate seamlessly within existing design environments, ensuring a smooth and hassle-free experience for analog designers.
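For readers unfamiliar with this style of flow, the sketch below illustrates the general gm/ID sizing idea that such tools build on: pick an inversion level (gm/ID), derive the bias current from a transconductance target, and look up the device width from a pre-characterized current-density table. It is a generic illustration with made-up lookup data and function names, not Master Micro's implementation or API.

```python
# Generic sketch of gm/ID-based transistor sizing (illustrative only; the lookup
# table below is made up and does not represent any real process or the SA tool).
import math

# Pre-characterized current density ID/W [A/um] vs. gm/ID [1/V] at fixed L (assumed data).
GMID_TO_ID_DENSITY = {5.0: 40e-6, 10.0: 12e-6, 15.0: 4e-6, 20.0: 1.2e-6}

def size_transistor(gbw_hz: float, c_load_f: float, gm_over_id: float) -> dict:
    """Size an input device for a target gain-bandwidth with a chosen gm/ID."""
    gm = 2.0 * math.pi * gbw_hz * c_load_f          # required transconductance
    i_d = gm / gm_over_id                           # bias current from gm/ID
    width_um = i_d / GMID_TO_ID_DENSITY[gm_over_id] # width from current-density LUT
    return {"gm_S": gm, "ID_A": i_d, "W_um": width_um}

if __name__ == "__main__":
    # Assumed spec: 100 MHz GBW into 1 pF, moderate inversion (gm/ID = 15 1/V).
    result = size_transistor(100e6, 1e-12, 15.0)
    print(f"gm = {result['gm_S']*1e3:.3f} mS, "
          f"ID = {result['ID_A']*1e6:.1f} uA, W = {result['W_um']:.1f} um")
```

The appeal of this flow is that the design intent (bandwidth, inversion level) stays explicit, while the technology-specific detail is confined to pre-simulated lookup tables.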

What keeps your customers up at night?

Full-custom design is a bottleneck that dominates the time and cost of many chip design projects. In addition, the design quality is highly dependent on the designer’s expertise, and there can be a large unseen room for improving power, performance, and area.

Our tools provide a significant productivity boost to analog design teams, resulting in 10x-100x time savings. The designers can quickly visualize the design space and pick global optimal design points in a systematic and intuitive way that is independent of the designer’s level of expertise.

Our flagship product, ADT, has garnered tremendous excitement among visionary analog design leaders who are eager to explore and integrate our tools into their design flows. The feedback we got from our customers is that ADT is indeed taking analog circuit design to a new level, as it provides analog designers with profound insights into the analog design process and guides them to understand, optimize and improve their designs. The level of enthusiasm and ownership our visionary customers feel towards our tools is remarkable. Their passion for our tools is evident as they actively contribute suggestions for new features and functionalities that deepen their engagement and productivity with the tools.

What does the competitive landscape look like and how do you differentiate?

As you’re aware, the EDA market is highly specialized, with only a handful of established players. It’s a formidable challenge to stand out in this space. Traditional analog optimization tools have been around for many years, but they are not widely accepted in the analog design community. We differentiate ourselves by offering a designer-oriented design flow that leverages the powerful combination of the gm/ID methodology, precomputed lookup tables, and custom vectorized solvers. This provides distinct advantages to our tools in terms of speed, accuracy, and designer-oriented visualization.

Our approach not only empowers designers with unique capabilities but also ensures that their intuition remains intact through exceptional user interface and visualization features. This makes our tools a complementary and seamless integration within the familiar flow used by analog designers. We are proud to offer a solution that combines cutting-edge techniques with a user-friendly experience, addressing the specific needs of analog designers.

What new features/technology are you working on?

Before delving into our new features, it’s crucial to emphasize that our team comprises experienced designers who possess a thorough understanding of the existing gaps in the design process. Moreover, we maintain regular communication with our customers, allowing us to gain valuable insights into their challenges and pain points. With that in mind, we are excited that we will soon introduce a cutting-edge tool specifically designed to address the cumbersome analog/mixed-signal design porting flow. This tool is particularly beneficial for companies that frequently port designs across different technologies. Furthermore, we are actively harnessing the power of emerging technologies such as Artificial Intelligence and Machine Learning to enhance our tools further. By leveraging these advancements, we aim to provide not only functional solutions but also an exceptional customer experience that leaves a lasting impression.

How do customers normally engage with your company?

Our customer engagement spans across multiple channels, allowing us to connect with a wide range of clients. We actively reach out to customers through our extensive network in the industry. In addition, we engage with analog designers at renowned conferences and trade shows that we attend, and we also collaborate with our trusted distributors. Another important avenue for us is our strong presence on LinkedIn, where we have amassed over 12,000 followers, making it a powerful channel for communication.

Furthermore, our website https://adt.master-micro.com, serves as a hub for customers to interact with us directly. Here, we offer a comprehensive range of services, starting with personalized demos for design teams, and a support portal for customers. We offer a free evaluation period for prospective customers, allowing design teams to fully explore and familiarize themselves with the capabilities of our tools.

On the other side, we also reach out to universities and IEEE societies to educate the next generations of designers about new analog design methodologies, and empower professors, researchers, and students to adopt our tools in their research projects and teaching activities.

Also Read:

CEO Interview: Sanjeev Kumar – Co-Founder & Mentor of Logic Fruit Technologies

CEO Interview: Stephen Rothrock of ATREG

CEO Interview: Dr. Tung-chieh Chen of Maxeda



TSMC N3E is ready for designs, thanks to IP from Synopsys
by Daniel Payne on 10-12-2023 at 10:00 am


TSMC has been offering foundry services since 1987, and their first 3nm node was called N3 and debuted in 2022; now they have an enhanced 3nm node dubbed N3E that has launched.  Every new node then requires IP that is carefully designed, characterized and validated in silicon to ensure that the IP specifications are being met and can be safely used in SoC or multi-die system designs. This new IP must cover a wide range of functions, like interface, memory and logic. Synopsys has a large IP team that has risen to the challenge by creating new IP for the TSMC N3E node and achieving first-pass silicon success.

Chiplet Interconnect

Systems made up of heterogeneous chiplets require die-to-die communication, and that’s where the UCIe standard comes into play. Synopsys is a Contributor member of the UCIe Consortium, and they offer IP for both a UCIe Controller and a UCIe PHY in the TSMC N3E node.

The UCIe PHY IP had first silicon results in August 2023, showing data rates of 16Gbps, scalable to 24Gbps per channel. Earlier this year, Intel unveiled the world’s first Intel-Synopsys UCIe interoperability test chip demo at Intel Innovation. The interoperability was between Synopsys UCIe PHY IP on the TSMC N3E process and Intel PHY IP on Intel 3 technology.
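To put the per-lane rate in perspective, aggregate die-to-die bandwidth scales with lane count and the number of modules placed along the die edge. The back-of-the-envelope sketch below assumes a 64-lane advanced-package UCIe module; the lane and module counts are illustrative assumptions, not details of the Synopsys IP configuration.

```python
# Back-of-the-envelope sketch of aggregate UCIe die-to-die bandwidth.
# Lane and module counts are illustrative assumptions, not Synopsys IP details.

def ucie_bandwidth_gbps(rate_gbps_per_lane: float, lanes_per_module: int,
                        modules: int) -> float:
    """Raw aggregate bandwidth in Gb/s (ignoring protocol overhead)."""
    return rate_gbps_per_lane * lanes_per_module * modules

if __name__ == "__main__":
    # Assumed: one 64-lane advanced-package module, first at 16 Gb/s, then at 24 Gb/s.
    for rate in (16.0, 24.0):
        bw = ucie_bandwidth_gbps(rate, 64, 1)
        print(f"{rate:.0f} Gb/s/lane x 64 lanes = {bw:.0f} Gb/s "
              f"(~{bw / 8:.0f} GB/s raw)")
```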

Industry’s Broadest Interface IP Portfolio on TSMC N3E

The IEEE approved the 802.3 standard for Ethernet back in 1983, and the standard has been extended many times since; the Synopsys 224G Ethernet PHY IP had first silicon success in August 2023. The eye diagram shows the 224Gbps PAM-4 signaling, with jitter levels surpassing both the IEEE 802.3 and OIF standard specifications.
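As a quick refresher on why PAM-4 matters at these rates: each symbol carries two bits, so 224 Gb/s corresponds to a 112 GBaud symbol rate. The sketch below shows the arithmetic and a bit-pair-to-level mapping; the Gray-coded mapping used here is a common convention, shown for illustration rather than quoted from the Synopsys IP documentation.

```python
# Minimal sketch: PAM-4 carries 2 bits per symbol, so line rate = 2 x baud rate.
# The Gray-coded level mapping below is a common convention, shown for illustration.

PAM4_LEVELS = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def pam4_symbols(bits):
    """Map a flat bit sequence onto PAM-4 amplitude levels, two bits per symbol."""
    pairs = zip(bits[0::2], bits[1::2])
    return [PAM4_LEVELS[p] for p in pairs]

if __name__ == "__main__":
    line_rate_gbps = 224.0
    print(f"Baud rate: {line_rate_gbps / 2:.0f} GBaud")            # 112 GBaud
    print(f"Symbols:   {pam4_symbols([0, 0, 1, 1, 1, 0, 0, 1])}")  # [-3, +1, +3, -1]
```

Packing two bits per symbol halves the required symbol rate, but the four closely spaced levels are exactly why eye margins and jitter get so much scrutiny.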

Supporting standards like PCI Express 6.0, 400G/800G Ethernet, CCIX, CXL 2.0/3.0, JESD204 and CPRI there is the Synopsys Multi-Protocol 112G PHY IP. Engineers can combine this PHY IP with a MAC and PCS to build a 200G/400G/800G Ethernet block.

SDRAM and memory modules can use the Synopsys DDR5 PHY IP on TSMC N3E to achieve transfer rates up to 8400Mbps. You can see the wide open eye and clear margins for this IP operating at speed.

The PCI Express standard started out in 2003 and has been continually updated to meet the growing demands of cloud computing, storage, and AI. PCIe 5.0 is now supported using the Synopsys PCIe 5.0 PHY IP. First silicon on TSMC N3E showed operating speeds of 32 GT/s, and the Synopsys PCIe 5.0 PHY IP is listed on the PCI-SIG Integrators list.

I’ve been using USB-C on my MacBook Pro, iPad Pro and Android phone for years now. Synopsys now supports USB-C 3.2 and DisplayPort 1.4 PHY IP in the latest TSMC process. With this IP users can connect up to 8K Ultra High-Definition displays.
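As a rough sanity check on the 8K claim, an uncompressed 8K60 stream needs far more than DisplayPort 1.4's raw link capacity, which is why Display Stream Compression (DSC) is normally involved. The sketch below runs the approximate numbers; blanking overhead is ignored and the link figures are the commonly quoted HBR3 values, used here as assumptions for illustration.

```python
# Rough sanity check: 8K60 video bandwidth vs. DisplayPort 1.4 link capacity.
# Blanking intervals are ignored and all numbers are approximate.

def video_bandwidth_gbps(h: int, v: int, fps: float, bits_per_pixel: int) -> float:
    """Uncompressed video payload bandwidth in Gb/s."""
    return h * v * fps * bits_per_pixel / 1e9

if __name__ == "__main__":
    payload = video_bandwidth_gbps(7680, 4320, 60, 24)  # ~48 Gb/s uncompressed
    # DP 1.4 HBR3: 8.1 Gb/s/lane x 4 lanes, 8b/10b coding -> ~25.9 Gb/s effective.
    dp14_effective = 8.1 * 4 * 0.8
    print(f"8K60 uncompressed: {payload:.1f} Gb/s")
    print(f"DP 1.4 effective:  {dp14_effective:.1f} Gb/s")
    print(f"Needed compression ratio: ~{payload / dp14_effective:.1f}:1 (hence DSC)")
```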

Smartphone companies standardized on the MIPI protocol years ago as an efficient way to connect cameras, and the Synopsys MIPI C-PHY/D-PHY IP can operate at 6.5Gb/s per lane and 6.5Gsps per trio. The C-PHY IP supports v2.0 and the D-PHY IP supports v2.1.

The latest low-power synchronous DRAM spec is LPDDR5X, supporting data transfer speeds up to 8533Mbps, a 33% improvement over LPDDR5 memory. The Synopsys LPDDR5X/5/4X Controller IP is silicon-proven and ready to be designed with.

Logic Libraries and Memories

Up to half the area of an SoC can be memories, so the good news is that the Synopsys Foundation IP allows you to add memories and logic library cells quickly into a new design. Here are the test chip diagrams from Synopsys on the TSMC N3E node for memories and logic libraries.

Summary

TSMC and Synopsys have collaborated quite well together over the years, and that partnership now extends to the N3E node where SoC designers can find silicon-tested IP for interfaces, memories and logic. Power, performance and yield are looking attractive for N3E, so the technology is ready for your most demanding designs. Starting a design with N3E also provides you a quicker path to migrate to the N3P process.

Instead of creating all of your own IP from scratch, which would lengthen your schedule, require more engineering resources and increase risk, why not take a look at the proven and broad Synopsys IP portfolio for N3E.

Related Blogs


Synopsys Panel Updates on the State of Multi-Die Systems

Synopsys Panel Updates on the State of Multi-Die Systems
by Bernard Murphy on 10-12-2023 at 6:00 am

Synopsys recently hosted a cross-industry panel on the state of multi-die systems which I found interesting not least for its relevance to the rapid acceleration in AI-centric hardware. More on that below. Panelists, all with significant roles in multi-die systems, were Shekhar Kapoor (Senior Director of Product Management, Synopsys), Cheolmin Park (Corporate VP, Samsung), Lalitha Immaneni (VP Architecture, Design and Technology Solutions, Intel), Michael Schaffert (Senior VP, Bosch), and Murat Becer (VP R&D, Ansys). The panel was moderated by Marco Chiappetta (Co-Founder and Principal Analyst, HotTech Vision and Analysis).

A Big Demand Driver

It is common under this heading to roll out all the usual suspects (HPC, automotive, etc.), but that list arguably sells short the biggest underlying factor: the current scramble for dominance in everything LLM and generative AI. Large language models offer new levels of SaaS services in search, document creation and other capabilities, with major competitive advantages to whoever gets this right first. On mobile devices and in the car, superior natural-language-based control and feedback will make existing voice-based options look primitive by comparison. Meanwhile, generative methods for creating new images using diffusion and Poisson flow models can pump out spectacular graphics from a text prompt or a photograph, complemented by image libraries. As a consumer draw, this could prove to be the next big thing for future phone releases.

While transformer-based AI presents a huge $$$ opportunity, it comes with challenges. The technologies that make such methods possible are already proven in the cloud and emerging at the edge, yet they are famously memory hungry. Production LLMs run anywhere from billions to trillions of parameters, which must be loaded into the transformer. Demand for in-process workspace is equally high; diffusion-based imaging progressively adds noise to a full image and then works its way back to a modified image, again through transformer-based platforms.

Apart from an initial load, none of these processes can afford the overhead of interacting with external DRAM. Latencies would be unacceptable, and the power demand would drain a phone battery or blow the power budget for a datacenter. All the memory needs to be near, very near, the compute. One solution is to stack SRAM on top of the accelerator (as AMD and now Intel have demonstrated for their server chips). High bandwidth memory in-package adds another, somewhat slower option, though still not as slow as off-chip DRAM.

All of which requires multi-die systems. So where are we at in making that option production-ready?

Views on where we are at

I heard a lot of enthusiasm for growth in this domain, in adoption, applications and tooling. Intel, AMD, Qualcomm and Samsung are all clearly very active in this space. The Apple M2 Ultra is known to be a dual-die design, and AWS Graviton 3 a multi-die system. I am sure there are plenty of other examples among the big systems and semiconductor houses. I get the impression that die are still sourced predominantly internally (except perhaps for HBM stacks) and assembled in foundry packaging technologies from TSMC, Samsung or Intel. However, Tenstorrent just announced that they have chosen Samsung to manufacture their next-generation AI design as a chiplet (a die suitable to be used in a multi-die system), so this space is already inching toward broader die sourcing.

All panelists were naturally enthusiastic about the general direction, and clearly technologies and tools are evolving fast, which accounts for the buzz. Lalitha grounded that enthusiasm by noting that the way multi-die systems are currently being architected and designed is still in its infancy, not yet ready to launch an extensive reusable market for die. That doesn’t surprise me. Technology of this complexity seems like it should mature first in tight partnerships between system designers, foundries and EDA companies, maybe over several years, before it can extend to a larger audience.

I’m sure that foundries, system builders and EDA companies aren’t showing all their cards and may be further along than they choose to advertise. I look forward to hearing more. You can watch the panel discussion HERE.


Placement and Clocks for HPC

Placement and Clocks for HPC
by Paul McLellan on 10-11-2023 at 10:00 am

You are probably familiar with the acronym PPA, which stands for Power/Performance/Area. Sometimes it is PPAC, where C is for cost, since there is more to cost than just area. For example, did you know that adding an additional metal layer to a chip dramatically increases the cost, sometimes by millions of dollars? It requires a minimum of two masks (interconnect and vias) plus all the additional associated process steps. And interconnect layers normally come in pairs, vertical and horizontal, so usually it is four masks.

There are many inputs into optimizing PPAC, and a significant one is designing the clock tree. The clock can consume a lot of the power, and a lot of the interconnect, and obviously affects performance. The process of designing the clock tree is usually called Clock Tree Synthesis, usually abbreviated to CTS. Siemens EDA recently published a white paper Placement and CTS Techniques for High-Performance Computing Designs.

One challenge EDA tools face is that you only get the true quality of results when the design is finished. In practice, this means that tools either need to use pessimism to guard-band the results or improve accuracy through much better correlation between the tool in use and the final results.

The white paper discusses how to solve the placement and clock tree challenges in HPC designs using the Aprisa digital implementation solution, as these steps are fundamental to achieving the desired performance metrics during place and route. While most other place-and-route tools require waiting until post-route optimization to discover the true quality of results, Aprisa offers excellent correlation throughout the place-and-route flow, which allows designers to gain confidence in the results much earlier, at the placement and clock tree synthesis (CTS) stages. Aprisa is ideally suited to help designers deliver HPC IC innovations faster.

Aprisa is the Siemens digital implementation solution for hierarchical and block-level designs. Under the hood, it has a detail-route-centric architecture to reduce time to design closure, partly by pulling the implications of downstream steps forward into early design decisions instead of waiting until the design is completed to find problems that were introduced earlier. A key to a modern implementation flow is consistent timing, extraction, DRC, and more across the whole flow.

Aprisa delivers optimal performance, power and area (PPA) for advanced nodes, and it has complete support for design methodologies and optimization to achieve both lowest power and highest performance.

The white paper uses an example design, an Arm Cortex-A76 in 5nm running at 2.75 GHz and using 12 layers of metal for interconnect. I don’t have space here to go into the design in detail; you’ll have to read the white paper for a deeper dive.

The focus of the exercise was to analyze using 10 layers of metal versus 12 layers of metal (as noted above, interconnect layers usually come in pairs). The analysis revealed that, for the 10-layer option, the frequency would have to be lowered by 9 percent to achieve the desired power target, but it resulted in significant cost savings for the entire project. Obviously, Aprisa cannot decide for you whether a 9% performance hit is worth the cost reduction.

The focus of the white paper is clock tree synthesis (CTS), one of the big challenges in any HPC design. Aprisa supports useful skew, starting at placement optimization and continuing all the way to route optimization, to make certain that challenging design frequency targets are met. A strength of Aprisa CTS technology is that the push and pull offsets generated during placement optimization are realized during clock tree implementation.

Clocks generally drive flip-flops, and one optimization that modern cell libraries include is multi-bit flip-flops sharing a common clock. Aprisa has the capability to merge or demerge multi-bit flip-flops and clone or declone integrated clock gates, based on timing, the physical location of the cells and the criticality of the paths.

Post-CTS optimization in Aprisa includes congestion recovery, which relieves the congestion created during clock tree synthesis. Congestion recovery is a clock-aware approach that does not degrade timing and so reduces the iterations back to placement optimization that would otherwise be required.

Aprisa supports different types of clock tree structures such as H-tree, multi-point CTS and custom mesh. Multi-point is the most popular approach for HPC designs and is the one described in the white paper.

There is a lot more to an implementation flow than synthesizing the clock tree, of course! But CTS is a critical stage, especially for demanding HPC designs, because there is so little room for deviation to achieve the desired performance and meet PPA requirements.

Aprisa is certified by the top foundries for the most advanced nodes. It ensures all PPA metrics are carefully balanced for HPC design implementation through high-quality clock trees, not to mention placement and routing technologies that reduce timing closure friction between block and top level during assembly.

Once again, the white paper can be downloaded here.

Also Read:

AI for the design of Custom, Analog Mixed-Signal ICs

Optimizing Shift-Left Physical Verification Flows with Calibre

Reducing Electronic Systems Design Complexity with AI


Long-standing Roadblock to Viable L4/L5 Autonomous Driving and Generative AI Inference at the Edge

Long-standing Roadblock to Viable L4/L5 Autonomous Driving and Generative AI Inference at the Edge
by Lauro Rizzatti on 10-11-2023 at 6:00 am

Table I

Two recent software-based algorithmic technologies –– autonomous driving (ADAS/AD) and generative AI (GenAI) –– are keeping the semiconductor engineering community up at night.

While ADAS at Level 2 and Level 3 is on track, AD at Levels 4 and 5 is far from reality, causing a drop in venture capital enthusiasm and funding. Today, GenAI gets the attention, and VCs eagerly invest billions of dollars.

Both technologies are based on modern, complex algorithms. The processing of their training and inference shares a few attributes, some critical, others important but not essential; see Table I.

Table I caption: Algorithm training and inference share some but not all critical attributes. Source: VSORA

The remarkable software progress in these technologies has so far not been matched by advances in algorithmic hardware to accelerate their execution. For example, state-of-the-art algorithmic processors do not have the performance to answer ChatGPT-4 queries in one or two seconds at a cost of 2¢ per query, the benchmark established by Google search, or to process the massive data collected by AD sensors in less than 20 milliseconds.

That is, until French startup VSORA invested brainpower to address the memory bottleneck known as the memory wall.

The Memory Wall

The memory wall of the CPU was first described by Wulf and McKee in 1994. Ever since, memory accesses have been the bottleneck of computing performance. Advances in processor performance have not been mirrored by progress in memory access, forcing processors to wait ever longer for data delivered by memories. In the end, processor utilization drops far below 100%.

To solve the problem, the semiconductor industry created a hierarchical memory structure with multiple levels of cache near the processor to reduce traffic with the slower main and external memories.

The performance of AD and GenAI processors depends more heavily on wide memory bandwidth than that of other types of computing devices.

VSORA, founded in 2015 to target 5G applications, invented a patented architecture that collapses the hierarchical memory structure into a large, high-bandwidth, tightly coupled memory (TCM) accessed in one clock cycle.

From the perspective of the processor cores, the TCM looks and acts like a sea of registers, on the order of megabytes versus the kilobytes of actual physical registers. The ability to access any memory cell in the TCM in one cycle yields high execution speed, low latency and low power consumption, and it requires less silicon area. Loading new data from external memory into the TCM while the current data is processed does not affect system throughput. By design, the architecture allows for 80+% utilization of the processing units. A system designer can still add cache and scratchpad memory if desired. See figure 1.

Figure 1 caption: The traditional hierarchical memory structure is dense and complicated. VSORA’s approach is streamlined and flat.

Because the register-like memory structure applies to virtually all memory across all applications, the advantage of the VSORA memory approach cannot be overstated. Typically, cutting-edge GenAI processors deliver single-digit percentage efficiency. For instance, a GenAI processor with a nominal throughput of one Petaflops but less than 5% efficiency delivers usable performance of less than 50 Teraflops. The VSORA architecture instead achieves more than 10 times greater efficiency.
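
The usable-throughput arithmetic is simple enough to spell out. The sketch below reproduces the 5% example above and, for contrast, applies the 50-80% efficiency range quoted for VSORA further down; the figures are as stated in the article, not independently measured.

```python
# Usable throughput = nominal throughput x implementation efficiency.
# Reproduces the arithmetic in the paragraph above and contrasts it
# with the 50-80% efficiency range cited for VSORA below.
def usable_tflops(nominal_pflops: float, efficiency: float) -> float:
    return nominal_pflops * 1000 * efficiency   # PFLOPS -> TFLOPS

print(usable_tflops(1.0, 0.05))   # <5% efficient part: ~50 TFLOPS usable
print(usable_tflops(1.0, 0.50))   # 50% efficiency: 500 TFLOPS usable
print(usable_tflops(1.0, 0.80))   # 80% efficiency: 800 TFLOPS usable
```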

VSORA’s Algorithmic Accelerators

VSORA introduced two classes of algorithmic accelerators: the Tyr family for AD applications and the Jotunn family for GenAI acceleration. Both deliver stellar throughput, minimal latency and low power consumption in a small silicon footprint.

With nominal performance of up to three Petaflops, they boast a typical implementation efficiency of 50-80% regardless of algorithm type, and a peak power consumption of 30 Watts per Petaflops. These are stellar attributes not yet reported by any competing AI accelerator.

Tyr and Jotunn are fully programmable, integrate AI and DSP capabilities (albeit in different amounts), and support on-the-fly selection of 8-bit to 64-bit integer or floating-point arithmetic. Their programmability accommodates a universe of algorithms, making them algorithm agnostic. Several different types of sparsity are also supported.

These attributes propel VSORA processors to the forefront of the competitive algorithmic processing landscape.

VSORA Supporting Software

VSORA designed a unique compilation/validation platform tailored to its hardware architecture to ensure its complex, high-performance SoC devices have plenty of software support.

Meant to put the algorithmic designer in the cockpit, a range of hierarchical verification/validation levels (ESL, hybrid, RTL and gate) delivers push-button feedback to the algorithmic engineer during design-space exploration, helping them select the best compromise between performance, latency, power and area. Programming code written at a high level of abstraction can be mapped onto different processing cores transparently to the user.

Interfacing between cores can be implemented within the same silicon, between chips on the same PCB or through an IP connection. Synchronization between cores is managed automatically at compilation time and does not require real-time software operations.

Roadblock to L4/L5 Autonomous Driving and Generative AI Inference at the Edge

A successful solution should also include in-field programmability. Algorithms evolve rapidly, driven by new ideas that can render yesterday’s state of the art obsolete overnight. The ability to upgrade an algorithm in the field is a noteworthy advantage.

While hyperscale companies have been assembling huge compute farms with multitudes of their highest performance processors to handle advanced software algorithms, the approach is only practical for training, not for inference at the edge.

Training is typically based on 32-bit or 64-bit floating-point arithmetic that generates large data volumes. It does not impose stringent latency requirements and tolerates high power consumption as well as substantial cost.

Inference at the edge is typically performed with 8-bit floating-point arithmetic that generates somewhat smaller amounts of data, but it mandates uncompromising latency, low energy consumption and low cost.
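
To illustrate the data-volume gap between training and edge-inference precisions, here is a rough sketch using a hypothetical 10-billion-parameter model; the parameter count is purely an assumption for illustration.

```python
# Rough illustration of why numeric precision matters for data volume:
# the bytes needed just to hold the weights of a hypothetical
# 10-billion-parameter model at the precisions mentioned above.
params = 10e9                      # hypothetical model size
for name, bytes_per_param in [("FP64 (training)", 8),
                              ("FP32 (training)", 4),
                              ("FP8 (edge inference)", 1)]:
    gbytes = params * bytes_per_param / 1e9
    print(f"{name:>22}: {gbytes:,.0f} GB of weights")
```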

Impact of Energy Consumption on Latency and Efficiency

Power consumption in CMOS ICs is dominated by data movement, not data processing.

A Stanford University study led by Professor Mark Horowitz showed that memory accesses consume orders of magnitude more energy than basic digital logic computations. See table II.

Table II caption: Adders and multipliers dissipate from less than one picojoule for integer arithmetic to a few picojoules for floating-point arithmetic. The energy spent accessing data in cache jumps one order of magnitude to 20-100 picojoules, and up to three orders of magnitude to over 1,000 picojoules when data is accessed in DRAM. Source: Stanford University.
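
Using the order-of-magnitude figures from the caption (illustrative midpoints, not measured data), the imbalance between computing on a value and fetching it is easy to quantify:

```python
# Order-of-magnitude comparison using the figures cited in the caption:
# fetching an operand dwarfs the energy of computing with it.
# Values are illustrative midpoints, not measured data.
energy_pj = {
    "integer add":  1,      # "less than one" picojoule; rounded up
    "fp multiply":  4,      # "a few" picojoules
    "cache access": 50,     # 20-100 pJ range
    "DRAM access":  1500,   # over 1,000 pJ
}
compute = energy_pj["fp multiply"]
for src in ("cache access", "DRAM access"):
    ratio = energy_pj[src] / compute
    print(f"{src}: ~{ratio:.0f}x the energy of the compute itself")
```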

AD and GenAI accelerators are prime examples of devices dominated by data movement, making it a challenge to contain power consumption.

Conclusion

AD and GenAI inference pose non-trivial challenges to successful implementation. VSORA can deliver a comprehensive hardware solution and supporting software that meet the critical requirements for handling AD L4/L5 and GenAI acceleration, such as GPT-4, at commercially viable costs.

More details about VSORA and its Tyr and Jotunn can be found at www.vsora.com.

About Lauro Rizzatti

Lauro Rizzatti is a business advisor to VSORA, an innovative startup offering silicon IP solutions and silicon chips, and a noted verification consultant and industry expert on hardware emulation. Previously, he held positions in management, product marketing, technical marketing and engineering.

Also Read:

Soitec is Engineering the Future of the Semiconductor Industry

ISO 21434 for Cybersecurity-Aware SoC Development

Predictive Maintenance in the Context of Automotive Functional Safety


Synopsys – TSMC Collaboration Unleashes Innovation for TSMC OIP Ecosystem

Synopsys – TSMC Collaboration Unleashes Innovation for TSMC OIP Ecosystem
by Kalar Rajendiran on 10-10-2023 at 10:00 am

L.C. OIP 2023

As the focal point of the TSMC OIP ecosystem, TSMC has been driving important initiatives over the last few years to bring multi-die systems to the mainstream. As the world is moving quickly toward Generative AI technology and AI-based systems, multi-die and chiplet-based implementations are becoming essential. TSMC recently hosted its annual Open Innovation Platform (OIP) Ecosystem Forum in Santa Clara, CA. During the keynote address, TSMC executives highlighted the significant advances that have been accomplished through ecosystem alliances and collaborative efforts since last year’s event.

Specifically, significant strides have been made in multi-die system innovation initiatives, including the unveiling of 3Dblox 2.0 (the third generation of 3Dblox), the formation of the 3Dblox Committee and the expansion of the 3DFabric Alliance to 21 partners. The 3Dblox Committee is an independent standards group aimed at creating industry-wide specifications for system design with chiplets from any vendor. The 3DFabric Alliance now has subgroups collaborating on design, memory, substrate, testing, manufacturing and packaging. These developments highlight TSMC’s commitment to advancing 3D IC technologies and fostering industry collaboration to drive innovation in AI, HPC and mobile applications. That commitment is reflected in the following quote from Dr. L.C. Lu, TSMC fellow and vice president of Design and Technology Platform: “As our sustained collaboration with OIP ecosystem partners continues to flourish, we’re enabling customers to harness TSMC’s leading process and 3DFabric technologies to reach an entirely new level of performance and power efficiency for the next-generation artificial intelligence (AI), high-performance computing (HPC), and mobile applications.”

Synopsys Spotlight

The event was marked by many insightful presentations that showcased the collaborative efforts between TSMC and various OIP ecosystem partners in accelerating semiconductor innovation. As the Silicon to Software™ partner for companies developing electronic products and software applications, Synopsys made a number of announcements recently. This article shines a spotlight on how Synopsys’ collaborative efforts with TSMC and other ecosystem partners unleash innovation for customers as well as the entire ecosystem.

Certified Flows for TSMC N2 Process Accelerate 2nm Innovation

Synopsys partnered with TSMC to fast-track advanced-node System-on-Chip (SoC) designs on TSMC’s N2 process technology. Key highlights of the collaboration include certified design flows powered by Synopsys.ai™ for improved productivity, and IP portfolio development for HPC, AI, and mobile applications. AI-driven design optimization with Synopsys DSO.ai™ aims to enhance power efficiency, performance, and chip density.

“The Synopsys digital and analog design flows for the TSMC N2 process represent a significant investment by Synopsys across the full EDA stack,” said Sanjay Bali, vice president of strategy and product management for EDA at Synopsys. “This helps designers jumpstart their N2 designs, differentiate their SoCs with increasingly better power, performance, and chip density, and accelerate their time to market.”

Dan Kochpatcharin, Head of Design Infrastructure Management Division at TSMC, emphasized delivering high-quality results and faster time to market as the hallmarks of the longstanding collaboration between Synopsys and TSMC. This collaboration showcases Synopsys’ commitment to comprehensive EDA and IP solutions, supporting innovation and competitiveness. For example, below are some of the methodology and Fusion Compiler innovations that resulted from Synopsys’ collaboration with TSMC. When moving from N3E to N2, the power grid impacted the routability ratio; Synopsys worked with TSMC to improve it, reducing the impact to less than 2% compared to N3E. Synopsys also updated its clock tree synthesis methodology to handle the wider clock library cells in N2, and upgraded Fusion Compiler to be vertically aware in order to accommodate N2’s multiple double-height cells with different OD for scaling speed and power.

You can find the full news release related to the above, here.

AI-Driven Analog Design Migration Flow for TSMC’s Advanced Process Technologies (N2, N3E, N4P and many others)

Synopsys expanded its analog design migration flow to cover TSMC’s advanced process technologies, including N4P, N3E, and N2. This flow, part of the Synopsys Custom Design Family, incorporates AI-driven circuit optimization, reducing manual effort and improving design quality. It includes interoperable process design kits (iPDKs) for TSMC FinFET nodes and an RF design reference flow for Radio Frequency Integrated Circuit (RFIC) designs. Sanjay Bali from Synopsys stressed the importance of AI-driven solutions in complex chip design and how customers can unlock massive productivity gains with efficient migration of their designs from node-to-node.

Dan Kochpatcharin from TSMC highlighted the significant performance and power efficiency advantages of TSMC’s advanced processes and the benefits of migrating existing analog designs to them.

You can find the full news release related to the above, here.

Broadest Portfolio of Automotive-Grade IP on TSMC N5A Process

Synopsys has introduced a comprehensive portfolio of automotive-grade Interface and Foundation IP designed for TSMC’s N5A process. This IP is tailored to meet the demanding requirements of automotive systems-on-chip (SoCs) in terms of reliability and high-performance computing. It includes logic libraries, embedded memories, GPIOs, SLM PVT monitors, and PHYs for LPDDR5X/5/4X, PCIe 4.0/5.0, 10G USXGMII Ethernet, MIPI C-PHY/D-PHY and M-PHY, and USB. The portfolio adheres to the ISO 26262 standard for random hardware faults, facilitating the development of safety-critical SoCs for applications like advanced driver assistance systems (ADAS) and highly automated driving (HAD) systems.

“New generations of automotive SoC designs will need to support massive amounts of safety-critical data processed at extreme speeds and with high reliability,” said John Koeter, senior vice president of marketing and strategy for IP at Synopsys. “Synopsys’ high-quality, automotive-grade Interface and Foundation IP on TSMC’s N5A process enables automotive OEMs, Tier 1s, and semiconductor companies to minimize IP integration risk and help meet the required functional safety, performance, and reliability levels for their SoCs.”

In a chat, John pointed out how the electrification of automobiles is driving massive demand for automotive IP. While EVs are just 15% of the market today, they are projected to make up two-thirds of the market by 2030. The OIP collaboration between Synopsys and TSMC supports this evolution toward software-defined vehicles, enabling the processing of large volumes of safety-critical data with high reliability and performance.

You can find the full news release related to the above, here.

Optimized Multi-Die System Design Solution for Higher Quality of Results

Synopsys has expanded its collaboration with TSMC to enhance multi-die system designs. It has introduced a solution supporting the 3Dblox 2.0 standard and TSMC’s 3DFabric™ technologies. Key elements include the unified exploration-to-signoff platform, 3DIC Compiler, which streamlines multi-die system design, and UCIe PHY IP, which achieved first-pass silicon success on TSMC’s N3E process, facilitating low-latency, low-power, high-bandwidth connectivity between dies. This collaboration addresses the challenges in high-performance computing, data center, and automotive applications, offering a comprehensive and scalable solution for optimized multi-die system designs.

“There’s a lot of work to be done to make multi-die systems a reality,” said Koeter. “Working closely with TSMC in many different areas is really key to executing and helping the industry to move to this new level of complexity.”

You can find the full news release related to the above, here.

Accelerating RFIC Design with Reference Flow for TSMC N4PRF Process

Synopsys joined forces with Keysight Technologies and Ansys to introduce a new reference flow for TSMC N4PRF, a cutting-edge 4-nanometer (nm) radio frequency (RF) FinFET process technology. This collaboration addresses the growing complexity of RF integrated circuit design in next-generation wireless systems such as Wi-Fi 7, which demand higher bandwidth, lower latency, and broader coverage. The reference flow, based on the Synopsys Custom Design Family, provides an open RF design environment with higher predictive accuracy and productivity. It integrates with Keysight’s RFIC design and electromagnetic analysis tools and incorporates Ansys’ EM modeling and signoff power integrity solutions. The outcome of this collaboration empowers RF designers to tackle the challenges of designing advanced RFICs for high-performance wireless systems.

You can find the full news release related to the above, here.

Summary

The TSMC OIP Ecosystem Forum showcased the power of collaboration within the semiconductor ecosystem, with companies from various disciplines demonstrating offerings that leverage TSMC’s technology. This article spotlighted the results of various collaborative efforts between Synopsys and TSMC, as publicized through the news releases leading up to the OIP Ecosystem Forum. It is clear that Synopsys’ collaborative, development-oriented investments are well aligned with its overarching Synopsys.ai and 3DIC initiatives to support future electronic systems, which in turn serves the TSMC OIP ecosystem well and helps unleash next-generation innovations.

To learn more, visit www.Synopsys.com.

Also Read:

Transformers Transforming the Field of Computer Vision

The True Power of the TSMC Ecosystem!

TSMC’s First US Fab


Disaggregated Systems: Enabling Computing with UCIe Interconnect and Chiplets-Based Design

Disaggregated Systems: Enabling Computing with UCIe Interconnect and Chiplets-Based Design
by Kalar Rajendiran on 10-10-2023 at 6:00 am

AresCORE UCIe PHY Support for All Package Types

The world of computing is evolving rapidly, with constant demand for more powerful and efficient systems. Generative AI has driven exponential growth in the amount of data generated and processed at very high data speeds and very low latencies. Traditionally, computing systems have been built using monolithic designs, where all components, such as the central processing unit (CPU), memory, and I/O interfaces, are integrated onto a single die. While this approach has served us well for many years, it has limitations in terms of scalability, power efficiency, and flexibility. This is where chiplets come into play.

Chiplets are smaller, modular semiconductor components that can be designed and manufactured independently. They can serve various functions, such as CPUs, GPUs, accelerators, memory controllers and I/O interfaces. Breaking the monolithic design down into these smaller building blocks, a concept termed a disaggregated system, offers several advantages. In addition to lower NRE, lower power and smaller die, disaggregated systems enable easier upgradability and scalability according to workload and application requirements, along with improved yield and cost efficiency and enhanced system performance and energy efficiency.

UCIe Interconnect

While chiplet-based designs bring several benefits, they also present a challenge: efficiently interconnecting the chiplets to create a cohesive computing system. The Universal Chiplet Interconnect Express (UCIe) standard addresses this challenge. UCIe is a standardized interconnect technology designed to provide high-speed, low-latency communication between chiplets within a package. It serves as the glue that binds chiplets together, ensuring they can work seamlessly as a unified system, and it enables energy efficiency, high bandwidth density, low end-to-end latency and robustness.

Use Cases

Disaggregated systems enable data center operators to tailor their computing resources to specific workloads, improving resource utilization and energy efficiency. This is especially valuable in cloud computing environments. High performance computing clusters can benefit from the flexibility of chiplets, allowing for specialized accelerators to be added or replaced as needed, maximizing computational power. In edge computing deployments, where space and power constraints are significant, disaggregated systems can be customized for specific edge applications, such as AI inference or data processing.

At the Recent TSMC Open Innovation Platform (OIP) Ecosystem Forum

At the most recent TSMC OIP Ecosystem Forum, there were many interesting presentations from various ecosystem partners. One presentation that covered the topic of disaggregated systems was from Letizia Giuliano of Alphawave Semi.

UCIe Complete Solution from Alphawave Semi

At the physical layer, the solution includes an Electrical PHY (AFE) that leverages silicon-proven analog IP. This component handles essential functions like clocking, link training, and sideband signals. Additionally, it incorporates a Logical PHY with Multi-Module PHY logic, providing a top-level floorplan for flexible package options.

The UCIe Die-to-Die (D2D) Adapter ensures smooth D2D interconnectivity. It manages link state, negotiates parameters crucial for chiplet interoperability, and ensures a reliable link by implementing CRC and link-level retry mechanisms. At the protocol layer, the solution natively maps PCIe and CXL protocols via Flit-Aware Mode and offers a Streaming Protocol Bridge for diverse SoC interfaces. Furthermore, Alphawave Semi provides a comprehensive platform for electrical, physical form factor, and protocol compliance, along with a complete set of test vehicles to facilitate interoperability testing.

Together, these components enable a robust and complete UCIe solution, addressing the various aspects of die-to-die chiplet integration and ensuring that disaggregated systems function seamlessly.
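
To make the CRC and link-level retry mechanism concrete, here is a minimal, purely illustrative sketch of the concept in Python. It is not Alphawave Semi’s implementation, just the general retransmit-on-CRC-mismatch idea the D2D adapter is described as providing.

```python
import random
import zlib

# Minimal illustration of CRC-protected link-level retry, the general
# mechanism described above for the D2D adapter. Purely a conceptual
# sketch; it does not reflect Alphawave Semi's actual implementation.

def send_flit(payload: bytes) -> tuple[bytes, int]:
    """Transmitter appends a CRC32 computed over the payload."""
    return payload, zlib.crc32(payload)

def channel(payload: bytes, error_rate: float = 0.3) -> bytes:
    """Toy channel that occasionally corrupts a byte in flight."""
    data = bytearray(payload)
    if data and random.random() < error_rate:
        data[0] ^= 0xFF
    return bytes(data)

def receive_flit(payload: bytes, crc: int) -> bool:
    """Receiver recomputes the CRC; a mismatch triggers a replay request."""
    return zlib.crc32(payload) == crc

flit = b"example flit payload"
for attempt in range(1, 6):
    payload, crc = send_flit(flit)
    if receive_flit(channel(payload), crc):
        print(f"flit accepted on attempt {attempt}")
        break
    print(f"CRC mismatch on attempt {attempt}, requesting replay")
```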

Summary

Chiplets have emerged as a game-changer in the world of System-on-Chip (SoC) design, especially on advanced manufacturing nodes. Compared to traditional approaches, chiplets offer significant advantages, allowing for diverse SoC design structures. A robust and open chiplet ecosystem relies on interface IP, and the UCIe die-to-die (D2D) standard is fostering such an ecosystem by facilitating seamless communication between chiplets from different manufacturers and ensuring compatibility and interoperability. Additionally, the integration of higher-level packaging takes chiplets to a new level, offering a wide range of utilization scenarios.

As a forward-thinking industry player, Alphawave Semi provides comprehensive D2D IP subsystem solutions, application-optimized chiplet architectures and complete custom silicon solutions on leading-edge nodes down to 3nm to meet the needs of future System-in-Packages (SiPs). As a long-standing partner of TSMC in the Open Innovation Platform®, Alphawave Semi is very active in TSMC’s IP Alliance, Value Chain Aggregator (VCA), Design Center Alliance (DCA) and the new 3DFabric™ Alliance.

To learn more, visit

Chiplets

D2D Subsystem

Advanced Packaging

Custom Silicon

Also Read:

Interface IP in 2022: 22% YoY growth still data-centric driven

Alphawave Semi Visit at #60DAC

Coherent Optics: Synergistic for Telecom, DCI and Inter-Satellite Networks