
Sensing – Who needs it?

by Dave Bursky on 05-24-2022 at 6:00 am


In a simple answer – everyone. A keynote presentation, “Sensing the Unknowns and Managing Power,” by Mahesh Tirupattur, Executive Vice President at Analog Bits, at the recent Siemens User2User conference discussed the need for and application of sensors in computing and power applications. Why sense? As Mahesh explains, sensing provides the middle ground between pure analog functions and digital systems. The need for sensing is everywhere, and in today’s latest system-on-chip (SoC) designs the challenges start with doubling performance while halving power consumption. With that comes the integration of billions of transistors, and if any one component fails, the entire SoC could fail. Those challenges escalate with the use of FinFET transistors due to their exacting manufacturing requirements.

Challenges with such a design include the difficulty of exhaustively verifying the design before tape-out, as well as dealing with an almost infinite range of manufacturing variations. Additional issues include dynamic power spikes superimposed on PVT variations in mission mode. Large die with multiple cores can also exhibit significant local temperature variations of 10 to 15 degrees across the die; sensing can quickly detect these and trigger corrective actions, such as software load balancing. Process variations can also be detected through the use of multiple-Vt devices. Power distribution and power-supply integrity are further challenges for large chips, where sensing can monitor and take instantaneous corrective action at high processing speeds. With large numbers of processing cores on a chip, dynamic current surges can cause internal voltages to exceed functional limits.

As an example, Mahesh examines the design of the world’s largest AI chip, the Cerebras WSE-2. This wafer-sized “chip” has an area of 46,225 mm² and contains 2.6 trillion transistors, trillions of wires, and 850,000 AI-optimized compute cores (see the photo). Fabricated in TSMC’s 7 nm process technology, the device also contains 40 Gbytes of on-chip memory and delivers 220 Petabit/s of fabric bandwidth. Multiple sensors are embedded on the wafer – 840 glitch detectors and PVT sensors designed by Analog Bits provide real-time coverage of supply health, monitoring functional voltage and temperature limits.

The sensors can detect anomalies with significantly higher bandwidth than other solutions, which miss short-duration events. Providing high-precision real-time power-supply monitoring with sensitivity exceeding 5 pVs, the sensor intellectual-property (IP) blocks are highly user-programmable for trigger voltages/temperatures, glitch depth, and glitch time-span. The ability to monitor multiple thresholds simultaneously gives designers and system monitors a wealth of data to optimize suppression of instantaneous current spikes and overall effectiveness. Additionally, the fully integrated analog macro can directly interface to the digital environment, can be abutted for multiple-value monitoring, and packs an integrated voltage reference.
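To illustrate the kind of programmable trigger/depth/time-span checks described above, here is a hypothetical software model of a supply-glitch detector. It is purely a sketch of the concept – the parameter names and thresholds are invented for illustration and do not describe the Analog Bits hardware:

```python
def find_glitches(samples, nominal_v, trigger_v, min_depth_v, min_len, max_len):
    """Scan a stream of supply-voltage samples for glitches.

    A glitch is reported when the supply dips below trigger_v by at least
    min_depth_v (relative to nominal_v) for between min_len and max_len
    consecutive samples. Returns (start_index, length, worst_voltage) tuples.
    """
    glitches = []
    start = None
    for i, v in enumerate(samples + [nominal_v]):  # sentinel closes any open dip
        below = v < trigger_v and (nominal_v - v) >= min_depth_v
        if below and start is None:
            start = i                               # dip begins
        elif not below and start is not None:
            length = i - start                      # dip ends; apply time-span window
            if min_len <= length <= max_len:
                glitches.append((start, length, min(samples[start:i])))
            start = None
    return glitches

# Example: 0.9 V nominal rail with a three-sample droop
print(find_glitches([0.9, 0.9, 0.8, 0.79, 0.81, 0.9, 0.9],
                    nominal_v=0.9, trigger_v=0.85, min_depth_v=0.03,
                    min_len=2, max_len=10))
```

The time-span window (min_len/max_len) is what lets such a monitor distinguish a genuine droop from single-sample noise, mirroring the programmable depth and duration settings the article describes.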

Mahesh also sees the need for other power-related sensors: on-die PVT sensors with accuracies trimmable to within ±1°C, and integrated power-on-reset sensors that detect power stability in both core and I/O circuits and also offer brown-out detection. These sensors are just one piece of the puzzle that IP designers face. As Mahesh explained, Analog Bits has to design a test chip in each brand-new process; the design takes several months, and getting the test chip back from the fab takes about nine months more. Then it may take a year or more for the customer to incorporate the IP in their design. For an IP company the challenges are even greater: customers are not just designing a chip but designing a system, and that means they have to co-optimize everything together. Thus, monitoring power means monitoring not just a single chip but the entire system, and with that come the challenges of voltage spikes and power integrity; if not sensed and dealt with, those issues can kill the whole system. Monitoring thresholds and spikes, and responding quickly to issues, can therefore yield more reliable systems.

In addition to power-related IP blocks, Analog Bits also developed a “pinless” phase-locked loop (PLL) that solves some on-chip clocking issues. The PLL can be powered by the SoC’s core voltage rather than requiring a separate supply pin. That reduces system bill-of-materials cost by eliminating filters and pins, and the IP can be placed anywhere without any power pad or bump restrictions. Last but not least, Analog Bits also has a family of SERDES IP blocks optimized for high-performance, low-power SoC applications. The IP blocks are available in over 200 different process nodes, including 5 nm (silicon proven), 4 nm, and 3 nm (both in tape-out), as well as older nodes, from all major foundries.

Also read:

Analog Bits and SEMIFIVE is a Really Big Deal

Low Power High Performance PCIe SerDes IP for Samsung Silicon

On-Chip Sensors Discussed at TSMC OIP

 


Unlocking PA design with predictive DPD

by Don Dingee on 05-23-2022 at 10:00 am

Predictive DPD virtual test bench

Next up in this series on modulated signals is an example of multi-dimensional EM design challenges: RF power amplifiers (PAs). Digital pre-distortion (DPD) is a favorite technique for linearizing PA performance. Static effects are easy to model and correct, but PAs are notorious for interrelated dynamic effects spoiling EVM and other key metrics. In a shift left, unlocking PA design with predictive DPD transforms how PA manufacturers design for applications. When designers know what to expect, PA manufacturers win more RF system designs faster.

Basics of DPD applied to PAs

Complex waveforms spanning wide bandwidths stress PA designs. Communications “rogue waves” show up with sudden high-power peaks. Under extreme peak-to-average power ratio (PAPR) conditions, balancing requirements for PA energy efficiency, signal quality, and output power level becomes difficult. Performance degrades, PA instability can develop, and sharp signal peaks can create interference affecting more users.

Critical performance metrics like in-band EVM (error vector magnitude) and out-of-band ACLR (adjacent-channel leakage ratio) are the first to suffer from these signal peaks. EVM reflects how well transmitted symbols match their intended spots in a QAM constellation – see the example for 16-QAM below. If points are closer together than they should be, accurate discrimination between adjacent points gets tougher. For example, 5G EVM specs allow no more than 3.5% total error for 256-QAM modulation in sub-6 GHz systems.
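To make the metric concrete, here is a minimal RMS EVM calculation for 16-QAM. This is an illustrative sketch only; real instruments normalize EVM per the relevant 3GPP or IEEE specification:

```python
import math

# Ideal 16-QAM constellation: real/imaginary parts drawn from {-3, -1, +1, +3}
IDEAL_16QAM = [complex(re, im) for re in (-3, -1, 1, 3) for im in (-3, -1, 1, 3)]

def evm_percent(received, reference):
    """RMS error vector magnitude, normalized to the RMS reference power."""
    err_power = sum(abs(r - s) ** 2 for r, s in zip(received, reference))
    ref_power = sum(abs(s) ** 2 for s in reference)
    return 100.0 * math.sqrt(err_power / ref_power)

# Example: every received symbol lands a fixed 0.1 away from its ideal point
rx = [s + 0.1 for s in IDEAL_16QAM]
print(round(evm_percent(rx, IDEAL_16QAM), 2))  # 3.16 – already near the 3.5% limit
```

Even this small, uniform displacement eats most of the 3.5% budget quoted above, which is why PA non-linearity at signal peaks is so damaging.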

Designers have two fundamental choices. One is backing off to a lower output power, which eases non-linearity. The other is running at higher power levels but compensating for PA non-linearity. DPD is a preferred compensation technique because of its digital flexibility, often included within baseband ASICs or FPGA co-processors. From non-linearity measurements an inverse response is created, linearizing the PA output when the pre-distorter is cascaded in front of it. The magic of DPD is fitting a precise curve. With higher order polynomial representations and more weighting coefficients, the response can be tuned.
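The inverse-response idea can be shown with a toy model. The PA polynomial and the simple fixed-point search below are invented for illustration; production DPD uses higher-order memory polynomials fitted from measurements, as the article notes:

```python
def pa(x):
    """Toy compressive PA model: mild third-order non-linearity (gain sags at peaks)."""
    return x - 0.1 * x ** 3

def predistort(target, iterations=50):
    """Find the drive level u such that pa(u) ~= target, i.e. the inverse response."""
    u = target
    for _ in range(iterations):
        u += target - pa(u)   # fixed-point update pushing pa(u) toward the target
    return u

# Cascading the pre-distorter in front of the PA linearizes the output
for x in (0.2, 0.5, 0.8):
    u = predistort(x)
    print(f"target {x:.2f}: drive {u:.4f}, linearized output {pa(u):.4f}")
```

Note the drive level exceeds the target near the peaks: the pre-distorter expands exactly where the PA compresses, which is the essence of the cascaded DPD-PA combination.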

 

Dynamic effects weigh in on applications

Assuming one has good non-linearity measurements and a grip on linear algebra, amplifier linearization has mostly been a solvable problem. Historically, PAs were designed, built, then characterized in a lab with pure sine waves. An RF system designer picked up the PA manufacturer’s datasheet, which probably included a PA frequency response curve, then determined how to best fit it in their application.

That was before wideband complex modulated signals became common in RF systems. Hitting difficult specs like EVM suggests designing a DPD-PA combination tuned for a specific waveform. But wireless systems have different complex waveforms. A DPD-PA combo that performs well in one wireless application may hurt EVM performance in others.

Worse still, one DPD-PA implementation may not even hold up in the same application under different conditions. Dynamic interactions between waveform-related effects and operating point-related effects take hold, changing PA performance. Troublesome factors like charge trapping and self-heating, hard to reproduce in physical test setups, create “memory” effects.

Authentic waveforms change the PA design workflow

If we know a PA is going into a 5G, Wi-Fi 7, or similar design with complex modulated signals, and we know non-linearity shows up at high PAPR and crushes EVM, why is it left to RF system integrators to solve, after the fact? Today, we have authentic signals – often from the system specification. We can measure PA non-linearity and derive DPD. In fact, we can explore PA performance for an intended application in virtual space applying predictive DPD as a design tool, then characterize designs in context before shipping them. The physical measurement science looks like this:

A vector signal generator (VSG) fires up an ideal waveform to start. A vector signal analyzer (VSA) measures EVM and other system performance metrics, and iterates the DPD response automatically, adjusting the authentic waveform stimulus. This completely isolates a PA device under test – effects from fixturing and instrumentation are nulled out, leaving the exact PA response in results.

Now bring this concept to virtual space. In Keysight PathWave System Design or PathWave ADS, behavioral models account for detailed real-world effects – non-linearity, memory, bandwidth variations, harmonics, thermal, and more. Every variable can be swept, making hard-to-reproduce effects easier to see in virtual space. Authentic waveforms come from Keysight libraries for 5G, Bluetooth, Wi-Fi, GNSS/GPS, and other systems. With the PA design virtually modeled, the PathWave VTB Engine handles the virtual predictive DPD block.

Steps like compact test signals and fast envelope simulation provide control over speed and detail. Instead of a big-bang simulation at the end as a verification step, simulating EVM contours against a parametric scan provides rapid results. Incremental PA design changes address those difficult optimizations for efficiency, signal quality, and output power – in virtual space, before committing to PA hardware.

See what a PA customer sees before they see it

With authentic waveforms and measurements factored into virtual models and simulations, PA manufacturers can evaluate their designs under their customer’s conditions. RF system designers know exactly what they can expect from a PA in their application. Plus, they might be able to borrow the transportable model for their RF system modeling and simulation. This doesn’t mean there must be PAs optimized for each waveform, or that DPD isn’t required to fine-tune a physical RF system design. What it does mean is RF system designers have higher confidence when they select a PA for their application that it will deliver system-level results for them – and that translates to more design wins for the PA manufacturer.

Shifting from thinking of DPD as an after-market product to unlocking PA design with predictive DPD is a straightforward workflow change with a big payoff. Teams don’t need to be experts in DPD theory or implementation to leverage Keysight tools for better results. For more on DPD as a design tool for PAs, see this Keysight video on Practical Power Amplifier Design.

Also read:

Shift left gets a modulated signal makeover

WEBINARS: Board-Level EM Simulation Reduces Late Respin Drama

 


Protecting High-Speed Interfaces in Data Centers with Security IP

by Kalar Rajendiran on 05-23-2022 at 6:00 am

SoCs Have Many Interfaces That Require Security

The never-ending appetite for higher bandwidths, faster data interfaces, and lower latencies is bringing about changes in how data is processed at data centers. The expansion of the cloud to the network edge has introduced broad use of artificial intelligence (AI) techniques for extracting meaning from data. Cloud supercomputing has resulted in innovative data accelerators and compute architectures within data centers. At the same time, threats are coming from many directions and in many forms. The entry point could be the internet communications infrastructure, in the form of DoS attacks, botnets, ransomware, spyware, etc. Or it could be via cloud API vulnerabilities and account hijacking. All these threats can be broadly classified into communications attacks, software attacks, invasive hardware attacks, and non-invasive hardware attacks.

The combination of the above two trends has increased the need for enhanced data security and data privacy within data centers. The challenge data centers face is how to maintain data security without compromising on throughput and latencies. At the end of the day, it all comes down to enabling security at the chip/SoC level.

Dana Neustadter, Sr. Manager, Product Marketing for Security IP at Synopsys, presented at IP-SoC Silicon Valley 2022 last month. Her presentation focused on the challenges and ways of protecting data in motion and at rest in data centers. She discussed the trends driving heightened requirements and presented SoC solutions for ensuring the security of various high-speed interfaces. This post is a synthesis of her presentation. You can download her presentation slides from here.

Requirements for Effective Security Solutions

SoCs incorporate a number of different high-speed interfaces to move data between systems, memories, storage, and peripheral devices. Effective security mechanisms need to protect both data-in-motion and data-at-rest. This involves:

  • Authentication and Key Management for implementing

– Identification & Authentication

– Key generation

– Key distribution

– Control

  • Integrity and Data Encryption between endpoints for ensuring

– Confidentiality

– Integrity

There are of course challenges when it comes to the implementation of these security mechanisms. How to add security on the data path while maintaining high throughput at low latencies? How to securely identify the data path endpoints without adding a large processing overhead? How to establish a trusted zone for managing and handling keys without adding to silicon area?

Key Aspects of Delivering Effective Security Solutions

There are upcoming standards for building trusted virtual machines at the application core/logic level for handling data paths from PCIe and CXL interfaces. These standards are being driven by the PCI-SIG and the CXL Consortium, respectively.

Meanwhile the areas of authentication and key management and integrity and data encryption are well defined and driven by existing standards.

  • Authentication, attestation, measurement, and key exchange are addressed by the following standards:

– DMTF: Security Protocol and Data Model (SPDM), Management Component Transport Protocol (MCTP)

– PCI-SIG: Component Measurement and Authentication (CMA), Data Object Exchange (DOE), Trusted Execution Environment I/O (TEE-I/O)

  • The integrity and data encryption (IDE) component, which addresses confidentiality, integrity, and replay protection, is based on the AES-GCM crypto algorithm and is defined by the PCIe 5.0/6.0 and CXL 2.0/3.0 standards.

Synopsys’ Security IP Offerings

Synopsys offers a comprehensive set of IP offerings to enable security against data tampering and physical attacks. The following chart shows an example data-center networking SoC with the various Synopsys security IP blocks used to secure its data.

PCIe 5.0/ PCIe 6.0 IDE Security Modules

Integrity and Data Encryption (IDE) for PCIe is offered through the PCIe IDE Security Modules. One version supports PCIe 5.0 and all down speeds; a separate version supports the latest-generation PCIe 6.0 and all down speeds. These modules work plug-and-play with Synopsys PCIe controllers to match clock configurations, data-bus widths, and lane configurations. Packet encryption, decryption, and authentication are based on highly efficient AES-GCM crypto with very low latency. The modules are FIPS 140-3 certification ready.
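The key property AES-GCM gives IDE is authenticated encryption: the payload is encrypted, and an authentication tag covers both the ciphertext and any additional unencrypted data (such as packet headers), so tampering with either is detected. Python’s standard library has no AES, so the sketch below mimics only that structure, using toy stand-ins (a SHA-256 counter-mode keystream and a truncated HMAC tag in place of real AES-GCM). It is illustrative, not secure, and not the Synopsys implementation:

```python
import hashlib
import hmac

def _keystream(key, nonce, length):
    """Toy counter-mode keystream (stand-in for the AES-CTR core inside GCM)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def seal(key, nonce, plaintext, aad):
    """Encrypt, then compute a tag over nonce, AAD, and ciphertext (GCM-style)."""
    ct = bytes(p ^ k for p, k in zip(plaintext, _keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + aad + ct, hashlib.sha256).digest()[:16]
    return ct, tag

def open_(key, nonce, ct, aad, tag):
    """Verify the tag before decrypting; reject tampered packets."""
    expect = hmac.new(key, nonce + aad + ct, hashlib.sha256).digest()[:16]
    if not hmac.compare_digest(expect, tag):
        raise ValueError("authentication failed (packet tampered or replayed)")
    return bytes(c ^ k for c, k in zip(ct, _keystream(key, nonce, len(ct))))
```

In real IDE the nonce role is played by a monotonically advancing per-stream counter, which is also what provides the replay protection the standard requires.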

CXL 2.0 IDE Security Module

As with the PCIe IDE Security Modules, the CXL security module is standards compliant and integrates seamlessly with Synopsys CXL controllers. All three protocols (CXL.io, CXL.cache, and CXL.mem) are supported, with latency as low as 0 cycles for the .cache/.mem protocols in skid mode. This IP is also ready for FIPS 140-3 certification.

Inline Memory Encryption (IME) Security Module

This security module uses the AES-XTS algorithm with two sets of 128-bit or 256-bit keys, one for data encryption/decryption and another for tweak-value calculation, and is FIPS 140-3 certification ready. The memory-encryption subsystem handles encryption, decryption, and management of tweaks and keys with very low latency. Other key features include full-duplex support for read/write channels, efficient and protected key control and refresh, and a bypass mode.
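Conceptually, XTS binds each encrypted block to its location: the second key derives a per-address tweak, so identical plaintext stored at two different memory addresses produces different ciphertext. The sketch below shows only that two-key XEX structure, with toy stdlib stand-ins (a small Feistel network instead of AES, a hash instead of the standard tweak multiplication). It is illustrative, not secure, and not the Synopsys implementation:

```python
import hashlib

def _f(key, rnd, half):
    """Toy Feistel round function (stand-in for AES internals)."""
    return hashlib.sha256(key + bytes([rnd]) + half).digest()[:8]

def _xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def _encrypt_block(key, block, rounds=4):
    """Toy invertible 16-byte block cipher built as a Feistel network."""
    L, R = block[:8], block[8:]
    for r in range(rounds):
        L, R = R, _xor(L, _f(key, r, R))
    return L + R

def _decrypt_block(key, block, rounds=4):
    L, R = block[:8], block[8:]
    for r in reversed(range(rounds)):       # undo rounds in reverse order
        L, R = _xor(R, _f(key, r, L)), L
    return L + R

def xts_encrypt_block(data_key, tweak_key, address, block):
    """XEX/XTS structure: C = E_K1(P xor T) xor T, with tweak T bound to the address."""
    t = hashlib.sha256(tweak_key + address.to_bytes(8, "big")).digest()[:16]
    return _xor(_encrypt_block(data_key, _xor(block, t)), t)

def xts_decrypt_block(data_key, tweak_key, address, block):
    t = hashlib.sha256(tweak_key + address.to_bytes(8, "big")).digest()[:16]
    return _xor(_decrypt_block(data_key, _xor(block, t)), t)
```

The address-derived tweak is why inline memory encryption defeats ciphertext copy/move attacks without needing per-block metadata, which keeps the latency overhead low.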

Hardware Secure Module with Root of Trust

This security module provides a Trusted Execution Environment (TEE) to manage sensitive data and operations. It is the foundation for secure remote lifecycle management and service deployment. Key security features include a secure boot, secure authentication/debug and key management. Secure instruction and data controllers are included to provide external memory access protection and runtime tamper detection. Scalable cryptography options are available to accelerate encryption, authentication and public key operations.

Summary

Secure infrastructure is key to protecting data. Authentication and key management in the control plane and integrity and data encryption in the data plane are essential components of a complete security solution. Securing high-speed interfaces needs to be highly efficient with optimal latency. Synopsys provides complete solutions to secure SoCs, their data, and communications.

For more information, visit Synopsys Security Modules for Standard Interfaces

Also Read:

Bigger, Faster and Better AI: Synopsys NPUs

The Path Towards Automation of Analog Design

Design to Layout Collaboration Mixed Signal


Double Diffraction in EUV Masks: Seeing Through The Illusion of Symmetry

by Fred Chen on 05-22-2022 at 7:00 am

Double Diffraction in EUV Masks

At this year’s SPIE Advanced Lithography conference, changes to EUV masks were particularly highlighted, as a better understanding of their behavior emerges. It’s now confirmed that a seemingly symmetric EUV mask absorber pattern does not produce a symmetric image at the wafer, as a conventional DUV mask would [1, 2]. The underlying reason is that the mask is illuminated by a spread of angles skewed to one side of the vertical axis. Each angle has a different reflection from the multilayer of the EUV mask. Moreover, the EUV light is diffracted twice: once before entering the multilayer, and a second time after exiting it (Figure 1).

Figure 1. Double diffraction in an EUV mask. Each color represents a particular family of diffraction orders originating from one reflected diffraction order from the first diffraction of light entering the multilayer.

The first diffraction from an array of lines on the EUV mask with pitch p produces a Fourier series of the form

… + A₋₁ exp[−i 2π x/p] + A₀ + A₁ exp[i 2π x/p] + …

Each series term Aₙ exp[i 2π n x/p] represents a diffraction order propagating in a given direction. Each propagation direction leads to a different multilayer reflectance Rₙ for that order:

… + A₋₁ R₋₁ exp[−i 2π x/p] + A₀ R₀ + A₁ R₁ exp[i 2π x/p] + …

Passing through the array again when exiting the multilayer, each order generates a second set of diffraction orders, with their amplitudes labeled by B instead of A. The directions of these second sets of diffraction orders overlap those of the first. By coherently combining the waves propagating in the same directions, we get:

… + [… + A₋₁ R₋₁ B₀ + A₀ R₀ B₋₁ + A₁ R₁ B₋₂ + …] exp[−i 2π x/p] + [… + A₋₁ R₋₁ B₁ + A₀ R₀ B₀ + A₁ R₁ B₋₁ + …] + [… + A₋₁ R₋₁ B₂ + A₀ R₀ B₁ + A₁ R₁ B₀ + …] exp[i 2π x/p] + …

Since the mask structures in the array are symmetric as fabricated, we may approximate A₁ = A₋₁ and B₁ = B₋₁. However, R₁ does not equal R₋₁, as the two orders correspond to different angles, and even a seemingly small angular difference can drop the reflectance significantly. With this consideration, when we compare the −1st harmonic amplitude [… + A₋₁ R₋₁ B₀ + A₀ R₀ B₋₁ + A₁ R₁ B₋₂ + …] with the +1st harmonic amplitude [… + A₋₁ R₋₁ B₂ + A₀ R₀ B₁ + A₁ R₁ B₀ + …], they are not equal. For the symmetric structure that a rectangular profile is expected to be, these would be equal. Physically, however, the structure behaves not so much like a rectangular wave profile as like a trapezoidal one, largely due to the shadowing effect from the off-axis incident direction of the light (Figure 2). The effective slope of the sidewall depends on the shadowing, and hence on the illumination angle.

Figure 2. (Left) A conventional photomask provides a symmetric absorption profile. (Right) On the other hand, for an EUV mask, the trapezoidal profile is a more accurate depiction of the asymmetry effects from shadowing, even though the EUV mask structures themselves are symmetric as fabricated. The illumination angle(s) will determine the shadowing, and hence, the effective slope angle.
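The harmonic asymmetry is easy to check numerically. The sketch below uses arbitrary illustrative amplitudes (not fitted to any real mask stack): with equal reflectances for the +1 and −1 orders the ±1st image harmonics match, while an angle-dependent reflectance makes them diverge:

```python
# Orders n = -2..2 with symmetric diffraction amplitudes (A_n = A_-n, B_n = B_-n)
A = {-2: 0.1, -1: 0.3, 0: 1.0, 1: 0.3, 2: 0.1}
B = dict(A)  # second pass through the same absorber array

def harmonic(m, R):
    """Amplitude of the m-th combined harmonic: sum over n of A_n * R_n * B_(m-n)."""
    return sum(A[n] * R[n] * B.get(m - n, 0.0) for n in A)

# Case 1: reflectance symmetric about normal incidence -> symmetric image
R_sym = {-2: 0.5, -1: 0.6, 0: 0.65, 1: 0.6, 2: 0.5}
# Case 2: angle-dependent multilayer reflectance under skewed illumination
R_skew = {-2: 0.40, -1: 0.55, 0: 0.65, 1: 0.62, 2: 0.55}

print(abs(harmonic(-1, R_sym) - harmonic(1, R_sym)) < 1e-12)   # True: equal harmonics
print(harmonic(-1, R_skew), harmonic(1, R_skew))               # unequal: asymmetric image
```

The only ingredient breaking the symmetry is Rₙ ≠ R₋ₙ, which is exactly the double-diffraction mechanism described above.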

Relying on a trapezoidal absorption profile for the mask is a notable departure from the symmetric rectangular profile that has always been used for conventional DUV photomasks. It underlies the numerous anomalies of EUV imaging that have been published over the many years of EUV development. As EUV lithography makes greater use of dipole illumination (beams coming from two different angles), the angle asymmetry between the +1st and −1st diffraction orders becomes even more relevant [3].

References

[1] C. van Lare, F. Timmermans, J. Finders, “Mask-absorber optimization: the next phase,” J. Micro/Nanolith. MEMS MOEMS 19, 024401 (2020).

[2] A. Erdmann, H. Mesilhy, P. Evanschitzky, “Attenuated phase shift masks: a wild card resolution enhancement for extreme ultraviolet lithography?,” J. Micro/Nanolith. MEMS MOEMS 21, 020901 (2022).

[3] https://www.linkedin.com/pulse/pattern-shifts-induced-dipole-illuminated-euv-masks-frederick-chen; https://semiwiki.com/lithography/305907-pattern-shifts-induced-by-dipole-illuminated-euv-masks/

Also Read:

Demonstration of Dose-Driven Photoelectron Spread in EUV Resists

Adding Random Secondary Electron Generation to Photon Shot Noise: Compounding EUV Stochastic Edge Roughness

Intel and the EUV Shortage


Take a Leap of Certainty at DAC 2022

by Daniel Nenni on 05-22-2022 at 6:00 am


The live events I have attended thus far this year have been very good. As much as I liked the virtual events, attending in the comfort of my home or sailboat, it is great to be live and networking inside the semiconductor ecosystem, absolutely.

Ansys has been a great supporter of the Design Automation Conference but this year they are going big. Ansys is also a strong supporter of SemiWiki and a joy to work with.

We have written extensively about 3D-IC, 2.5D/3D packaging, power integrity, and other multiphysics challenges, so this will be a great time to sync up on where we are today and what Moore’s Law has in store for us tomorrow. Bespoke Silicon is also a trending topic as systems companies make their own chips, so you don’t want to miss that.

Ansys is also great at customer engagements, so stop by their theater to see who is using what tools and why. Here is the Ansys preview; I hope to see you at DAC 2022!

Request a meeting or demo

WE’RE SHOWING THE LATEST IN DESIGN TECHNOLOGY

The semiconductor and electronics industries collide in 3D-IC technology, enabling companies to design differentiating bespoke silicon. The advent of 3D-IC brings more physics domains into a multiphysics challenge, requiring new tools and new approaches to building electronic design teams. At DAC 2022, we’ll share the latest technologies for 5nm power integrity signoff, dynamic voltage drop coverage, electrothermal signoff for chips & PCBs, advanced 2.5D/3D packaging, and photonic design.

See the Latest Multiphysics Signoff Technology

Did you know? Ansys delivers the industry’s broadest range of foundry-certified golden signoff tools for semiconductor design, electronic design, and full system design.

Stop by our booth to see the latest advances in Power Integrity, Thermal Analysis, Electromagnetics, and Photonics for semiconductor and board designers. Our technical experts are available to answer your questions. Or you can schedule a meeting in our booth. 

Grab a seat in our booth theater featuring short presentations by our customers, partners, and technologists on a variety of topics at regular intervals during the exhibit hours. While there, you just might pick up a unique NFT.

Learn From the Experts at DAC

We’re participating in the DAC Pavilion Roundtable Discussion on “Bespoke Silicon“ where industry experts look at the technical and business implications of companies increasingly turning to tailored silicon solutions to differentiate their key products.

Listen to Ansys customers present their actual semiconductor projects during DAC’s Engineering Tracks and Poster Sessions, listed below.

WHAT ARE THE BIG OPPORTUNITIES IN THE NEXT RENAISSANCE OF EDA?

Research Panel: Tuesday, July 12th, 3:30pm-5:30pm PDT
Prith Banerjee, CTO, Ansys

NEW DIRECTIONS IN SILICON SOLUTIONS

Engineering Track: Tuesday, July 12th, 3:30pm-5:00pm PDT
Norman Chang, Fellow & Chief Technologist, Ansys

BESPOKE SILICON – TAILOR-MADE FOR MAXIMUM PERFORMANCE

DAC Pavilion Panel: Wednesday, July 13th, 2:00pm-2:45pm PDT
John Lee, VP & GM, Electronics & Semiconductors, Ansys

About Ansys

If you’ve ever seen a rocket launch, flown on an airplane, driven a car, used a computer, touched a mobile device, crossed a bridge or put on wearable technology, chances are you’ve used a product where Ansys software played a critical role in its creation. Ansys is the global leader in engineering simulation. Through our strategy of Pervasive Engineering Simulation, we help the world’s most innovative companies deliver radically better products to their customers. By offering the best and broadest portfolio of engineering simulation software, we help them solve the most complex design challenges and create products limited only by imagination. Founded in 1970, Ansys is headquartered south of Pittsburgh, Pennsylvania, U.S.A. Visit www.ansys.com for more information.

Ansys and any and all ANSYS, Inc. brand, product, service and feature names, logos and slogans are registered trademarks or trademarks of ANSYS, Inc. or its subsidiaries in the United States or other countries. All other brand, product, service and feature names or trademarks are the property of their respective owners.

Also Read:

Bespoke Silicon is Coming, Absolutely!

Webinar Series: Learn the Foundation of Computational Electromagnetics

5G and Aircraft Safety: Simulation is Key to Ensuring Passenger Safety – Part 4


Radiodays Europe: Emotional Keynote

by Roger C. Lanctot on 05-21-2022 at 6:00 am


One doesn’t expect to get emotional at the kickoff keynote for an industry event, but Radiodays Europe 22 flipped the script with live music and a bulletin from Ukrainian broadcasters beamed in from a bunker in Ukraine. The bunker broadcast followed speeches from Swedish and Finnish broadcasting executives including Cille Benko, director general and CEO of Swedish Radio.

Benko noted in her comments that the local broadcaster in Sweden went live eight minutes after the start of Russia’s invasion at 4 a.m. on Feb. 24th, with a 48-hour live broadcast of bulletins from the front. She reported a boost of up to half a million radio listeners out of a total population of 10M, reflecting the power of live radio broadcasts in such emergency circumstances.

Benko’s insight was made even more powerful by the fact that both print and television media have seen a steady audience erosion. In the current environment, radio is shining as a source of immediate and trusted information.

The choice of Malmo, Sweden, for the Radiodays Europe 22 event was particularly prescient given the announced intent of Sweden and Finland to join the North Atlantic Treaty Organization (NATO) in the wake of the Ukrainian invasion. Benko gave a strong boost to the notion of radio as a live and local medium, calling for the broadcasters in attendance to leverage technology for more content creation outside the studio and for taking advantage of podcasts and on-demand engagement with listeners.

Far from being passe or secondary to video content, Benko pronounced audio as ascendant and with decided advantages over other forms of content. Helsinki-based Stefan Moller, President of the Association of European Radio, added to the emotional undercurrent with his own comments regarding the urgency of radio’s role in the current environment.

But the most powerful message came from the subterranean Ukrainians, Andriy Taranov and Dmytro Khorkin, representing Ukrainian Public Broadcasting, who joined the event via video to thank their international supporters and describe their ongoing efforts to broadcast via all means available, including mobile apps. They also noted that only AM broadcasts were currently available in Mariupol.

Radio advocates have long noted radio’s ability to endure and deliver critical information in times of crisis such as earthquakes and hurricanes, when cellular and Internet communications are often knocked offline. The Radiodays Europe keynote highlighted the extent to which political events can reshape the media environment and alter the conventional wisdom governing public perceptions. There are few positive messages to be taken from the disastrous invasion of Ukraine by Russia, but one such message is the enduring power and relevance of broadcast radio.

SOURCE: Dmytro Khorkin and Andriy Taranov of Public Broadcasting Company of Ukraine speaking via video to Radiodays Europe 22 opening session in Malmo, Sweden.

Also Read:

Why Traceability Now? Blame Custom SoC Demand

Taxis Don’t Have a Prayer

The Jig is Up for Car Data Brokers


Podcast EP81: The Future of Neural Processing with Quadric’s Steve Roddy

by Daniel Nenni on 05-20-2022 at 10:00 am

Dan is joined by Steve Roddy, chief marketing officer of Quadric, a leading processor technology intellectual property (IP) licensor. Roddy brings more than 30 years of marketing and product management expertise across the machine learning (ML), neural network processor (NPU), microprocessor, digital signal processor (DSP) and semiconductor IP industries.

Dan and Steve discuss the current state of AI deployment, today and tomorrow. Steve provides an overview of the products being developed by Quadric – how they fit today and a bit about where the company will take the industry tomorrow.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Intel to present Intel 4 process at the VLSI Technology Symposium

Intel to present Intel 4 process at the VLSI Technology Symposium
by Scotten Jones on 05-20-2022 at 8:00 am

VLSI Symposium 2022 SemiWiki 1

The VLSI Symposium on Technology & Circuits will be held in Hawaii from June 12th to June 17th. You can register for the conference here.

The tip sheet for the conference has been released and one thing that caught my eye is some data from the Intel 4 paper that Intel will be presenting at the conference.

Intel's old roadmap had 14nm, 10nm, and then 7nm processes, with 7nm being the first EUV-based process and providing a 2x density improvement over 10nm. Intel eventually updated its roadmap to be more consistent with the numbering schemes used by Samsung and TSMC.

Intel has several versions of its 10nm process: the original version (or two) and then the SuperFin and Enhanced SuperFin versions. Under the new scheme, Intel's 10nm Enhanced SuperFin version became Intel 7, and the former 7nm process became Intel 4.

Intel 10nm has a transistor density of approximately 100 million transistors per millimeter squared; that is consistent with the density of Samsung's and TSMC's 7nm processes. I also believe Intel's Enhanced SuperFin process has performance as good as or better than either foundry's 7nm process. Renaming Intel's 10nm Enhanced SuperFin to Intel 7 therefore gives it a designation more consistent with the foundry numbers.

When Intel announced Intel 4, they said it would provide a 20% performance-per-watt improvement and a significant density improvement, but they didn't provide a number. I thought this might mean they were relaxing the 2x density target, but the tip sheet disclosed that it is still 2x relative to Intel 7. This would put the density between TSMC's 5nm and 3nm processes, so Intel 4 is once again a name consistent with the foundry naming convention.

Does this mean Intel 4 will be around 200 million transistors per millimeter squared? This is actually a less straightforward question than you might think. When companies disclose dimensions for their processes, they often disclose values that are smaller than what is seen in standard cells. For example, TSMC says their 7nm process has a 54nm contacted poly pitch (CPP), but our strategic partner TechInsights measured 57nm in standard cells on actual designs. When we characterize a process, we have standardized on using the densest standard cell seen on an actual part (once parts are available for analysis). TechInsights first saw Intel 10nm parts in 2018, in what TechInsights referred to as generation 1.

Generation 1 had a 54nm CPP, consistent with what Intel claims. TechInsights saw generation 2 parts in 2019 that also had a 54nm CPP (the fins were taller than generation 1, suggesting it was a new generation). When Intel introduced the SuperFin version of 10nm, they added an optional 60nm CPP for high-performance cells. TechInsights analyzed these parts (generation 3) and saw both 54nm and 60nm CPP cells. Based on our convention, this still works out to approximately 100 million transistors per millimeter squared. Where this gets interesting is the recent analysis TechInsights did on the Enhanced SuperFin process (10nm generation 4, now known as Intel 7). This process also has an optional 60nm CPP, but interestingly, in the standard cell logic TechInsights saw only the 60nm CPP, no 54nm CPP, and a taller track height. This results in a calculated density of approximately 60 million transistors per millimeter squared. So is Intel 4 going to be 200 million transistors per millimeter squared (100 x 2) or 120 million transistors per millimeter squared (60 x 2)?

My belief is it will be 200 million transistors per millimeter squared, but it will be interesting to see how much of an actual design utilizes that density.

There is more data in the tip sheet to help answer this. The tip sheet discloses that the CPP is 50nm and the minimum metal pitch is 30nm. Current leading-edge processes all use a single diffusion break, so we will assume that here as well. The only remaining question is track height: if I assume a 1-fin-per-cell, 5-track cell, then the density is right around 200 million transistors per millimeter squared. A single-fin cell would likely require aggressive performance enhancements to meet Intel's performance requirements, and there might also be other design-technology co-optimization in the process. A 5-track cell is possible for FinFETs without a buried power rail, so this could be a solution.
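The back-of-envelope density math above can be sketched in a few lines. This is my own rough approximation, not Intel's or TechInsights' methodology: it assumes logic density can be estimated as the four transistors of a NAND2 cell divided by its area, with the cell taken to be 3 CPP wide (two gates plus a single diffusion break) and track-height times minimum metal pitch tall:

```python
def mtr_per_mm2(cpp_nm, m2p_nm, tracks):
    """Rough logic density in millions of transistors per mm^2.

    Assumes a NAND2 cell (4 transistors) that is 3 contacted poly
    pitches wide (2 gates + 1 single diffusion break) and
    tracks * minimum-metal-pitch tall.
    """
    cell_area_nm2 = 3 * cpp_nm * (tracks * m2p_nm)
    transistors_per_nm2 = 4 / cell_area_nm2
    return transistors_per_nm2 * 1e12 / 1e6  # nm^2 -> mm^2, then to millions

# Intel 4 tip-sheet numbers: 50nm CPP, 30nm minimum metal pitch,
# plus the assumed 5-track cell height
print(round(mtr_per_mm2(cpp_nm=50, m2p_nm=30, tracks=5)))  # ~178, i.e. "right around 200"
```

For comparison, plugging in the 54nm CPP and a roughly 7.5-track cell at the roughly 36nm minimum metal pitch of early Intel 10nm parts lands near the 100 million transistors per millimeter squared figure quoted above, which is why a 5-track cell at a 50nm CPP roughly doubles it.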

It will be interesting to see what other data is included in the full paper. The fact that Intel is giving this paper does add additional weight to Intel being on track to introduce Intel 4 late this year.

SemiWiki blogger Tom Dillinger will be attending the event so you can read more from him after the event.

Also read:

The Lost Opportunity for 450mm

Intel and the EUV Shortage

Can Intel Catch TSMC in 2025?


CEO Interview: Vaysh Kewada of Salience Labs

CEO Interview: Vaysh Kewada of Salience Labs
by Daniel Nenni on 05-20-2022 at 6:00 am

Salience Vaysh Kewada

Vaysh Kewada is cofounder and CEO at Salience Labs, a company developing an ultra high-speed multi-chip processor that packages a photonics chip together with standard electronics to enable exascale AI. Salience is funded by Oxford Sciences Enterprise, Cambridge Innovation Capital, Arm-backed Deeptech Labs, former Dialog Semiconductor CEO Jalal Bagherli and former Temasek board member Yew Lin Goh. Prior to launching Salience Labs, Vaysh worked at Oxford Sciences Enterprises, a $745M VC fund focused on deep-tech investments. Prior to that, she was a management consultant at McKinsey & Company. Vaysh holds undergraduate and Master's degrees in Physics from Imperial College London, where her thesis focused on genetic algorithms.

Tell us about Salience Labs?
Salience Labs was spun out of Oxford and Münster universities in 2021 to commercialise an ultra-high-speed multi-chip processor that packages a photonics chip together with standard electronics. By using light to execute operations, we can deliver massively parallel processing performance – bringing ultra-high speed compute to a wide array of new and existing AI processes and applications.

The compute requirements of AI double every 3-4 months, as the world needs ever-faster chips to grow AI capability. The current semiconductor industry can’t keep pace with this demand. What’s required now is not further incremental innovations on transistor technology. If we are to realise the tremendous potential of AI, nothing short of a paradigm shift in the way we compute will do. One that delivers an immediate step change in performance and speed, while also offering a long-term future roadmap of scaling improvements.

Multi-chip processors, ones that package together several platform technologies, are that step-change, allowing us to package electronics together with silicon photonics and to move compute from electronics to the realm of light. By using light to execute operations, it's possible to achieve massively parallel performance and deliver high-throughput, low-latency matrix maths, which sits at the root of almost all AI applications. And it's possible to do this with clocking speeds in the tens of GHz, where currently even the most cutting-edge chips are limited to just 2-3 GHz.

Why was Salience Labs founded?
Salience was founded with the vision of creating an exa-scale processor, by packaging a photonics chip together with standard electronics. The technology is based on decades of research at University of Oxford and Münster University in Germany.

The key inventors and researchers of the technology: Professor Wolfram Pernice, Professor Harish Bhaskaran and Dr. Johannes Feldmann, are co-founders in the company, giving Salience Labs significant depth of knowledge in this field.

What makes Salience Labs technology unique?
While other photonic chip companies execute operations in the phase of light, we use a proprietary amplitude-based approach to photonics, resulting in modular, dense computing chips clocking at tens of GHz. It also allows for high levels of parallelization by using different wavelengths of light to send many calculations through the chip. Salience uses a multi-chip design, with the photonic processing mapped directly on top of the Static Random Access Memory (SRAM). This novel 'on-memory compute' architecture allows the fast compute in the photonic domain to be fully utilized, delivering an exceedingly dense computing chip without having to scale the photonics chip to large sizes. This architecture can be adapted to the application-specific requirements of different market verticals, making it ideal for realising AI inference use-cases in communications, robotics, vision systems, healthcare and other data workloads.

How has the company evolved since you founded it?
We originally spun out of the University of Oxford and the University of Münster in 2021 and have just closed our seed round of $11.5 million from a number of leading VCs, including Cambridge Innovation Capital, Oxford Science Enterprises and Arm-backed Deeptech Labs, plus some leading names in the semiconductor industry, including former Dialog Semiconductor CEO Jalal Bagherli and Yew Lin Goh. Since closing our seed round, our focus has been on the tape-out of our next test chip, developing our software models and packaging solutions. We are also building relationships with customers across a range of market verticals.

You are participating in the Silicon Catalyst incubator programme. What has been the impact on the business?
We joined the Silicon Catalyst programme in 2021, right after spinning out from Münster and Oxford Universities. The greatest benefit is the access it gives us to advisors, individuals who have made a significant impact on the global semiconductor industry. In fact, we met our chairman Dan Armbrust, a Silicon Catalyst co-founder and board director, through the programme. Through those advisors, we gained highly valuable commercial introductions to foundries, IP providers, and EDA providers at a very early stage of the company. It has given Salience Labs a commercial jump start. For example, we've just closed our seed round but we're already working with production-level foundries on the fabrication of our next test chip. Silicon Catalyst has been a tremendous accelerator for our business.

What can we hope to see from Salience Labs in the future?
We’re at a very interesting point in time where the industry is recognising the potential of multi-chip processors to solve the tremendous processing bottleneck currently hampering AI growth. Salience Labs’ technology has the potential for breakthrough performance and power capability beyond what the established CMOS roadmap offers. We’re talking to customers across a range of market verticals who are excited about the performance improvements silicon photonics will offer and the new AI processes and applications this will enable. We welcome any additional approaches from potential customers who are interested in understanding the capabilities of silicon photonics.

Also read:

CEO Interview: Chuck Gershman of Owl AI

CEO Interviews: Dr Ali El Kaafarani of PQShield

CEO Interview: Dr. Robert Giterman of RAAAM Memory Technologies


Joseph Sawicki of Siemens EDA at User2User

Joseph Sawicki of Siemens EDA at User2User
by Daniel Payne on 05-19-2022 at 10:00 am

Joseph Sawicki

I attended the annual user group meeting called User2User in Santa Clara this year, hosted by Siemens EDA, with 51 presentations by customers in 11 tracks, and keynotes during each lunch hour from semiconductor executives. Joseph Sawicki, Executive VP, IC Segment, at Siemens EDA presented on a Tuesday, along with Prashant Varshney, Microsoft, and Mahesh Tirupattur, Analog Bits. This blog focuses on what I heard from Mr. Sawicki.

The host Harry Foster said that each keynote was like a TED Talk, and they certainly lived up to that. Joseph's topic was "From ICs to Systems – New Opportunities for the Semiconductor Industry." Digitalization is driving change across all industries: aerospace, automotive, consumer, electronics and semiconductor, energy, heavy industry, medical, marine, and industrial.

There is now pervasive AI enablement in sensors, edge computing, 5G/wireless communications, cloud, and data centers. The share of semiconductors in electronic systems over the 1992-2014 period was 16%, but it has now grown to about 24%, and predictions show that electronic systems will reach $3.2T in revenue by 2025, so it's quite the growth market.

Systems companies are becoming IC designers, and there are many examples: Apple, Amazon, Google, ZTE, Tesla, Bosch, Huawei, Facebook. Foundries have seen systems companies grow from just 1% of revenue in 2011 to 21% in 2021; that's big growth, and Apple has become the number one customer of TSMC. At the Hot Chips conference in 2006, just 16% of the accepted papers came from systems companies, while by 2021 that number had grown to 33%. So the systems companies are driving innovation in chip design.

Consider the history of Apple: their first 64-bit application processor was introduced way back in 2013, but why do that? Even in 2022 you still don't need 64 bits for the larger RAM address space. The answer was performance: an Arm core can run in either 32-bit or 64-bit mode, and running in 64-bit mode delivers 31% better performance.

Apple A7, Source: Chipworks

There are some new trends in automotive, such as Cars as a Service, where Volvo plans to generate 50% of its revenue through services by 2025. Tesla provides OTA (Over The Air) update services and new feature upgrades, adding revenue after the initial sale. Gartner Group reports that half of the top 10 auto OEMs will be designing some of their own chips by 2025, with 7 of the 10 already announced by 2021. Ford and GlobalFoundries will partner on IC design in order to smooth out the supply chain issues that have hurt the industry since the COVID pandemic started in 2020.

The gradual electrification of vehicles is a major driver of new IC design starts, and semiconductor revenue per vehicle should reach $500 per car by 2028, making this a $24B market. 5G communication will be important to automotive for OTA updates and services, and is growing 3X per year.

The total number of sensors connected to the Internet was 1.6B in 2015, exploding to 29.6B sensors by 2025; that's big growth in video, data, and data centers.

Source: IC Insights

Semiconductor content inside of Data Centers is growing to $242B by 2030, which is a 14% CAGR, per IBS, Sept. 2021.

With all of these demands on semiconductors in growth markets, how is our industry going to meet them? Joseph summarized three trends that will meet the demand:

  1. Technology scaling – new nodes and 3D IC
  2. Design scaling – silicon integration
  3. System scaling – digital twin, verifying a device against spec and the full SW stack with apps

For technology scaling we can look at Moore's Law, which is not quite dead: consider the A-series from Apple over time. From 2013 to 2021, transistor counts grew from 1 billion to 15 billion, so density is still scaling well at 15X. Dennard scaling, however, has died, so clock rates are not improving by a similar 15X over that 8-year time frame. Looking at single-core CPU performance, Geekbench scores ranged from 269 to 1,734, so performance is still growing, if more slowly. Even the foundries have another 8 years of process technology growth in their roadmaps.
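As a quick sanity check on those ratios (my arithmetic, not figures from the talk), the annualized growth rates implied by the 2013-2021 A-series numbers can be computed directly as compound annual growth:

```python
# Annualized growth implied by the 2013-2021 Apple A-series figures cited above
years = 8

transistor_ratio = 15e9 / 1e9   # 1 billion -> 15 billion transistors
geekbench_ratio = 1734 / 269    # single-core Geekbench score range

# Compound annual growth rate: ratio ** (1/years) - 1
cagr_transistors = transistor_ratio ** (1 / years) - 1
cagr_performance = geekbench_ratio ** (1 / years) - 1

print(f"Transistor count: {cagr_transistors:.0%} per year")  # roughly 40% per year
print(f"Single-core perf: {cagr_performance:.0%} per year")  # roughly 26% per year
```

The gap between the two rates is Dennard scaling's death in miniature: transistor budgets keep compounding faster than single-core performance does.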

Monolithic integration is growing, yes, but 3D design is coming along too, combined with innovative packaging. System-in-Package is a new trend, and system and design technology co-optimization is needed to be successful.

On design scaling, there are charts that claim 7nm designs cost $280M, but is that reality? That number sounds too big, yet the trend line is real: smaller nodes drive up design and verification costs. One method to counter that increase is to move from RTL up to C-level design for system designs. Consider the example of NVIDIA, where a small team of just 10 engineers taped out a new deep-learning inference accelerator chip in 6 months using C++ with an HLS (High-Level Synthesis) methodology, as reported at Hot Chips in August 2019. Google is another systems company using HLS to help manage SoC design costs.

For system scaling the idea is to create a true digital twin. One example is PAVE360, a way to validate automotive models with traffic, people and vehicles. You can run this digital twin to validate virtual models, run software for your ADAS system, model power, and model the vehicle itself (powertrain, chassis, seating, the effect of the road on occupants). It's a way to validate safely before production starts.

The final topic was lifecycle management. Consider a data center with hundreds of thousands of blades: you can monitor all of the blades in real time, debug any reliability issues in that data center, and analyze all of the embedded sensor data, literally tracking the health of the data center.

Summary

A trillion-dollar semiconductor industry is fast approaching, so this is an exciting time to be part of the EDA industry that enables all of this growth, as systems designers take on new design starts, AI spreads everywhere, electrification of vehicles continues, and digital twins are adopted. The mood of the presentation was quite upbeat, and it was well received by the audience.

Related Blogs