
Managing Service Level Risk in SoC Design
by Bernard Murphy on 06-21-2023 at 6:00 am


Discussion on design metrics tends to revolve around power, performance, safety, and security. All of these are important, but there is an additional performance objective a product must meet, defined by a minimum service level agreement (SLA). A printer display may work fine most of the time yet intermittently corrupt the image. Or the nav system in your car may fail to signal an upcoming turn until after you have passed it. These are traffic (data) related problems. Conventional performance metrics only ensure that the system will perform as expected under ideal conditions; SLA metrics set a minimum performance expectation within specified traffic bounds. OEMs ultimately care about SLAs, not STAs. Meeting (and defining) an SLA is governed by interconnect design and operation.

What separates SLA from ideal performance?

Ideally, each component could operate at peak performance, but they share a common interconnect, limiting simultaneous traffic. Each component in the design has a spec for throughput and latency – perhaps initially frames/second for computer vision, AI recognition, and a DDR interface, mapping through to gigabytes/second and clock cycles or milliseconds in a spreadsheet. An architect’s goal is to compose these into system bandwidths and latencies through the interconnect, given expected use cases and the target SLA.

Different functions generally don’t need to run as fast as possible at the same time; between use cases and the SLA, an architect can determine how much she may need to throttle bandwidths and introduce delays to ensure smooth total throughput with limited stalling. That analysis triggers tradeoffs between interconnect architecture and SLA objectives. Adding more physical paths through the interconnect may allow faster throughput in some cases while increasing device area. Ultimately the architect settles on a compromise defining a deliverable SLA – a baseline to support a minimum service level while staying within PPA goals. This step is a necessary precursor to defining an SLA but not sufficient; the definition must still factor in potential traffic.

Planning for unpredictable traffic

Why not run simulations with realistic use cases? You will certainly do that for other reasons, but ultimately, such simulations will barely scratch the surface of SLA testing across an infinite range of possibilities. More useful is to run SystemC simulations of the interconnect with synthetic initiators and targets. These don’t need to be realistic traffic models for the application, just good enough to mimic challenging loads. According to Andy Nightingale (VP of product marketing at Arteris), you then turn all the dials up to some agreed level and run. The goal is to understand and tune how the network performs when heavily loaded.
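
To make the idea concrete, here is a toy Python analogue of that synthetic-traffic approach (not the Arteris SystemC flow itself): a few synthetic initiators with made-up injection rates share one link, and we observe what happens to latency once the offered load exceeds the link capacity.

```python
# Toy model of synthetic initiators stressing a shared interconnect link.
# All names and rates are illustrative; a real flow would use SystemC models.
import random

LINK_CAPACITY = 1          # packets the shared link can serve per cycle
SIM_CYCLES = 10_000

class Initiator:
    def __init__(self, name, inject_prob):
        self.name = name
        self.inject_prob = inject_prob   # probability of issuing a packet each cycle
        self.latencies = []

def run(initiators):
    queue = []                           # FIFO queue at the shared link: (arrival_cycle, initiator)
    for cycle in range(SIM_CYCLES):
        for ini in initiators:
            if random.random() < ini.inject_prob:
                queue.append((cycle, ini))
        for _ in range(LINK_CAPACITY):   # serve up to LINK_CAPACITY packets this cycle
            if not queue:
                break
            arrival, ini = queue.pop(0)
            ini.latencies.append(cycle - arrival)
    for ini in initiators:
        print(f"{ini.name}: served {len(ini.latencies)} packets, "
              f"worst latency {max(ini.latencies)} cycles")

random.seed(1)
# "Turn the dials up": total offered load (0.5 + 0.4 + 0.3) exceeds link capacity
run([Initiator("vision", 0.5), Initiator("ai", 0.4), Initiator("ddr", 0.3)])
```

Even this crude model shows the behavior the architect is looking for: once the dials push total demand past capacity, queueing delay grows without bound unless something throttles or prioritizes the traffic.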

An SLA will define incoming and outgoing traffic through minimum and maximum rates, also allowing for streams which may burst above maximum limits for short periods. The SLA will typically distinguish different classes of service, with different expectations for bandwidth-sensitive and latency-sensitive traffic. Between in-house experience with the capabilities of the endpoint IPs and these simulations, the architect should be able to converge on an optimal topology for the interconnect.

The next step is to support dynamic adaptation to traffic demands. In a NoC, like FlexNoC from Arteris, both the network interface units (NIUs) connecting endpoint IPs and the switches in the interconnect are programmable, allowing arbitration to dynamically adjust to serve varying demands. A higher-priority packet might be pushed ahead of a lower-priority packet or routed through a different path if the topology allows for that option, or a path might be reserved exclusively for a certain class of traffic. Other techniques are also possible, for example, adding pressure or sharing a link to selectively allow high priority low-latency packets to move through the system faster.
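
As a rough illustration of class-of-service arbitration (the class names and the simple two-level scheme below are invented for this example, not FlexNoC’s actual programming model), a switch can be modeled as a priority queue that always forwards latency-class packets ahead of bandwidth-class packets:

```python
# Minimal sketch of priority arbitration at a switch with two traffic classes.
import heapq
from itertools import count

LATENCY_CLASS, BANDWIDTH_CLASS = 0, 1   # lower value = higher priority
_seq = count()                          # tiebreaker preserves FIFO order within a class

def push(queue, priority, payload):
    heapq.heappush(queue, (priority, next(_seq), payload))

def arbitrate(queue):
    """Return the next packet the switch should forward, or None if idle."""
    return heapq.heappop(queue)[2] if queue else None

q = []
push(q, BANDWIDTH_CLASS, "bulk DMA beat 0")
push(q, BANDWIDTH_CLASS, "bulk DMA beat 1")
push(q, LATENCY_CLASS, "CPU read request")   # arrives later, leaves first
print([arbitrate(q) for _ in range(3)])
# ['CPU read request', 'bulk DMA beat 0', 'bulk DMA beat 1']
```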

No design can guarantee continued high performance under excessive or bursty traffic, say, a relentless stream of video demands. To handle such cases, the architect can add regulators to gate demand, allowing other functions to continue operating in parallel at some acceptable level (again, defined by the SLA).
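
A regulator of this kind can be pictured as a token bucket: a sustained rate plus a bounded burst allowance, with any demand beyond that held back. The sketch below is a minimal model with illustrative numbers, not the actual FlexNoC regulator implementation.

```python
# Minimal token-bucket sketch of a rate regulator at a network interface unit.
class Regulator:
    def __init__(self, rate_per_cycle, burst_tokens):
        self.rate = rate_per_cycle     # sustained rate allowed by the SLA
        self.burst = burst_tokens      # short-term burst allowance
        self.tokens = burst_tokens

    def tick(self):
        # replenish once per cycle, capped at the burst allowance
        self.tokens = min(self.burst, self.tokens + self.rate)

    def admit(self, packets_requested):
        # gate demand: only whole packets covered by available tokens pass
        granted = min(packets_requested, int(self.tokens))
        self.tokens -= granted
        return granted

# A greedy initiator asking for 4 packets/cycle is throttled to ~1/cycle once
# its burst credit is exhausted, leaving link capacity for other functions.
reg = Regulator(rate_per_cycle=1.0, burst_tokens=8)
for cycle in range(12):
    granted = reg.admit(4)
    print(f"cycle {cycle}: granted {granted}")
    reg.tick()
```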

In summary, while timing closure for ideal performance is still important, OEMs care about SLAs. Meeting those expectations must be controlled through interconnect design and programming. Arteris and their customers have been refining the necessary Quality of Service (QoS) capabilities offered in their FlexNoC product line for many years. You can learn more HERE.


DDR5 Design Approach with Clocked Receivers
by Daniel Payne on 06-20-2023 at 10:00 am


At DesignCon 2023, Micron gave a presentation all about DDR5 design challenges, like the need for a Decision Feedback Equalizer (DFE) inside the DRAM. Siemens EDA and Micron teamed up to write a detailed 25-page white paper on the topic, and I was able to glean the top points for this much shorter blog. The DDR5 specification came out in 2020 with a data rate of 3200 MT/s, requiring equalization (EQ) circuits to account for the channel impairments.

DFE is designed to overcome the effects of Inter-Symbol Interference (ISI), and the designers at Micron had to consider the clocking, Rx eye evaluation, Bit Error Rate (BER) and jitter analysis in their DRAM DFE. IBIS-AMI models were used to model the DDR5 behavior along with an EDA tool statistical simulation flow.

Part of the DDR5 specification is a four-tap DFE inside the DRAM’s Rx; the DFE uses previously decided bits to remove their ISI contribution from the current bit. The DFE first applies a voltage offset to cancel that ISI, then the slicer quantizes the current bit as high or low.

Typical 4-tap DFE from DDR5 Specification
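
To illustrate the principle (with made-up tap weights and channel ISI values, not Micron’s), here is a minimal Python sketch of a 4-tap DFE slicer that subtracts the estimated ISI of the last four decisions before quantizing each bit:

```python
# Minimal 4-tap DFE sketch: cancel post-cursor ISI using past decisions.
# Channel ISI, noise, and taps are illustrative values only.
import numpy as np

def dfe_slice(rx_samples, taps):
    """Decide each bit after subtracting ISI estimated from past decisions."""
    decisions = []
    for sample in rx_samples:
        # voltage correction from the last len(taps) decisions (+1/-1)
        correction = sum(t * d for t, d in zip(taps, reversed(decisions[-len(taps):])))
        decisions.append(1 if (sample - correction) > 0 else -1)
    return decisions

rng = np.random.default_rng(0)
bits = rng.choice([-1, 1], size=1000)
post_cursor = [0.45, 0.3, 0.2, 0.1]             # assumed channel post-cursor ISI
rx = bits.astype(float)
for k, h in enumerate(post_cursor, start=1):    # add ISI from previous bits
    rx[k:] += h * bits[:-k]
rx += rng.normal(0, 0.05, size=rx.shape)        # a little random noise

no_dfe = [1 if s > 0 else -1 for s in rx]
with_dfe = dfe_slice(rx, taps=post_cursor)      # ideal taps = channel post-cursors
print("errors without DFE:", int(np.sum(np.array(no_dfe) != bits)))
print("errors with DFE   :", int(np.sum(np.array(with_dfe) != bits)))
```

With the ISI terms summing to more than the main cursor, the plain slicer makes errors while the DFE-corrected slicer recovers the bits cleanly, which is exactly the eye-opening effect the DRAM DFE provides.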

With DDR5 the clocking is a differential strobe signal (DQS_t, DQS_c), forwarded along with the single-ended data signals (DQ) to the Rx. The DQS signal is buffered and then fanned out to the clock inputs of up to eight DQ latches, introducing a clock tree delay.

DQS Clock tree delay

The maximum Eye Height is 95 mV and the maximum Eye Width is 0.25 Unit Interval (UI), or just 78.125 ps. Measuring a BER of 1e-16 is only practical with a statistical approach.

IBIS models have been used across many generations of DDR systems to enable end-to-end system simulation, yet with DDR5 adding EQ features and BER eye-mask requirements, a new simulation model and analysis approach is needed. IBIS-AMI modeling provides fast and accurate SI simulations that are portable across EDA tools while protecting the IP of the IO details. IBIS-AMI supports statistical and bit-by-bit simulation modes, and the statistical flow is shown below.

Statistical Simulation Flow

The result of this flow is a statistical eye diagram that can be used to measure eye contours at different BER levels.

DDR5 Example Simulation

A DDR5 simulation was modeled in the HyperLynx LineSim tool, with the DQ and DQS IBIS-AMI models provided by Micron, and here’s the system schematic.

DDR5 system schematic

The EDA tool captures the waveform at specified clock times; timing uncertainty in those clock times is transferred into the resulting output eye diagram, reconstructing the voltage and timing margins seen before quantization by the slicer and its clock.

Variable clock times

Both DQS and DQ timing uncertainty impact the eye diagram timing margins in a similar way. Figure A shows jitter injected onto the DQ signal, and Figure B has jitter injected onto the DQS signal. DQ (red) and DQS (green) jitter are shown together in Figure C.

Timing bathtub curve

Sinusoidal jitter effects can also be modeled on the DQ and DQS signals in various combinations to see the BER and timing bathtub curve results. DDR5 uses Rj, Dj and Tj measurements instead of period and cycle-to-cycle jitter measurements. The impact of Rx Rj values on the BER plots can be simulated, along with the timing bathtub curves.

Rx Rj on data, versus data and clock combined
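
One commonly used way to combine these terms (assumed here for illustration; the white paper itself relies on the statistical IBIS-AMI flow) is the dual-Dirac budget Tj(BER) = Dj + 2·Q(BER)·Rj. A quick sketch with invented Rj/Dj numbers shows how the budget tightens between BER 1e-12 and 1e-16:

```python
# Dual-Dirac style jitter budget: Tj = Dj + 2*Q(BER)*Rj_sigma.
# Rj and Dj values below are illustrative, not Micron's measured numbers.
import numpy as np
from scipy.special import erfcinv

def q_factor(ber):
    # BER = 0.5 * erfc(Q / sqrt(2))  =>  Q = sqrt(2) * erfcinv(2 * BER)
    return np.sqrt(2.0) * erfcinv(2.0 * ber)

UI_PS = 312.5            # one unit interval at 3200 MT/s
rj_sigma_ps = 1.0        # assumed random jitter, 1-sigma
dj_pp_ps = 20.0          # assumed deterministic jitter, peak-to-peak

for ber in (1e-12, 1e-16):
    q = q_factor(ber)
    tj = dj_pp_ps + 2.0 * q * rj_sigma_ps
    eye_width_ui = (UI_PS - tj) / UI_PS
    print(f"BER {ber:g}: Q = {q:.2f}, Tj = {tj:.1f} ps, "
          f"eye width ~ {eye_width_ui:.3f} UI")
```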

Going beyond Linear and Time-Invariant (LTI) modeling, the Multiple Edge Response (MER) technique uses a set of rising and falling edges. With a custom advanced IBIS-AMI flow it performs a statistical analysis on each MER edge, then superimposes the combined effect into an output eye diagram.

Bit-by-bit, advanced simulation results

Adding a Tx Rj value of 2% to the model yields even more realistic, degraded BER plot results.

Summary

Signal integrity effects dominate the design of a DDR5 system, so getting accurate results requires detailed modeling of all the new physical effects. The IBIS-AMI specification has been updated so that Rx AMI models can use a forwarded clock. Micron showed how they used a clocked DDR5 simulation flow to model the new effects, including non-LTI effects, achieving simulations with BER of 1e-16 and below.

Request and read the complete 25 page white paper online here.



Synopsys Expands Agreement with Samsung Foundry to Increase IP Footprint
by Kalar Rajendiran on 06-20-2023 at 6:00 am

Synopsys Samsung silicon wafer

Many credible market analysis firms are predicting the semiconductor market to reach the trillion dollar mark over the next six years or so. Just compare this to the more than six decades it took for the market to cross the $500 billion mark. The projected growth rate is incredible indeed and is driven by fast growing market segments such as high performance computing (HPC), mobile, client computing, and automotive electronics. The compute demand on systems has also been growing at unbelievable rates every couple of years. The tremendous growth in artificial intelligence (AI) driven systems and advances in deep learning neural network models have certainly contributed to this and pulled us into the “SysMoore Era.” And multi-die systems are becoming essential to address the system demands of the SysMoore Era.

Given the above trends, silicon IP is going to play an even more critical role in the future growth of the semiconductor market. Yesterday’s off-the-shelf IP is not going to cut it when it comes to the specific PPA requirements of various applications. It is all about differentiated IP for specific applications and processes. In the SysMoore Era, IP development strategy should be driven not only by looking forward to the next node, but also by looking at vertical market requirements, horizontally across process variants, and backwards to older nodes as multi-die systems enable the optimization of process technologies.

Last week, Synopsys announced an expanded agreement with Samsung Foundry to develop a broad portfolio of IP to reduce design risk and accelerate silicon success for automotive, mobile, and HPC markets, and multi-die designs as well. I had an opportunity to chat with John Koeter, senior vice president of product management and strategy for IP at Synopsys. My discussion focused on understanding how this agreement is different and the important role the supported market segments and multi-die systems trend played in arriving at an expanded agreement. Following is a synthesis of my discussion, highlighting the salient points.

Proactive Collaboration by Looking at Vertical Market Needs

Synopsys and Samsung Foundry have a long history of collaborating on IP development. Generally speaking, IP development in the past was driven by specific mutual customer demand. Given the compressed time-to-market demands of the SysMoore Era, customers cannot afford to wait through long development cycles after making specific IP requests. IP development needs to start proactively, based on anticipated future vertical market needs. And that is what Synopsys and Samsung Foundry are doing under this expanded agreement. They will analyze market segments and develop the needed IP to holistically address vertical market needs. For example, together they will consider what a next-generation ADAS chip, MCU, or mobile chip will look like and proactively develop IP to address those needs. IP will also be optimized according to the end application needs. For instance, PCIe IP for the HPC market will be optimized for minimum possible latency, whereas PCIe IP for the automotive market will be optimized for reliability over a wider temperature range.

For the automotive market specifically, Synopsys will optimize IP for Samsung’s 8LPU, SF5A and SF4A automotive process nodes to meet stringent Grade 1 or Grade 2 temperature and AEC-Q100 reliability requirements. The auto-grade IP for ADAS SoCs will include design failure mode and effect analysis (DFMEA) reports that can save months of development effort for automotive SoC applications.

Anticipating Multi-die Systems Requirements

As monolithic chip implementations give way to multi-die system implementations, it is no longer about just the next advanced process node. A multi-die system could have various dies in different process nodes and still deliver the performance and power requirements at a reduced cost compared to a monolithic implementation. This opens up the opportunity to consider creating advanced IP (say PCIe Gen6) for older process nodes to support I/O chiplets of a multi-die system. Synopsys and Samsung are proactively considering such opportunities and will develop a portfolio of advanced IP across many process nodes, as well as collaborate on developing high-speed UCIe IP for chip-to-chip communication.

Agreement Expansion Leading to Increase of IP Footprint

As a result of the above IP collaboration strategies, the availability of IP for Samsung Foundry processes is going to increase significantly. For customers, that is a significant uptick in access to IP in the post-Covid era, when clear supply chains are high on their requirements list. With this agreement, Synopsys IP available or in development for Samsung processes includes logic libraries, embedded memories, TCAMs, GPIOs, eUSB2, USB 2.0/3.0/3.1/4.0, USB-C/DisplayPort, PCI Express 3.0/4.0/5.0/6.0, 112G Ethernet, Multi-Protocol 16G/32G PHYs, UCIe, HDMI 2.1, LPDDR5X/5/4X/4, DDR5/4/3, SD3.0/eMMC 5.1, MIPI C/D PHY, and MIPI M-PHY G4/G5.

Synopsys’ Certified Design Flows Accelerate Time to Silicon Success

A broad portfolio of IP from a single vendor has multiple advantages, in both business and engineering terms. From an engineering perspective, for example, power grid or pin location misalignments when integrating various IP blocks are going to be less likely. Synopsys is also working very closely with Samsung on the EDA side to develop and certify various reference flows which should help accelerate time to silicon success.

To read the full press release, click here. For more information, contact Synopsys.

Also Read:

Requirements for Multi-Die System Success

An Automated Method to Ensure Designs Are Failure-Proof in the Field

Automotive IP Certification


Keynote Sneak Peek: Ansys CEO Ajei Gopal at Samsung SAFE Forum 2023
by Daniel Nenni on 06-19-2023 at 10:00 am


As one of the world’s leading chip foundries, Samsung occupies a vital position in the semiconductor value chain. The annual Samsung Advanced Foundry Ecosystem (SAFE™) Forum is a must-go event for semiconductor and electronic design automation (EDA) professionals. Ajei Gopal, President and CEO of Ansys, has the honor of delivering the opening keynote for this year’s SAFE Forum on June 28th at 10:15 a.m. in San Jose, California.

Ansys is the world leader in both system and electronic multiphysics simulation and analysis, with a strong reputation in the semiconductor market for the reliability and accuracy of its Ansys RedHawk-SC family of power integrity signoff products. Ajei’s keynote, “The 3Ps of 3D-IC,” draws from the company’s unique market position that encompasses chip, package, and board design. Leading semiconductor product designers have adopted 2.5D and 3D-IC packaging technologies that allow multiple, heterogeneous silicon die to be assembled or stacked in a small form-factor package. This provides huge advantages in performance, cost, and flexibility — but heightens analysis and design challenges, including thermal analysis, electromagnetic coupling, and mechanical stress/warpage. Samsung Foundry has been on the forefront of enabling 3D-IC with manufacturing innovations and design reference flows that include best-of-breed solutions like those offered by Ansys.

Learn how to Clear 3D-IC Hurdles

Ajei will present an executive perspective of the challenges facing multi-die chip and system designers. Ansys is a multibillion-dollar company with a deep technology background in an array of physics, from chip power integrity to thermal integrity, mechanical, fluidics, photonics, electromagnetics, acoustics, and many more. This broad portfolio gives Ansys a unique perspective of how 3D-IC technology is compressing traditional chip, package, and board design into a single, new, interlinked optimization challenge.

Ajei will explain how this new reality creates three sets of hurdles for chip design teams that threaten to slow the broader adoption of 3D-IC technology by the mainstream IC market. In response to these challenges, Ajei will present his “3Ps,” which suggest a program of thoughtful solutions for how the design community can tackle these obstacles and move 3D-IC design toward widespread adoption.

One of the 3Ps stands for partnerships, which are key to Ansys’ successful collaboration with Samsung Foundry. It is clear to any experienced observer of the EDA market that the complexity of today’s design challenges has grown beyond the ability of any one company to solve. This is just as true for semiconductor design tools as it is for the semiconductor manufacturing equipment industry. No one vendor delivers all the equipment used in a fab, and no one software vendor can meet all design tool requirements. The way forward is to engage deeply with ecosystem initiatives like SAFE and ensure that customers have access to best-in-class tools for every step of their design process.

Register for the Samsung Foundry Forum and SAFE Forum and join Ansys in fostering industry collaborations and partnerships to improve the capabilities of the semiconductor industry. Visit the Ansys booth at the SAFE exhibit (June 28 @ Signia by Hilton, San Jose, CA) to speak with EDA experts on 3D-IC design techniques and requirements.

Also Read:

WEBINAR: Revolutionizing Chip Design with 2.5D/3D-IC design technology

Chiplet Q&A with John Lee of Ansys

Multiphysics Analysis from Chip to System


Application-Specific Lithography: 28 nm Pitch Two-Dimensional Routing
by Fred Chen on 06-19-2023 at 6:00 am


Current 1a-DRAM and 5/4nm foundry nodes have minimum pitches in the 28 nm range. The actual 28 nm pitch patterns are one-dimensional active area fins (for both DRAM and foundry) as well as one-dimensional lower metal lines (in the case of foundry). One can imagine that, for a two-dimensional routing pattern, both horizontal and vertical lines would be present, not only at the 28 nm minimum pitch but also at larger pitches, for example, 56 or 84 nm (2x or 3x the minimum pitch, respectively). What are the patterning options for this case?

0.33 NA EUV

Current 0.33 NA EUV systems are unable to simultaneously image both horizontal and vertical lines at 28 nm pitch, as each orientation requires an incompatible dipole illumination (Figure 1). Hence, at least two exposures would be needed for a two-dimensional layout. In fact, even unidirectional 28 nm pitch could require double patterning [1].

Figure 1. Vertical lines require the X-dipole (blue) exclusively while the horizontal lines require the Y-dipole (orange) exclusively.

High-NA EUV

Planned high-NA (0.55 NA) EUV systems can image both horizontal and vertical 28 nm pitch lines simultaneously, but they run into a different problem for the 56 nm and 84 nm pitches. When the dipole illumination targets the 28 nm anchor pitch, the central obscuration removes the first diffraction order for the 56 nm pitch. The 56 nm pitch case essentially becomes the 28 nm pitch. Thus, it would have to be exposed separately with different illumination. The central obscuration also removes the first and second diffraction orders for the 84 nm pitch, causing sidelobes to appear in the intensity profile [2]. The sidelobes are valleys for the brightfield case, and peaks for the darkfield case (Figure 2).

Figure 2. Brightfield (red) and darkfield (purple) sidelobes in 84 nm pitch for 28 nm pitch dipole illumination with 0.55 NA. The first and second diffraction orders have been removed by the central obscuration of the pupil.
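
The pupil geometry behind this argument is easy to reproduce. The sketch below places a dipole pole for a 28 nm anchor pitch and reports where the diffraction orders of 28, 56, and 84 nm pitch land in pupil coordinates; the central obscuration radius of 0.2 is an assumed illustrative value, not the exact tool specification.

```python
# Which diffraction orders survive the central obscuration for each pitch?
WAVELENGTH = 13.5    # nm (EUV)
NA = 0.55
OBSCURATION = 0.2    # assumed central obscuration radius, in pupil coordinates
ANCHOR_PITCH = 28.0  # nm; the dipole is tuned to this pitch

# Place a pole so the 0th and +1st orders of the anchor pitch sit symmetrically
pole = -WAVELENGTH / (2 * ANCHOR_PITCH * NA)

for pitch in (28.0, 56.0, 84.0):
    step = WAVELENGTH / (pitch * NA)      # order spacing in pupil coordinates
    entries = []
    for m in range(-3, 6):
        pos = pole + m * step
        if abs(pos) > 1.0:
            continue                      # this order falls outside the pupil
        status = "obscured" if abs(pos) < OBSCURATION else "passes"
        entries.append(f"m={m:+d}: {pos:+.3f} ({status})")
    print(f"{pitch:.0f} nm pitch: " + "; ".join(entries))
```

With these assumptions the 56 nm pitch loses its first order to the obscuration, and the 84 nm pitch loses its first and second orders, consistent with the discussion above.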

These sidelobes lead to random photon numbers crossing the printing threshold around the sidelobe locations, resulting in stochastic defects (Figures 3 and 4).

Figure 3. 40 mJ/cm2 absorbed dose, 84 nm pitch, brightfield case. The dark spots in the orange space indicate locations of stochastic defects corresponding to the sidelobe valleys in Figure 2.

Figure 4. 40 mJ/cm2 absorbed dose, 84 nm pitch, darkfield case. The narrow orange lines are the result of sidelobe printing, corresponding to the sidelobe peaks in Figure 2.

DUV immersion lithography with SAQP and selective cuts

Surprisingly, the more robust method would involve DUV lithography, when used with self-aligned quadruple patterning (SAQP) and two selective cuts [3]. This scheme, shown in Figure 5, builds on a grid-based layout scheme developed by C. Kodama et al. at Toshiba (now Kioxia) [4].

Figure 5. Flow for forming a 2D routing pattern by SAQP with two selective cuts. One cut selectively etches the covered green areas (1st spacer), while the other selectively etches the covered purple areas (core/gap). The etched areas are refilled with hardmask (dark blue). The final pattern (orange) is made by etching both the remaining green and purple areas.

Of course, where available, EUV self-aligned double patterning (SADP) may also be used as an alternative to DUV SAQP, but the two selective etch exposures will still be needed. While SAQP adds an extra iteration of spacer (or other self-aligned) double patterning over SADP, this extra complexity is much less than the staggering infrastructure difference between EUV and DUV. Conceivably, players without EUV can continue to produce chips with two-dimensional interconnect patterns, at least down to ~25-26 nm pitch.

References

[1] D. De Simone et al., Proc. SPIE 11609, 116090Q (2021).

[2] F. Chen, Printing of Stochastic Sidelobe Peaks and Valleys in High NA EUV Lithography, https://www.youtube.com/watch?v=sb46abCx5ZY, 2023.

[3] F. Chen, Etch Pitch Doubling Requirement for Cut-Friendly Track Metal Layouts: Escaping Lithography Wavelength Dependence, https://www.linkedin.com/pulse/etch-pitch-doubling-requirement-cut-friendly-track-metal-chen/, 2022.

[4] T. Ihara et al., DATE 2016.

This article first appeared in LinkedIn Pulse: Application-Specific Lithography: 28 nm Pitch Two-Dimensional Routing 

Also Read:

A Primer on EUV Lithography

SPIE 2023 – imec Preparing for High-NA EUV

Curvilinear Mask Patterning for Maximizing Lithography Capability

Reality Checks for High-NA EUV for 1.x nm Nodes


Podcast EP168: The Extreme View of Meeting Signal Integrity Challenges at Wild River Technology with Al Neves
by Daniel Nenni on 06-16-2023 at 10:00 am

Dan is joined by Al Neves, Founder and Chief Technology Officer at Wild River Technology. Al has 30 years of experience in design and application development for semiconductor products and capital equipment focused on jitter and signal integrity. He is involved with the signal integrity community as a consultant, high-speed system-level design manager and engineer.

Dan explores the signal integrity challenges of high-performance design with Al. Wild River’s unique combination of process, products, and skills is explained by Al, along with the motivation for the company’s approach to addressing signal integrity. It turns out success demands an all-or-nothing approach across the entire design and development process. The best partner is an organization with the expertise and attitude to win, no matter what it takes.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Requirements for Multi-Die System Success
by Daniel Nenni on 06-16-2023 at 6:00 am

Synopsys Chiplet Report 2023

Chiplets continue to be a hot topic on SemiWiki, at conferences, and in white papers and webinars, and one of the most active chiplet-enabling vendors we work with is Synopsys. Synopsys is the #1 EDA and #1 IP company, so that makes complete sense.

As you may have read, I moderated a panel on chiplets at the last SNUG, which we continue to write about. Hundreds of thousands of people around the world have read our chiplet coverage, making it the #1 trending topic on SemiWiki for 2023, and I expect this to continue into 2024, absolutely.

In fact, Synopsys just released an industry insight report titled “How Quickly Will Multi-Die Systems Change Semiconductor Design?” that is well worth the read. The report also includes insights on multi-die systems from Ansys, Arm, Bosch, Google, Intel, and Samsung. Additionally, Synopsys CEO Aart de Geus wrote the opening chapter:

“As angstrom-sized transistors intersect with multi-die Si-substrates, we see classic Moore pass the baton to SysMoore,” writes de Geus. “Today, Synopsys tracks more than 100 multi-die system designs. Be it through hardware / software digital twins, multi-die connectivity IP, or AI-driven chip design, we collaborate closely with the leading SysMoore companies of tomorrow.”

Here is the report introduction:

For many decades, semiconductor design and implementation has been focused on monolithic, ever-larger and more complex single-chip implementation. This system-on-chip approach is now changing for a variety of reasons. The new frontier utilizes many chips assembled in new ways to deliver the required form-factor and performance.

Multi-die systems are paving the way for new types of semiconductor devices that fuel new products and new user experiences.

This Synopsys Industry Insight brings together a select group of keystone companies who are advancing multi-die systems. You’ll read the thoughts of senior executives from various levels of the technology stack. You’ll also hear from Synopsys’ CEO, its president and a panel of Synopsys technology experts. We’ll discuss our achievements, what lies ahead and how we are partnering with the industry to drive change.

Synopsys also recently completed an excellent webinar series on the topic which is well worth your time. You can watch this On-Demand HERE.

Synopsys Chiplet Webinar Series abstract:

The industry is moving to multi-die systems to benefit from the greater compute performance, increased functionality, and new levels of flexibility. Challenges for multi-die systems are exacerbated and require greater focus on a number of requirements such as early partitioning and thermal planning, die/package co-design, secure and robust die-to-die connectivity, reliability and health, as well as verification and system validation. Attend this webinar series to find out about some of the essential requirements that can help you overcome multi-die system challenges and make your move to multi-die systems successful.

Topics include:
  • Multi-Die System Trends, Challenges and Requirements
  • Benefits of Early Architecture Design for Multi-Die Systems
  • Innovations in Multi-Die System Co-Design and System Analysis
  • Successful Connectivity of Heterogeneous Dies with UCIe IP
  • Identifying and Overcoming Multi-Die System Verification Challenges
  • Optimizing Multi-Die System Health from Die to Package to In-Field

Bottom line: Chiplets are a disruptive technology driving the semiconductor design ecosystem, without a doubt. If you want to explore chiplets in greater detail, Synopsys would be a great place to start.

Also read:

Chiplet Interconnect Challenges and Standards

Chiplet Q&A with Henry Sheng of Synopsys

Chiplet Q&A with John Lee of Ansys

Multi-Die Systems: The Biggest Disruption in Computing for Years


Crypto modernization timeline starting to take shape
by Don Dingee on 06-15-2023 at 10:00 am

CNSA Suite 2.0 crypto modernization timeline

Post-quantum cryptography (PQC) might be a lower priority for many organizations, with the specter of quantum-based cracking seemingly far off. Government agencies are fully sensitized to the cracking risks and the investments needed to mitigate them and are busy laying 10-year plans for migration to quantum-safe encryption. Why such a bold step, given that experts still can’t say precisely when quantum threats will appear? PQShield has released its first installment of an e-book on PQC and crypto modernization subtitled “Where is your Cryptography?” outlining the timeline taking shape and making the case that private sector companies have more exposure than they may realize and should get moving now.

Ten crypto years is not that much time

Folks who survived the Y2K scramble may recall thinking it was far away and probably not as big a problem as all the hype projected. In retrospect, it ended up being a non-event with almost no catastrophic failures – but only because organizations took it seriously, audited their platforms, vendors, and development efforts, and proactively made fixes ahead of the deadline.

PQC has some of the same vibes, with two key differences. There is no firm calendar date for when problems will start if not mitigated. And many of today’s platforms have crypto technology deeply embedded, with no fix for quantum threats to public-key algorithms short of a PQC redesign. It’s fair to say that if an organization doesn’t explicitly understand where its platforms have PQC embedded, all platforms without it must be considered vulnerable. It’s also fair to say that the potential for lasting damage is high if a problem starts before a plan is in place.

That makes the NSA advisory on the Commercial National Security Algorithm Suite 2.0 (CNSA Suite 2.0) noteworthy. Released in September 2022, it identifies a crypto modernization timeline for six classes of systems with a target of having all systems PQC-enabled by 2033.

The earlier milestones for some system classes in the timeline, starting in 2025, combined with the requirement that new full-custom application development incorporate PQC, shorten the ten-year horizon. PQShield puts it this way in their e-book:

“The message for [public and private sector] organizations is both clear and urgent: the time to start preparing for migration to PQC is now, and that preparation involves assessing and prioritizing an inventory of systems that use cryptography, and are candidates for migration.”

Where to start with crypto modernization

Many veterans who guided organizations through Y2K have retired – but they left behind a playbook that teams can use today for crypto modernization. Initial steps involve a risk assessment covering internally developed and vendor-supplied systems. Mitigation strategies will vary, with considerations including how sensitive the data a system handles is, how long that data lives, and whether the system is public-facing.
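
As a concrete (and deliberately simplified) illustration of that inventory-and-prioritize step, the sketch below scores systems on those considerations; the fields, weights, and example systems are invented for illustration, not taken from the PQShield e-book.

```python
# Toy crypto-inventory scoring: rank systems by PQC migration priority.
from dataclasses import dataclass

@dataclass
class CryptoSystem:
    name: str
    vendor_supplied: bool
    data_sensitivity: int     # 1 (low) .. 5 (high)
    data_lifetime_years: int  # how long the data stays valuable
    public_facing: bool

    def migration_priority(self) -> int:
        score = self.data_sensitivity * 2
        score += min(self.data_lifetime_years, 10)  # long-lived data is exposed to
        score += 5 if self.public_facing else 0     # "harvest now, decrypt later"
        score += 2 if self.vendor_supplied else 0   # replacement needs vendor lead time
        return score

inventory = [
    CryptoSystem("customer VPN gateway", True, 4, 3, True),
    CryptoSystem("internal HR database", True, 5, 10, False),
    CryptoSystem("firmware signing service", False, 5, 15, False),
]

for system in sorted(inventory, key=lambda s: s.migration_priority(), reverse=True):
    print(f"{system.migration_priority():2d}  {system.name}")
```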

PQShield makes two vital points here. First, it may not be possible, especially for vendor-supplied systems, to make an immediate replacement. Enterprise-class system replacements need careful piloting so as not to disrupt operations. The good news is that for most commercial application and system vendors, PQC will not be a surprise requirement.

The second point is that hybrid solutions may run PQC and pre-quantum legacy crypto side by side for a period, with a containment strategy for the legacy systems. This overlap may be the case for infrastructure, where the investment will be enterprise-wide and the priority may be protecting public-facing platforms with PQC first.

Moving to an industry-specific discussion for PQC

After discussing infrastructure concerns in detail, PQShield devotes about half of this e-book installment to industry-specific considerations for PQC. They outline ten industries – healthcare, pharmaceuticals, financial services, regulatory technology, manufacturing, defense, retail, telecommunications, logistics, and media – highlighting areas needing specific attention. The breadth of the areas discussed shows how many systems we take for granted today use cryptography and will fall vulnerable soon.

Crypto modernization is a complex topic, made more so by the prevalence of crypto features in many vendor-supplied systems organizations don’t directly control. Awareness of the timelines now in place, along with where to look for vulnerabilities, makes for a meaningful discussion.

To download a copy of the e-book, please visit the PQShield website:

Cryptography Modernization Part 1: Where is your Cryptography?


S2C Accelerates Development Timeline of Bluetooth LE Audio SoC
by Daniel Nenni on 06-15-2023 at 6:00 am


S2C has been shipping FPGA prototyping platforms for SoC verification for almost two decades, and many of its customers are developing SoCs and silicon IP for Bluetooth applications. Prototyping Bluetooth designs before silicon has yielded improved design efficiencies through more comprehensive system validation and by enabling hardware/software co-design prior to silicon availability. When Bluetooth IP and SoC prototypes can be connected directly to real system hardware, running real software at hardware speeds prior to silicon, the resulting efficiencies reduce development time and yield higher quality products.

Bluetooth Low Energy (“BLE”) is a wireless communication technology that is used in a wide variety of applications including smart home devices, fitness trackers, and medical devices such as Neuralink’s Brain-Computer Interface – applications that require low-power operation, and short-range wireless connectivity between devices (up to 10 meters).  The Bluetooth protocol was originally introduced by the Bluetooth Special Interest Group (“Bluetooth SIG”) in 1998, followed by Bluetooth Low Energy (BLE) in 2009, and most recently the Bluetooth Low Energy Audio (“BLE Audio”) specification was released in 2022.  BLE Audio focuses on higher power efficiency than the classic version of Bluetooth, provides for higher audio quality than standard Bluetooth, and introduces new features – and was the largest specification development project in the history of the Bluetooth SIG.

One provider of silicon IP and SoC design services that chose S2C’s FPGA-based prototyping solutions for its SoC verification and system validation platform was Analog Circuit Technology Inc. (“ACTT”). ACTT was founded in 2011 and specializes in the development of low power physical IP and full SoC design services. ACTT’s portfolio includes ultra-low power analog/mixed-signal IP, high reliability eNVM, wireless RF IP, and wired interface IP. ACTT’s IP is widely used in 5G, Internet of Things (“IoT”), smart home, automotive, smart power, wearables, medical electronics, and industrial applications.

For one of its BLE projects, ACTT planned for a design verification and system validation platform that would take on several significant challenges:

  1. A System-level Verification platform for a BLE Audio SoC that would enable comprehensive validation of the entire system’s functionality, and would also support industry regulation compliance testing.
  2. A Hardware/Software Co-Design platform that would provide the software development team with a platform for early software development and hardware/software co-design.
  3. A Stability Testing platform – and as it turned out, several issues were surfaced by the verification platform that required highly-targeted debugging to ensure product stability and performance standards compliance.

Working with S2C on its BLE Audio project, ACTT selected S2C’s VU440 Prodigy Logic System prototyping hardware, prototyping software, and debugging tools as a comprehensive FPGA prototyping platform. As part of its complete prototyping solutions, S2C offers a wide range of versatile daughter cards (“Prototype-Ready IP”), such as I/O expansion boards, peripheral interface boards, RF interface boards, and interconnect cables. S2C’s Prototype-Ready IP supports prototyping interfaces for JTAG, SPI FLASH, UART, I2S, SD/MMC, and RF, with speeds of up to 60MHz. S2C’s off-the-shelf Prototype-Ready IP enables faster time-to-prototyping and reliable plug-and-play interconnection to S2C prototyping platforms.

ACTT’s Deputy General Manager, Mr. Yang, offered an enthusiastic retrospective of ACTT’s use of S2C’s FPGA-based prototyping platform: “During the development of our BLE Audio SoC, we effectively used S2C’s Prodigy Logic System for hardware verification and concurrent hardware/software development.  This innovative approach enabled us to complete the software SDK development well ahead of the chip product’s tape-out phase, resulting in a remarkable timesaving of approximately 2 to 3 months in our overall product development timeline.”

Through committed collaboration with customer-partners such as ACTT, S2C has built a reputation for stimulating independent, innovative thinking about SoC verification and enhancing its customers’ competitiveness in their respective markets. By working closely with its customer-partners, S2C fosters a thriving collaborative working environment that encourages the timely exchange of ideas, resources, and SoC development expertise. With a shared vision of success, S2C and its customer-partners strive to achieve successful SoC development outcomes like ACTT’s, delivering compelling value to those customer-partners.

About S2C:

S2C is a leading global supplier of FPGA prototyping solutions for today’s innovative SoC and ASIC designs, now with the second largest share of the global prototyping market. S2C has been successfully delivering rapid SoC prototyping solutions since 2003. With over 600 customers, including 6 of the world’s top 15 semiconductor companies, our world-class engineering team and customer-centric sales team are experts at addressing our customer’s SoC and ASIC verification needs. S2C has offices and sales representatives in the US, Europe, mainland China, Hong Kong, Korea and Japan. For more information, please visit: www.s2cinc.com

Also Read:

S2C Helps Client to Achieve High-Performance Secure GPU Chip Verification

Ask Not How FPGA Prototyping Differs From Emulation – Ask How FPGA Prototyping and Emulation Can Benefit You

A faster prototyping device-under-test connection


Semico Research Quantifies the Business Impact of Deep Data Analytics, Concludes It Accelerates SoC TTM by Six Months
by Kalar Rajendiran on 06-14-2023 at 10:00 am

Design Costs Comparison

The semiconductor industry has been responding to increasing device complexity and performance requirements in multiple ways. To create smaller and more densely packed components, the industry is continually advancing manufacturing technology. This includes the use of new materials and processes, such as extreme ultraviolet (EUV) lithography and 3D stacking. To meet performance requirements, the industry is developing new chip architectures that enable more efficient data processing and power consumption. This includes open domain-specific architectures (ODSA) incorporating specialized processors and artificial intelligence (AI) accelerators. To reduce costs and improve performance, the industry is integrating more components onto a single chip, resulting in System on Chip (SoC) designs, or opting for multi-die systems using chiplet-based implementations. There are also increasing levels of collaboration within the ecosystem, including equipment suppliers, foundries, and package and assembly houses.

At the same time, time-to-market (TTM) is taking on more and more importance for product companies. In today’s fast-evolving markets, the market window for a product may be just two years. A company cannot afford to be late to any market, let alone these kinds of fast-moving markets. Thus, each company utilizes its own tested and proven ways of deriving TTM advantages to get to market first. Of late, deep data analytics is being leveraged by many companies to accelerate their SoC product development efforts. By leveraging deep data analytics, design issues can be caught early in the development process, reducing the need for expensive and time-consuming re-spins. It can also identify potential performance bottlenecks and optimization opportunities. In essence, deep data analytics can not only reduce TTM but also help improve product performance, increase power efficiency, and enhance the reliability of a product. The product company gets to enjoy a bigger market share, significantly improved return on investment (ROI), and longer-term customer satisfaction.

proteanTecs is a leading provider of deep data analytics for advanced electronics monitoring. Its solution utilizes on-chip monitors and machine learning techniques to deliver actionable insights from development through production and in-field deployment. The company recently hosted a webinar where Rich Wawrzyniak, Principal Analyst for ASIC and SoC at Semico Research, presented a head-to-head comparison of two companies designing a similar multicore SoC on a 5nm technology node. One of the two companies in this comparison leveraged proteanTecs technology in its product development and gained a six-month TTM advantage over the other.

The webinar is based on a Semico Research white paper, which we covered in the article, “How Deep Data Analytics Accelerates SoC Product Development.”

Here are some excerpts from the webinar.

The Cost Edge

Below is a design costs comparison table for two competing solutions for the same application based on current industry design and production costs. Company A’s solution leveraged proteanTecs analytics-based design methodology and Company B’s solution used standard methodology. The solution is a data center accelerator SoC product, details of which are shared by Rich in the webinar. Company A’s cost savings amounted to about 9% over Company B.

The Time-to-Market (TTM) Benefit

Using the proteanTecs approach to deep data analytics, Company A met its market window with on-time entry, allowing it to capture the majority of the target market. The company gained a 6-month TTM advantage over Company B. It also recovered its design investment while its market was still growing, allowing for increased revenues and profitability.

In-Field Advantage

As highlighted in the figure below, the proteanTecs analytics solution not only helps during the design, bring-up, and manufacturing phases but also after a product has been deployed in the field. This helped Company A monitor for and correct potential problems in the field under real-world operating conditions. These analytics insights can be used for preventive maintenance and for fine-tuning power consumption and product performance in the field. Marc Hutner, Senior Director of Product Marketing at proteanTecs, presented this information during the webinar.

Cloud-Based Platform Demo

To conclude the webinar, Alex Burlak, Vice President, Test & Analytics at proteanTecs, showed a demo of the proteanTecs cloud-based analytics platform. He highlighted the platform’s capabilities and revealed the different types of insights users receive from proteanTecs’ on-chip monitors, also called Agents.

Summary

Anyone involved with semiconductor product development will find the information presented in the webinar very useful. You can watch the webinar on-demand here.

Also Read:

Maintaining Vehicles of the Future Using Deep Data Analytics

Webinar: The Data Revolution of Semiconductor Production

The Era of Chiplets and Heterogeneous Integration: Challenges and Emerging Solutions to Support 2.5D and 3D Advanced Packaging