Podcast EP169: How Are the Standards for the Terabit Era Defined?
by Daniel Nenni on 06-30-2023 at 10:00 am

Dan is joined by Priyank Shukla of Synopsys and Kent Lusted of Intel.

Priyank Shukla is a Sr. Staff Product Manager for the Synopsys High-Speed SerDes IP portfolio. He has broad experience in analog, mixed-signal design with strong focus on high performance compute, mobile and automotive SoCs.

Kent Lusted is a Principal Engineer focused on Ethernet PHY Standards within Intel’s Network and Edge Group. Since 2012, he has been an active contributor and member of the IEEE 802.3 standards development leadership team. He continues to work closely with Intel Ethernet PHY debug teams to improve the interoperability of the many generations of SERDES products (10 Gbps, 25 Gbps, 50 Gbps and beyond). He is currently the electrical track leader of the IEEE P802.3df 400 Gb/s and 800 Gb/s Ethernet Task Force as well as the electrical track leader of the IEEE P802.3dj 200 Gb/s, 400 Gb/s, 800 Gb/s, and 1.6 Tb/s Ethernet Task Force.

Dan explores the process of developing high-performance Internet standards and supporting those standards with compliant IP. The relationships between the IEEE and other related communication standards are discussed. The complex, interdependent process of developing and validating new products against emerging standards is explored.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


TSMC Redefines Foundry to Enable Next-Generation Products
by Mike Gianfagna on 06-30-2023 at 6:00 am

For many years, monolithic chips defined semiconductor innovation. New microprocessors defined new markets, as did new graphics processors and cell-phone chips. Getting to the next node was the goal, and when the foundry shipped a working part, victory was declared. As we know, this is changing. Semiconductor innovation is now driven by a collection of chips tightly integrated with new packaging methods, all running highly complex software. The implications of these changes are substantial. Deep technical skills, investment in infrastructure and ecosystem collaboration are all required. But how does all of this come together to facilitate the invention of the Next Big Thing? Let’s look at how TSMC redefines foundry to enable next-generation products.

What Is a Foundry?

The traditional scope of a foundry is wafer fabrication, testing, packaging, and delivery of a working monolithic chip in volume. Enabling technologies include a factory to implement a process node, a PDK, validated IP and an EDA design flow. Armed with these capabilities, new products are enabled with new monolithic chips. All this worked quite well for many decades. But now, the complexity of new product architectures, amplified by a software stack that is typically enabling AI capabilities, demands far more than a single, monolithic chip. There are many reasons for this shift from monolithic chip solutions and the result is a significant rise in multi-die solutions.

Much has been written about this shift and the innovation paradigm it enables. In the interest of time, I won’t expand on that here. There are many sources of information that explain the reasons for this shift. Here is a good summary of what’s happening.

The bottom line of all this is that the definition of product innovation has changed substantially. For many decades, the foundry delivered on the technology needed to drive innovation – a new chip in a new process. The requirements today are far more complex and include multiple chips (or chiplets) delivering various parts of the new system’s functionality. These devices are often accelerating AI algorithms. Some are sensing the environment, or performing mixed signal processing, or communicating with the cloud. And others are delivering massive, local storage arrays.

All this capability must be delivered in a dense package to accommodate the required form factor, power dissipation, performance, and latency of new, world-changing products. The question to pose here is what has become of the foundry? Delivering the enabling technology for all this innovation requires a lot more than in the past. Does the foundry now become part of a more complex value chain, or is there a more predictable way?  Some organizations are stepping up. Let’s examine how TSMC redefines foundry to enable next-generation products.

The Enabling Technologies for Next Generation Products

There are new materials and new manufacturing methods required to deliver the dense integration required to enable next-generation products. TSMC has developed a full array of these technologies, delivered in an integrated package called TSMC 3DFabric™.

Chip stacking is accomplished with a front-end process called TSMC-SoIC™ (System on Integrated Chips). Both Chip on Wafer (CoW) and Wafer on Wafer (WoW) capabilities are available. Moving to back-end advanced packaging, there are two technologies available. InFO (Integrated Fan-Out) is a chip-first approach that provides redistribution layer (RDL) connectivity, optionally with local silicon interconnect. CoWoS® (Chip on Wafer on Substrate) is a chip-last approach that provides a silicon interposer or an RDL interposer with optional local silicon interconnect.

All of this capability is delivered in one unified package. TSMC is clearly expanding the meaning of foundry. In collaboration with IP, substrate and memory suppliers, TSMC also provides an integrated turnkey service for end-to-end technical and logistical support for advanced packaging. The ecosystem tie-in is a critical ingredient for success. All suppliers must work together effectively to bring the Next Big Thing to life. TSMC has a history of building strong ecosystems to accomplish this.

Earlier, I mentioned investment in infrastructure. TSMC is out in front again with an intelligent packaging fab. This capability makes extensive use of AI, robotics and big data analytics. Packaging used to be an afterthought in the foundry process. It is now a centerpiece of innovation, further expanding the meaning of foundry.

Toward the Complete Solution

All the capabilities discussed so far bring us quite close to a fully integrated innovation model, one that truly extends what a foundry can deliver. But there is one more piece required to complete the picture. Reliable, well-integrated technology is a critical element of successful innovation, but the last mile for this process is the design flow. You need to be able to define which technologies you will use and how they will be assembled, then build a model of your semiconductor system and verify it will work before building it.

Accomplishing this requires the use of tools from several suppliers, along with IP and materials models from several more. It all needs to work in a unified, predictable way. For the case of advanced multi-chip designs, there are many more items to address. The choice of active and passive dies, how they are connected, both horizontally (2.5D) and vertically (3D) and how they will all interface to each other are just a few of the new items to consider.

I was quite impressed to see TSMC’s announcement at its recent OIP Ecosystem Forum to address this last mile problem. If you have a few minutes, check out Jim Chang’s presentation. It is eye-opening.

The stated mission for this work is:

  • Find a way to modularize design and EDA tools to make the 3DIC design flow simpler and more efficient
  • Ensure standardized EDA tools and design flows are compliant with TSMC’s 3DFabric technology

3Dblox Standard

With this backdrop, TSMC introduced the 3Dblox™ Standard. This standard implements a language that provides a consistent way to specify all requirements for a 2.5/3D design. It is an ambitious project that unifies all aspects of 2.5/3D design specification, as shown in the figure.

Thanks to TSMC’s extensive OIP ecosystem, all the key EDA providers support the 3Dblox language, making it possible to perform product design in a unified way, independent of a specific tool flow.

This capability ties it all together for the product designer. The Next Big Thing is now within reach, since TSMC redefines foundry to enable next-generation products.

Also Read:

TSMC Doubles Down on Semiconductor Packaging!

TSMC Clarified CAPEX and Revenue for 2023!

TSMC 2023 North America Technology Symposium Overview Part 3

 


Is Your RTL and Netlist Ready for DFT?
by Daniel Payne on 06-29-2023 at 10:00 am

I recall an early custom IC designed at Wang Labs in the 1980s without any DFT logic like scan chains. Then I was confronted by Prabhu Goel about the merits of DFT, and so my journey on DFT began in earnest. I learned about ATPG at Silicon Compilers and Viewlogic, then observability at CrossCheck, where I met Jennifer Scher, who is now at Synopsys. We talked last week by video along with Synopsys Product Manager Ramsay Allen, who previously worked at UK IP vendor Moortec, another SemiWiki client acquired by Synopsys. Test expert and R&D Director Chandan Kumar also joined the call. Over the years Synopsys has both acquired and developed quite a broad range of EDA and IP focused on testability, so I’d say yes, they are ready for DFT.

Our discussion centered on the TestMAX Advisor tool and how it helps address testability issues early at the RTL stage, like:

  • DFT violation checks – ensures RTL is scan ready
  • ATPG coverage estimation – estimates whether the RTL design achieves fault coverage goals
  • Test robustness – reliability in presence of glitches, Xs, edge inconsistencies
  • Test Point selection – finds hard-to-test areas
  • Connectivity validation – DFT connections at SoC assembly

The focus of this interview, however, was the latest test robustness and reliability capabilities that Advisor provides in the form of glitch monitoring and X capture.

Glitches

A digital circuit that produces glitches on certain nets can cause temporary errors, something that must be avoided for an IC to operate robustly and reliably. Three classes of glitches can be identified automatically by TestMAX Advisor:

  • Clock Merge
  • Reset Glitch
  • DFT Logic Glitch

Here’s an example logic cone for each type of glitch:

In functional mode the designer needs to ensure that only a single clock passes through the clock gating cells by controlling the enable pins; in test mode, likewise, only one clock signal should propagate. The example above shows how two clock signals combine to create a clock merge glitch, which needs to be found and fixed before tape-out.
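
To make the failure mode concrete, here is a minimal Python sketch (my own illustration, not the TestMAX Advisor algorithm): when two clocks of different periods both reach a shared OR gate because their gate enables overlap, the merged net carries pulses narrower than either source clock produces, exactly the kind of glitch the tool flags.

```python
# Minimal sketch of a clock-merge glitch: two clocks of different periods
# ORed together produce pulses narrower than either source clock.

def square_wave(period, samples):
    """Ideal 50% duty-cycle clock sampled at unit time steps."""
    return [1 if (t % period) < period // 2 else 0 for t in range(samples)]

def min_pulse_width(wave):
    """Shortest run of identical samples, i.e. the narrowest pulse."""
    widths, run = [], 1
    for a, b in zip(wave, wave[1:]):
        if a == b:
            run += 1
        else:
            widths.append(run)
            run = 1
    widths.append(run)
    return min(widths)

N = 120
clk_a = square_wave(4, N)   # fast clock
clk_b = square_wave(6, N)   # slower clock
merged = [a | b for a, b in zip(clk_a, clk_b)]  # both gate enables active -> OR merge

print("min pulse width, clk_a :", min_pulse_width(clk_a))   # 2
print("min pulse width, clk_b :", min_pulse_width(clk_b))   # 3
print("min pulse width, merged:", min_pulse_width(merged))  # 1 -> glitch-width pulse
```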

Every violation detected by Synopsys TestMAX Advisor includes the RTL source code line number to consider changing, so designers know what is causing the issue. Tool users can even define any logic path between two points in their design to search for glitches. Glitches are painful to debug, especially if they aren’t found until late in the logic design cycle or even during silicon debug. Glitches can be triggered on rising or falling edges of internal signals, so it’s paramount to discover them early in the design process when changes are much easier to make. The automated checking understands the unateness of each logic path.

Another example of glitch detection was shown when a signal called test_mode would transition.

Glitches due to Mode Transition

The actual error report for this glitch was:

Clock(s) and 1 Clock Enable(s) logic reconverges at or near test.clk_en‘.(Count of reconvergence start points = ‘1’ reconvergence end = ‘test.clk_en‘)[Affects ‘1’ Flip-flops(s)]

The final type of glitch detection was for buses driven by tri-state buffers, where clock edge inconsistencies and bus contention were caught.

Summary

RTL design and debug is a labor-intensive process, so having proper automation tools like Synopsys TestMAX Advisor is an insurance policy against re-spins caused by testability issues like glitches and Xs in an IC design. Early warning on DFT robustness is a smart investment that pays off in the long run by improving the chances of first-silicon success. Design engineers run Synopsys TestMAX Advisor on every level of their hierarchical design, including the final, full-chip level.

Designers save time by using an automated checking tool, instead of relying upon manual detection methods.

For more information on Synopsys TestMAX products, please, visit the website.

Related Blogs


Unique IO & ESD Solutions @ DAC 2023!
by Daniel Nenni on 06-29-2023 at 6:00 am

The semiconductor industry continues to drive innovation and constantly seeks methods to lower costs and improve performance. The advantages of custom I/O libraries versus free libraries can be seen as cost-savings or, more importantly, new markets, new customers, and new business opportunities.

At DAC 2023, Certus Semiconductor will share the advantages of high-performance I/O libraries and offer the opportunity to collaborate on new ideas, incorporating unique features that will open new markets and new opportunities for your company.

Certus Semiconductor is a Unique IO & ESD Solution Company

Certus has assembled several of the world’s foremost experts in IO and ESD design to offer clients the ability to affordably tailor their IO libraries into the optimal fit for their products.

Certus’ expertise spans all industries. They have tailored IO and ESD libraries for low power, wide voltage ranges, and RF low-cap ESD targeting the IoT, wireless and consumer electronics markets. There are IO libraries customized for flexible interfaces, multi-function use, and high performance that target the FPGA and high-performance computing markets. Certus’ expertise also includes radiation-hardened, high-reliability and high-temperature IO libraries for the aerospace, automotive and industrial markets. Certus leverages this expertise to work directly with you – that means meeting with your architects, marketing team, circuit & layout designers and reliability engineers to ensure that the Certus IO and ESD solutions provide the most efficient and competitive solutions for your products and target markets.

Stephen Fairbanks, CEO of Certus Semiconductor, has stated, “Our core advantage is our ability to truly work with our customers, understand their product goals and customer applications, and then to help them create IO and ESD solutions that give their products a true market differentiation and competitive advantage. All our repeat business has been born out of these types of collaborations.”

Certus has silicon-proven libraries in a variety of foundry processes. These can be licensed off-the-shelf or can be customized for your application, and are available as full libraries or on a cell-by-cell basis.

In addition to these processes, Certus has consulted on designs in many others and can be contracted for development in any of them. Their foundry experience includes all major foundries such as Samsung, Intel, TowerJazz, DongBu HiTek, UMC, pSemi, Silanna, LFoundry, Silterra, TSI, XFab, Vanguard and many others.

The Design Automation Conference (DAC) is the premier event devoted to the design and design automation of electronic chips and systems. DAC focuses on the latest methodologies and technology advancements in electronic design. The 60th DAC will bring together researchers, designers, practitioners, tool developers, students and vendors.

Certus is one of the more than 130 companies supporting this industry-leading event, and they invite you to meet with the Certus I/O and ESD experts on the exhibit floor. You can contact Certus HERE to schedule a meeting at booth #1332. I hope to see you there!

Also Read:

The Opportunity Costs of using foundry I/O vs. high-performance custom I/O Libraries

CEO Interview: Stephen Fairbanks of Certus Semiconductor

Certus Semiconductor releases ESD library in GlobalFoundries 12nm Finfet process

 


Semiconductor CapEx down in 2023
by Bill Jewell on 06-28-2023 at 2:00 pm

Semiconductor capital expenditures (CapEx) increased 35% in 2021 and 15% in 2022, according to IC Insights. Our projection at Semiconductor Intelligence is a 14% decline in CapEx in 2023, based primarily on company statements. The biggest cuts will be made by the memory companies, with a 19% drop. CapEx will drop 50% at SK Hynix and 42% at Micron Technology. Samsung, which only increased CapEx by 5% in 2022, will hold at about the same level in 2023. Foundries will decrease CapEx by 11% in 2023, led by TSMC with a 12% cut. Among the major integrated device manufacturers (IDMs), Intel plans a 19% cut. Texas Instruments, STMicroelectronics and Infineon Technologies will buck the trend by increasing CapEx in 2023.

Companies which are significantly cutting CapEx are generally tied to the PC and smartphone markets, which are in a slump in 2023. IDC’s June forecast had PC shipments dropping 14% in 2023 and smartphones dropping 3.2%. The PC decline largely affects Intel and the memory companies. The weakness in smartphones primarily impacts TSMC (with Apple and Qualcomm as two of its largest customers) as well as the memory companies. The IDMs which are increasing CapEx in 2023 (TI, ST, and Infineon) are more tied to the automotive and industrial markets, which are still healthy. The three largest spenders (Samsung, TSMC and Intel) will account for about 60% of total semiconductor CapEx in 2023.

The high growth years for semiconductor CapEx tend to be the peak growth years for the semiconductor market for each cycle. The chart below shows the annual change in semiconductor CapEx (green bars on the left scale) and annual change in the semiconductor market (blue line on the right scale). Since 1984, each significant peak in semiconductor market growth (20% or greater) has matched a significant peak in CapEx growth. In almost every case, the significant slowing or decline in the semiconductor market in the year following the peak has resulted in a decline in CapEx in one or two years after the peak. The exception is the 1988 peak, where CapEx did not decline the following year but was flat two years after the peak.

This pattern has contributed to the volatility of the semiconductor market. In a boom year, companies strongly increase CapEx to increase production. When the boom collapses, companies cut CapEx. This pattern often leads to overcapacity following boom years. This overcapacity can lead to price declines and further exacerbate the downturn in the market. A more logical approach would be a steady increase in CapEx each year based on long-term capacity needs. However, this approach can be difficult to sell to stockholders. Strong CapEx growth in a boom year will generally be supported by stockholders. But continued CapEx growth in weak years will not.

Since 1980, semiconductor CapEx as a percentage of the semiconductor market has averaged 23%. However, the percentage has varied from 12% to 34% on an annual basis and from 18% to 29% on a five-year-average basis. The 5-year average shows a cyclical trend. The first 5-year average peak was in 1985 at 28%. The semiconductor market dropped 17% in 1985, at that time the largest decline ever. The 5-year average ratio then declined for nine years. The average eventually returned to a peak of 29% in 2000. In 2001 the market experienced its largest decline ever at 32%. The 5-year average then declined for twelve years to a low of 18% in 2012. The average has been increasing since, reaching 27% in 2022. Based on our 2023 forecasts at Semiconductor Intelligence, the average will increase to 29% in 2023.

2023 will be another major downturn year for the semiconductor market. Our Semiconductor Intelligence forecast is a 15% decline. Other forecasts are as low as a 20% decline. Will this be the beginning of another drop in CapEx relative to the market? History shows this will be the likely outcome. Major semiconductor downturns tend to scare companies into slower CapEx.
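
As a small illustration of the metric discussed above, the sketch below computes CapEx as a share of the semiconductor market and its 5-year rolling average; the dollar figures are hypothetical placeholders, not the data behind the article’s chart.

```python
# Illustrative sketch: annual CapEx as a share of the semiconductor market,
# plus its 5-year rolling average. All dollar values below are hypothetical.

capex  = {2018: 105, 2019: 100, 2020: 110, 2021: 150, 2022: 172, 2023: 148}  # $B, hypothetical
market = {2018: 470, 2019: 410, 2020: 440, 2021: 555, 2022: 575, 2023: 490}  # $B, hypothetical

ratio = {yr: capex[yr] / market[yr] for yr in capex}

def rolling_avg(series, year, window=5):
    """Average of the ratio over the trailing `window` years that exist in the data."""
    yrs = [y for y in range(year - window + 1, year + 1) if y in series]
    return sum(series[y] for y in yrs) / len(yrs)

for yr in sorted(ratio):
    print(f"{yr}: CapEx/market = {ratio[yr]:.0%}, 5-yr avg = {rolling_avg(ratio, yr):.0%}")
```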

The factors behind CapEx decisions are complex. Since a wafer fab currently takes two to three years to build, companies must project their capacity needs several years into the future. Foundries account for about 30% of total CapEx and must plan their fabs based on estimates of their customers’ capacity needs several years out. The cost of a major new fab is $10 billion and up, making it a risky proposition. However, based on past trends, the industry will likely see lower CapEx relative to the semiconductor market for the next several years.

Also Read:

Steep Decline in 1Q 2023

Electronics Production in Decline

Automotive Lone Bright Spot


Better Randomizing Constrained Random. Innovation in Verification
by Bernard Murphy on 06-28-2023 at 10:00 am

Constrained random methods in simulation are universally popular, but can the method still be improved? Paul Cunningham (Senior VP/GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO and now Silvaco CTO) and I continue our series on research ideas. As always, feedback welcome.

The Innovation

This month’s pick is Balancing Scalability and Uniformity in SAT Witness Generator. A refined version of the paper was published by Springer in 2015 in “Proceedings of the 21st International Conference on Tools and Algorithms for the Construction and Analysis of Systems”. The authors are/were from IIT Bombay and Rice University, Houston TX.

Constrained random effectiveness depends on the quality of the constraints and also on the uniformity of the distribution generated against those constraints, since uneven distributions bias away from triggering bugs in lightly covered states. Unfortunately, generation methods tend to offer either high uniformity with impractical run-times for large designs, or good scalability with weak uniformity. The authors’ paper describes a method to provide approximate guarantees of uniformity with better performance than prior papers on the topic.

The authors also reported a refinement to this method in a later publication apparently only available from Springer-Verlag.

Paul’s view

A key engine at the heart of all commercial logic simulators is the constraint solver, responsible for picking pseudo-random input values each clock cycle in a constrained-random test. These solvers must be ultra-fast but also pick a good spread of random values across the solution space of the constraints. For the scale of constraint complexity in modern commercial SOC tests this is a really hard problem and EDA vendors have a lot of secret sauce in their solvers to tackle it.

Under the hood of these solvers, constraints are munged into Boolean expressions, and constraint solving turns into a Boolean SAT problem. In formal verification we are trying to find just one solution to a massive Boolean expression. In constrained-random simulation we are trying to find a massive number of “uniformly distributed” solutions to smaller Boolean expressions.

The way solvers achieve uniformity is conceptually simple: partition the set of all solutions into n groups of roughly equal size, first pick a random group and then find a solution that belongs to that group. This forces the solver to spread its solutions over all the groups, and hence over the solution space. Implementing this concept is super hard and involves ANDing the Boolean expression to be solved with a nasty XOR-based Boolean expression encoding a special kind of hash function. This hash function algebraically partitions the solution space into the desired number of groups. The smaller the groups (i.e. the larger n is) the more uniform the solver, but the slower the solver is, so picking the right number of groups is not easy and must be done iteratively.
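
As a toy illustration of that partitioning idea (a sketch of the concept, not the paper’s algorithm), the snippet below adds random XOR parity constraints to a small constraint, which implicitly selects a random cell of the partition, and then samples a solution from that cell; brute-force enumeration stands in for a real SAT solver.

```python
# Toy sketch of XOR-hash partitioning for spreading samples across a solution space.
import itertools, random

N_VARS = 6

def constraint(bits):
    """Toy 'design constraint': at least two bits set, and bit0 implies bit5."""
    return sum(bits) >= 2 and (not bits[0] or bits[5])

def xor_hash(bits, mask, parity):
    """One random XOR constraint: parity of a random subset of variables."""
    return (sum(b for b, m in zip(bits, mask) if m) % 2) == parity

def sample(num_xors=2):
    # num_xors XOR constraints split the space into roughly 2**num_xors cells.
    masks = [[random.randint(0, 1) for _ in range(N_VARS)] for _ in range(num_xors)]
    parities = [random.randint(0, 1) for _ in range(num_xors)]
    cell = [bits for bits in itertools.product([0, 1], repeat=N_VARS)
            if constraint(bits)
            and all(xor_hash(bits, m, p) for m, p in zip(masks, parities))]
    return random.choice(cell) if cell else None  # real solvers retry / resize the cells

random.seed(1)
counts = {}
for _ in range(5000):
    s = sample()
    if s is not None:
        counts[s] = counts.get(s, 0) + 1
print(len(counts), "distinct solutions sampled; min/max frequency:",
      min(counts.values()), max(counts.values()))
```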

There are two key innovations in this paper: one relates to dramatically reducing the size of the XOR hash function expression, the other to dramatically reducing the number of iterations required to get the right group size. Both innovations come with rigorous proofs that the solver still meets a formal definition of being “uniform”. It’s impressive work and way too complex to explain fully here, but the results are literally 1000x faster than prior work. If you have the energy to muscle through this paper it is well worth it!

Raúl’s view

A “SAT witness” is a satisfying assignment of truth values to variables such that a Boolean formula F evaluates to true. Constraints in Constrained Random Verification (CRV) of digital circuits are encodable as Boolean formulae, so generation of SAT witnesses is essential for CRV. Since the distribution of errors in a design is not known a priori, all solutions to the constraints are equally likely to discover a bug. Hence it is important to sample the solution space uniformly at random, meaning that if there are R_F SAT witnesses, the probability P_R of generating any particular one is 1/R_F; the paper uses an “almost uniform” criterion, defined as 1/((1+ε)·R_F) ≤ P_R ≤ (1+ε)/R_F. Uniformity poses significant technical challenges when scaling to large problem sizes. At the time of publication of the paper (2014) the proposed algorithm, UniGen, was the first to provide guarantees of almost-uniformity while scaling to hundreds of thousands of variables. The algorithm is based on 3-wise independent hash functions; it is beyond the scope of this blog to delve into it, and the reader is referred to section 4 in the paper.
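
A quick way to see what that bound means in practice: the sketch below checks whether a set of sample frequencies satisfies the almost-uniformity condition for a given ε (the counts are made up, and it assumes every witness appears in the sample).

```python
# Check of the almost-uniform condition: every witness frequency must fall
# within [1/((1+eps)*R_F), (1+eps)/R_F]. Sample counts are hypothetical.

def almost_uniform(counts, eps):
    total = sum(counts.values())
    r_f = len(counts)   # assumes every witness was observed at least once
    lo, hi = 1 / ((1 + eps) * r_f), (1 + eps) / r_f
    return all(lo <= c / total <= hi for c in counts.values())

counts = {"w1": 240, "w2": 260, "w3": 255, "w4": 245}  # hypothetical sample
print(almost_uniform(counts, eps=0.2))  # True: all frequencies are within the bounds
```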

The experimental results show that for problems with up to 486,193 variables, witnesses can be generated in less than 782 secs with a 0.98 probability of success. Comparisons with UniWit, the state of the art at the time, show UniGen’s runtimes to be 2-3 orders of magnitude lower. UniGen manages all 12 examples, while UniWit can only complete 7 out of 12. Uniformity is shown by comparing UniGen to a uniform sampler that simply picks solutions from the R_F witnesses at random (Figure 1).

The industry has been moving towards increased automation and advanced verification methodologies, making commercial CRV tools more prevalent, particularly for complex digital designs. Constraint solving techniques have been a fundamental part of CRV. Recent advances have focused on improving constraint solving algorithms, optimizing random test generation, and addressing scalability challenges. And although much progress has been achieved since 2014, the reviewed paper (cited 81 times) is an excellent example that illustrates these advances.

Also Read:

Tensilica Processor Cores Enable Sensor Fusion For Robust Perception

Deep Learning for Fault Localization. Innovation in Verification

US giant swoops for British chipmaker months after Chinese sale blocked on national security grounds


Clock Verification for Mobile SoCs
by Daniel Payne on 06-28-2023 at 6:00 am

The relentless advancement of mobile phone technology continues to push boundaries, demanding SoCs that deliver ever-increasing performance while preserving extensive battery life. To meet these demands, the industry is progressively embracing lower technology nodes with current designs being taped-out at 5nm or below. Designing and verifying clocks at these lower geometries brings mounting complexities and increasing verification challenges. In this rapidly evolving landscape, current clock verification methodologies must be reassessed to ensure optimal clock performance and reliability.

The existing clock methodologies primarily rely on Static Timing Analysis (STA) as a standalone solution, or on a more advanced approach that combines STA with a SPICE simulator to analyze critical paths. This flow necessitates the involvement of a CAD department to establish it and a strict methodology to produce accurate and timely results, but even then, for an SoC-level clock signal at a lower process node, the simulator may lack the required capacity and/or accuracy. Moreover, the identification of critical paths relies heavily on the judgment and experience of engineers. This approach leads to unnecessary guard-banding, leaving valuable timing margin untapped and limiting overall performance.

At the 7nm, 5nm and 3nm process nodes both the transistor and interconnect dimensions are reduced, resulting in sensitivities to a variety of design and process-related issues, like rail-to-rail failures and duty cycle distortion in the clock signal.

Rail-to-rail Failures

If a clock net has a weak driver, long interconnect and large capacitive loading, it can lead to increased insertion delays and, in the worst case, a rail-to-rail failure, in which the voltage levels on the clock simply don’t reach the VSS and VDD rails. Running STA alone will not detect this failure mechanism because STA measures timing at specific voltage thresholds.

An increase in clock frequency reduces the clock period, resulting in a shorter time window for the clock to reach the supply rail voltage levels. Voltage scaling also makes the clock signal more vulnerable to rail-to-rail failure, as the smaller gap between the supply and Vth increases non-linear operation, reducing drive strength. Process variations in Vth, transistor W and L, or parasitic capacitances also contribute to rail-to-rail failure. Local power supply levels bounce around from IR drop effects, which then degrade signal levels and timing on the clock.

Clock rail-to-rail failure detection
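
A back-of-the-envelope sketch of this failure mode, using a lumped RC model of the clock net (illustrative values only, not Infinisim’s analysis):

```python
# With a lumped RC load, the clock node charges toward VDD as
# V(t) = VDD * (1 - exp(-t / RC)). If the half-period is short relative to
# the RC time constant, the node never gets close to the rail.
import math

def peak_fraction_of_vdd(r_ohm, c_farad, half_period_s):
    tau = r_ohm * c_farad
    return 1.0 - math.exp(-half_period_s / tau)

half_period = 0.25e-9                              # 2 GHz clock -> 250 ps high phase
for r, c in [(500, 50e-15), (2_000, 200e-15)]:     # strong vs. weak drive / heavy load
    frac = peak_fraction_of_vdd(r, c, half_period)
    print(f"R={r} ohm, C={c*1e15:.0f} fF -> peak reaches {frac:.0%} of VDD")
```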

Clock duty cycle distortion

When a clock signal propagates through a series of gates with asymmetric pull-up and pull-down drive strengths, it causes duty cycle distortion (DCD). An ideal clock duty cycle is 50% low and 50% high. Increased clock frequencies can amplify timing imbalances and cause signal integrity issues like DCD. Clock interconnect is impacted by capacitive and resistive effects, which change the slew rate of rise and fall times, delaying the clock and causing asymmetry, making DCD effects more pronounced. Process variations directly alter interconnects, adding imbalances in circuit timing and further increasing DCD.

Clock duty cycle distortion

For process nodes with asymmetric PVT corners, DCD becomes more pronounced. Results from an STA tool are focused on insertion delay, so they are less accurate for reporting DCD and Minimum Pulse Width (MPW) violations.
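
For intuition, here is a simple sketch (illustrative numbers only) of how per-stage rise/fall delay asymmetry accumulates along a non-inverting clock buffer chain into measurable DCD:

```python
# Asymmetric rise/fall delays accumulate along a non-inverting buffer chain,
# stretching or shrinking the high phase of the clock.

def duty_cycle(period_ps, rise_delays_ps, fall_delays_ps):
    """Duty cycle at the end of a chain whose stages delay rising and falling
    edges by different amounts."""
    skew = sum(fall_delays_ps) - sum(rise_delays_ps)  # falling edge arrives later
    high_time = period_ps / 2 + skew                  # ideal 50% pulse stretched/shrunk
    return high_time / period_ps

period = 500.0                 # 2 GHz clock, in ps
rise = [20.0] * 12             # 12 buffer stages
fall = [22.5] * 12             # each stage is 2.5 ps slower on the falling edge
print(f"duty cycle after chain: {duty_cycle(period, rise, fall):.1%}")  # ~56%, not 50%
```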

Slew Rate and Transition Distortion

At lower process nodes, the parasitic interconnect has more pronounced resistive-shielding and capacitive coupling, degrading slew rate and clock edge transitions. STA tools use a simplified model for interconnect parasitics which can then underestimate the clock signal degradations.

Power-supply induced jitter

Noise in the Power Delivery Network (PDN) impacts clock timing, producing jitter that degrades clock performance. When the power supply experiences fluctuations or noise, it can introduce voltage variations that directly affect the clock signal’s stability and integrity. Power supply induced jitter can cause clock edges to arrive earlier or later than expected, resulting in setup and hold violations and potential functional failures in the clock. The increased jitter also reduces the timing margin, making the design more susceptible to timing violations and performance degradation. STA tools primarily focus on analyzing the timing behavior of a design based on a static representation of the circuit and cannot model jitter. Designers typically use an approximation for jitter effects, so it is really just another guard-band approach.

Power Supply Noise

Topologies using clock grids and spines

Grid and spine architectures, especially at 7nm and below, can offer significant advantages, including enhanced signal integrity and power and area efficiency. Grid and spine structures provide a regular and structured framework for routing clock signals, reducing the impact of the increased process variation at lower technology nodes, improving signal integrity, and mitigating issues like clock skew, jitter and noise. In addition, a grid and spine architecture allows for optimized routing of clock signals.

Circuit simulation is the only accurate method to verify grids and spines, but most commercial SPICE simulators do not have the capacity to handle such large meshes. Designing a clock with grids and spines at a lower technology node without an adequate, fast and accurate verification methodology is a risky proposition.

Summary

Mobile devices require mobile processors, and they often drive the bleeding edge of IC process technology. Meeting PPA goals in a timely manner is paramount to the success of mobile SoCs. At 7nm and below technology nodes, a fresh approach to clock verification becomes imperative. Failing to adopt such an approach entails increased guard-banding, leading to increased area and power requirements. Most importantly, the conservative nature of guard-banding leaves valuable performance on the table.

Enter Infinisim’s ClockEdge, an off-the-shelf solution specifically engineered for thorough clock verification and analysis. ClockEdge boasts an exceptional ability to analyze every path within the entire clock domain with SPICE-level accuracy. This has the potential to unlock unparalleled analysis opportunities that are otherwise unattainable using conventional methodologies. Moving to a 7nm and below technology node is a costly endeavor, yet it offers significant benefits in Power, Performance and Area (PPA) efficiency. However, guard-banding practices can diminish these advantages. Infinisim’s solution identifies all potential failures and optimizes PPA by minimizing the need for excessive guard-banding, thus capitalizing on the advantages afforded by a move to a lower technology node.

With a well-established reputation, Infinisim has a proven track record in the industry. Their solutions have been adopted as a sign-off tool by their mobile SoC customers, solidifying their position as a trusted partner. Infinisim’s expertise in clock analysis spans a wide range of designs, from 28nm to the most advanced 3nm process node. They provide extensive support for all major foundries, including TSMC, Samsung and GlobalFoundries.

Related Blogs

 


Samsung Foundry on Track for 2nm Production in 2025
by Daniel Nenni on 06-27-2023 at 3:00 pm

On the heels of the TSMC Symposium and the Intel Foundry update, Samsung held their Foundry Forum today live in Silicon Valley. As usual it was a well-attended event with hundreds of people and dozens of ecosystem partners. The theme was the AI Era, which is appropriate. As I have mentioned before, AI will touch most every chip, and there will never be enough performance or integrated memory, so leading-edge process and packaging technology is foundry critical, absolutely.

“Samsung Foundry has always met customer needs by being ahead of the technology innovation curve and today we are confident that our gate-all-around (GAA)-based advanced node technology will be instrumental in supporting the needs of our customers using AI applications,” said Dr. Siyoung Choi, president and head of Foundry Business at Samsung Electronics. “Ensuring the success of our customers is the most central value to our foundry services.”

The Samsung Foundry market breakdown for 2022 was not surprising:

  • Mobile 39%
  • HPC 32%
  • IoT 10%
  • Consumer 9%

Moving forward, however, HPC is expected to dominate the foundry business (>40%) as AI takes more than its fair share of leading-edge wafers.

The most significant announcement was that Samsung 2nm is on track to start production in 2025, which was the date given at the previous Samsung Foundry Forum. Staying on track with the published roadmap is a big part of foundry trust. Remember, if a fabless company is going to bet their company jewels on a foundry partnership they have to trust that the wafers will be delivered on time matching the PDK specifications.

Highlights include:
  • Expanded applications of its 2-nanometer (nm) process and specialty process
  • Expanded production capacity at its Pyeongtaek fab Line 3
  • Launched a new ‘Multi-Die Integration (MDI) Alliance’ for next-generation packaging technology

At the event, Samsung announced detailed plans for the mass production of its 2nm (horizontal nanosheet) process, as well as performance levels. Samsung, like Intel, is its own foundry customer, so first production is with internal products versus external foundry customers. This of course is the advantage of an IDM foundry: developing your own silicon in concert with process technologies. Samsung has the added advantage of developing leading-edge memories.

Samsung will begin mass production of the 2nm process for mobile applications in 2025, and then expand to HPC in 2026 with backside power delivery, and automotive in 2027. Samsung’s 2nm (SF2) process has shown a 12% increase in performance, a 25% increase in power efficiency, and a 5% decrease in area, when compared to its 3nm process (SF3). Mass production of the follow-on 1.4nm is slated for 2027.

TSMC overwhelmingly won the 3nm node with the N3X process family; however, the 2nm node is undecided. TSMC N2, Intel 18A, and Samsung 2nm are very competitive on paper and should be ready for external customers in the same time frame. It will all depend on how the PDKs proceed. According to the ecosystem, customers are looking at all three processes, so it is a three-horse race, which is great for the foundry business. No one enjoys a one-horse race except for that one horse.

The other big announcement was packaging, another advantage of an IDM foundry. Intel and Samsung have been packaging chips since before foundries existed. Now they are opening up their packaging expertise to external foundry customers. We will be writing more about packaging later, but it is a very big opportunity for foundries to empower customers.

For packaging, Samsung announced the MDI Alliance in collaboration with partner companies as well as major players in 2.5D and 3D, memory, substrate packaging, and testing. Packaging is now a very important part of the foundry business. With the advent of chiplets and the ability to mix and match die from different processes and foundries, packaging is a new foundry arms race, and it is good to see three strong horses competing for our business.

This was an excellent networking event, the food is always great, and the Samsung people are very polite and professional. Samsung Foundry will be at DAC 2023 in San Francisco the week of July 9th. I hope to see you there.

Also Read:

Synopsys Expands Agreement with Samsung Foundry to Increase IP Footprint

Keynote Sneak Peek: Ansys CEO Ajei Gopal at Samsung SAFE Forum 2023

A Memorable Samsung Event


Keysight at #60DAC
by Daniel Payne on 06-27-2023 at 10:00 am

Keysight EDA will have a large presence at this year’s DAC in San Francisco July 9-13. For a better understanding of what’s happening with Keysight EDA at DAC I talked to my contacts to learn that they have three main messages this year:

  • Automate
  • Collaborate
  • Innovate

Demos: Booth 1531

You may recall that Keysight acquired Cliosoft for their design data and IP management back in February 2023, so that fits into the collaborate category. On the automate and innovate points you can see demos of RF/uW and mmWave IC design using the PathWave Advanced Design System (ADS), and using HPC to accelerate EM and circuit simulations. Find out how Python scripting helps automate your IC design workflow.

Panels

On Monday, July 10th in the DAC Pavilion there’s a panel discussion on FaaS, from 2PM – 2:45PM, located on level 2 in the Exhibit Hall. Circuit simulations, SI and electromagnetic modeling can all be accelerated using HPC technology. Come and learn about “microservices” and how to avoid “cold start” issues.

Panelists are from Keysight, Rescale, Meta Reality Labs and Eviden. The moderator is Ben Jordan from JordanDSP.

Design Cloud for cloud-based high-performance computing

This panel discussion takes place on Tuesday, July 11th, from 1PM – 1:45PM in the Transformative Technologies Theater, moderated by Natesan Venkateeswaran from IBM, with panelists from Keysight, Ansys, Google, and Microsoft. Most EDA tools were initially designed for desktop use, not cloud use. Hear about the journey taking EDA tools to cloud-optimized.

On the final day of the exhibits, Wednesday, July 12th from 10:30AM – 11:15AM at the Transformative Technologies Theater, you can learn from panelists at Keysight, BAE Systems, Raytheon Technologies and Microsoft. The DoD created the Rapid Assured Microelectronics Prototypes using Advanced Commercial Capabilities (RAMP) program. Cliosoft was the original EDA vendor in this program, now part of Keysight.

Tech Talk

Majid Ahad Dolatsara from Keysight is giving a tech talk on Tuesday, July 11th from 10:30AM – 11:15AM in the Transformative Technologies Theater. Learn how ML has been used for optimizing circuit routing, and NLP methods have extracted design information from text specifications. Hear about the techniques of supervised, unsupervised and reinforcement learning for EDA tools and flows.

Theatre Presentation

I’ve walked the exhibit area at many DACs, and one of the most welcome forms of relief is to simply sit down in a chair and take in a live presentation. Every hour there will be a theatre presentation in Keysight’s Booth 1531 to give you an overview of what they offer for RF/uW and mmWave IC designers in terms of automation as well as IP and design data management for collaboration and reuse. The presentation is both informative and entertaining, plus you get to rest those tired legs a bit.

Tuesday DAC Party

One of the best aspects of attending DAC is the social one, where you get to see and talk with your colleagues all in one place, and this year the party is on Tuesday, July 11th from 6PM – 9PM, on the Level 2 lobby area. Listen for the live music and watch for people holding drink glasses.

I Love DAC

Keysight is one of the sponsors of the annual I Love DAC, which means that you can attend several activities for free, like: Keynotes, SKYTalks, TechTalks, Theater, Exhibits, Networking, Training.

Hack@DAC

There’s a hardware security challenge contest, to find and exploit security-critical vulnerabilities in hardware and firmware, where Keysight is a sponsor. Form a team and be the winning hacker.

Customer Meetings

If you are an existing customer or new prospect interested in scheduling time with Keysight experts in their DAC booth,  submit your request online to reserve a meeting time.

Summary

The profile of Keysight has really grown over the years at DAC, and in 2023 I’d say that this is the most involved that I’ve ever seen their company. Discover how they are positioned by attending the three panel discussions or their Tech Talk. View two different demos at Booth 1531 and look for me at the Tuesday night DAC party.

Related Blogs


Transforming the electronics ecosystem with the component digital thread
by Kalar Rajendiran on 06-27-2023 at 6:00 am

The transformation of the vertically integrated electronics value chain into a disaggregated supply chain has brought tremendous value to the electronics industry and benefits to consumers. This transformation has driven the various players to become highly specialized in order to support the trends and demands of the marketplace, creating a highly complex and specialized electronics supply chain network.

While this transformation has helped deliver very advanced and highly complex electronics systems, information exchange still relies on outdated methods from decades ago. Component manufacturers and system design companies still use PDF and Excel files to exchange information during the design and manufacturing phases of a product. These means of communication won’t go away, but it is high time they were augmented with high-bandwidth, dynamic methods that more tightly bring together the nodes of the value chain, while speeding the delivery and ensuring the quality of data. To make this a reality, the electronics industry is embracing the adoption of component data exchange through a digital thread. A digital thread is the communications framework that allows a connected data flow and an integrated view of product data throughout the product life cycle – spanning ideation, realization, and utilization.

Siemens EDA recently published a whitepaper that examines how the emergent component digital thread will revolutionize the electronics industry by connecting all nodes of the value chain.

Industry Standard for Data Exchange

The component digital thread relies on an industry standard known as the JEDEC JEP30 Part Model. This standard, ratified in 2018, establishes requirements for exchanging part data in the electronics industry. The JEP30 Part Model defines the digital twin for electronic components, connecting the virtual and physical worlds across the value chain. It combines comprehensive component information into an industry-standard-based digital container, allowing seamless data exchange and integration throughout the product creation lifecycle. The adoption of this standard is a game-changer, enabling frictionless exchange and direct consumption of data by tools across the electronics industry.
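
As a purely hypothetical illustration (this is not the JEP30 schema), the sketch below shows the kind of information such a component digital twin bundles into a single machine-readable container instead of scattered PDFs and spreadsheets:

```python
# Hypothetical component "digital twin" container (illustrative only, not JEP30).
from dataclasses import dataclass, field

@dataclass
class PartModel:
    mpn: str                                          # manufacturer part number
    manufacturer: str
    electrical: dict = field(default_factory=dict)    # ratings, pin functions
    footprint: str = ""                               # reference to ECAD land pattern
    model_3d: str = ""                                # reference to mechanical model
    lifecycle: str = "active"                         # supply-chain / obsolescence status
    signatures: list = field(default_factory=list)    # provenance records (see "Inserting Trust")

part = PartModel(
    mpn="ABC1234",
    manufacturer="Example Components Inc.",
    electrical={"vdd_max_v": 3.6, "temp_range_c": (-40, 125)},
    footprint="QFN-32_5x5mm",
    model_3d="abc1234.step",
)
print(part.mpn, part.lifecycle)
```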

Creating and Driving Value and Trust into the Supply Chain

The component digital thread empowers component manufacturers to provide accurate and intelligent data, eliminating errors and accelerating the design-in process. These threads contain embedded trust and supply chain intelligence, enabling better decision-making throughout the product lifecycle. The industry is embracing the JEDEC JEP30 Part Model standard to create high-bandwidth connections and leverage AI-enabled insights. As more tools become available to support the transition, component manufacturers are shifting towards part model representations to enhance collaboration and streamline processes.

Inserting Trust

As the shift towards digital twins of component data accelerates, establishing trust in the component information becomes crucial. Part models serve as complete digital representations of parts, and in the future, different sections of these part models may assimilate digital signatures to create an immutable ledger that builds trust throughout the design chain. Parts will have a “root of trust” tied to their digital certificates, verifying their authenticity and integrity. Similarly, at the product level, manufacturers will establish a “root of trust” through digital certificates that attest to the integrity of the manufacturing process. The digital thread is essential for enabling this trust-building process, as it ensures transparency and traceability throughout the supply chain.
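
The sketch below illustrates the underlying idea with a keyed hash over the part-model content; a real implementation would use the certificate-based digital signatures described above, so treat this as a conceptual stand-in only:

```python
# Conceptual stand-in for a part-model "root of trust": a keyed hash over the
# content lets any consumer detect tampering. Real deployments would use
# certificate-based signatures rather than a shared secret.
import hashlib, hmac, json

SECRET = b"manufacturer-signing-key"      # stand-in for a real private key / certificate

def sign_part_model(model: dict) -> str:
    payload = json.dumps(model, sort_keys=True).encode()
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify_part_model(model: dict, signature: str) -> bool:
    return hmac.compare_digest(sign_part_model(model), signature)

model = {"mpn": "ABC1234", "datasheet_rev": "C", "vdd_max_v": 3.6}
sig = sign_part_model(model)

print(verify_part_model(model, sig))      # True: content intact
model["vdd_max_v"] = 5.0                  # tampered data
print(verify_part_model(model, sig))      # False: trust broken
```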

Digital Data Sheets

The part model approach offers additional benefits in the form of interactive data sheets and application notes in digital form. With interactive data sheets, users can access 3D views of the package, interact with live test setups tailored to their specific application conditions, and evaluate real-time supply chain information. They can also access the latest ECAD and mechanical models derived directly from the source part model.

Interactive Virtual App Notes and Eval Boards

Component manufacturers can enhance the evaluation process by deploying cost-effective and instantly available virtual evaluation boards. These boards can be accessed directly from the manufacturer’s website, eliminating the need for logistics, assembly, and shipping. Virtual evaluation boards offer the advantage of being application-specific, allowing end customers to explore the capabilities of a component in a system context and address performance issues upfront.

Accelerating Resilient Electronics Systems Designs

By leveraging part models, first-pass success rates are improved as risks associated with time delays and human error are minimized. Part models enable advanced searches for specific component features, facilitating easier selection and incorporation into designs. Furthermore, the design capture process is transformed, significantly reducing the time required to capture complex designs. The overall impact is substantial time and budget savings, eliminating the need for extensive model content searching, resolving data integrity errors, decoding naming conventions, and avoiding unnecessary respins.

Summary

The electronics industry is undergoing a significant transformation by converting component data into industry-standard digital twins. This shift from PDF-based interactions to high-bandwidth, intelligent digital part model threads will unleash innovation and revolutionize the industry. It will accelerate the design process, increase profitability, and unlock the full potential of engineering teams. This digital transformation will have a profound impact on all stakeholders in the electronics value chain, enabling greater efficiency and fostering new levels of innovation.

The whitepaper will be a valuable read for everyone involved in the design and manufacturing of electronic products.

Also Read:

DDR5 Design Approach with Clocked Receivers

Getting the most out of a shift-left IC physical verification flow with the Calibre nmPlatform

Securing PCIe Transaction Layer Packet (TLP) Transfers Against Digital Attacks