3DIC Verification Methodologies for Advanced Semiconductor ICs
by Kalar Rajendiran on 06-06-2024 at 10:00 am

At the recent User2User conference, Amit Kumar, Principal Hardware Engineer at Microsoft, shared the company’s experience from building a 3DIC SoC and highlighted the Siemens EDA tools that were used. The following is a synthesis of core aspects of that talk.

3DIC Challenges

Despite the numerous advantages of 3DIC technology, its adoption is accompanied by several challenges. These include higher overall system costs, lower yield for the whole system, increased power density and thermal management difficulties, design complexity, Through-Silicon Vias (TSV) overhead, timing variation uncertainties, the need for new testing and verification methods, hierarchical and modular design requirements, and a lack of standards. Addressing these challenges requires collaborative efforts from industry stakeholders to develop innovative solutions, adopt best practices, and establish industry standards, ultimately unlocking the full potential of 3DIC technology for next-generation electronic systems.

3DIC Flow & Data Management Challenges

3DICs introduce a host of challenges, particularly in design flow and data management. Integrating heterogeneous technologies and managing complex interconnections across multiple layers demand robust data management solutions and comprehensive verification methodologies. Ensuring design integrity and reliability while navigating the intricate 3DIC landscape requires meticulous attention to detail and innovative approaches to data handling.

TSMC 3DBlox Open Standard

3DBlox is an open standard that promotes interoperability and collaboration across the semiconductor industry. TSMC’s 3DBlox 1.0 laid the foundation for 3D integration, enabling designers to stack logic and memory dies with TSVs for improved performance and power efficiency. Building upon this foundation, TSMC unveiled 3DBlox 2.0, further enhancing the scalability and flexibility of 3DIC designs with advanced packaging options and improved interconnect technologies. This includes the ability to stack chips manufactured on different process nodes, enabling heterogeneous integration and maximizing design flexibility.

Design, Physical Verification, Reliability Analysis for 3DIC

Designing a 3DIC involves a multifaceted process encompassing design, physical verification, and reliability analysis. Designers must meticulously craft layouts that optimize performance, minimize power consumption, and ensure compatibility with heterogeneous technologies. Physical verification and reliability analysis are equally critical, encompassing checks for Design Rule Violations (DRV), Layout versus Schematic (LVS) verification, and reliability assessments such as thermal analysis and electromigration checks.

3DIC Verification Flow

Microbump libraries and Power Delivery Network (PDN) design are crucial for efficient signal routing and power distribution in 3DIC design. Microbump libraries offer optimized configurations for inter-die connections, while the PDN ensures robust power distribution for high-performance 3DICs. Verification at the die, package, and interposer levels is vital for seamless integration. Die-level verification ensures compliance and reliability, while package and interposer verification validate system integrity, covering signal, thermal, and mechanical aspects. The verification flow includes standalone verification of package-interposer layers, integrated verification, and staged approaches for early issue identification and resolution, ensuring the integrity and reliability of 3DIC designs.

Physical Verification Using Siemens EDA Tools

Siemens XSI (Xpedition Substrate Integrator) and Calibre 3DSTACK together offer a comprehensive solution for validating the integrity, functionality, and manufacturability of 3DIC designs. These tools leverage heterogeneous data sources, including ODB++, GDS, OASIS, LEF/DEF, and Verilog formats, to build a full system model encompassing all components and layers of the 3DIC design. They generate system-level netlists suitable for Layout vs. Schematic (LVS) and Static Timing Analysis (STA), enabling comprehensive verification of connectivity and timing characteristics. With support for detailed connectivity visualization, device transformation, and creation of interconnected devices, XSI and Calibre 3DSTACK facilitate the seamless integration and validation of 3DIC designs, ensuring successful development and deployment of high-performance and reliable solutions.

Calibre 3DSTACK

Calibre 3DSTACK is essential for verifying the integrity of stacked dies in 3DIC design. It utilizes data from XSI and specific physical details to configure checks for design rules and connectivity. The tool offers a range of checks, focusing on port and text attributes, which users can customize based on their requirements. By detecting issues like floating texts and verifying port connectivity, Calibre 3DSTACK ensures the reliability and manufacturability of 3DIC designs. It integrates seamlessly with XSI, enabling accurate verification and analysis for high-performance 3DIC solutions.

XSI Utilities and Automation

XSI simplifies 3DIC design projects with its utilities and automation features. It enables project database creation, data extraction, and setup propagation for efficient project management. The tool’s bump file splitting utility categorizes bump data, while automatic text identification and alignment streamline text manipulation. Property propagation ensures consistency, and runset creation automates connectivity checks, enhancing verification efficiency. These capabilities enhance productivity and accuracy in 3DIC design workflows, leading to optimal results in design and verification.

Optical Shrink and Thermal Expansion Handling

Optical shrink and thermal expansion pose unique challenges in 3DIC design, necessitating specialized methodologies and tools for accurate modeling. Optical shrink refers to feature distortion during lithography, while thermal expansion affects stacked die stability. XSI and Calibre 3DSTACK support die shrinking and optical shrink, ensuring functionality while reducing feature sizes. Verification tools address die shrinking mismatches, ensuring proper alignment and connectivity between stacked dies. Thermal expansion coefficients are considered to predict and mitigate package expansion effects. Thermal mechanical analysis evaluates thermal expansion impact on 3DIC stack integrity.

Summary

The journey towards realizing the full potential of 3DIC technology is marked by challenges and opportunities. From data management and design flow challenges to physical verification and reliability analysis, each aspect of 3DIC design demands meticulous attention and innovative solutions. By leveraging cutting-edge tools and methodologies, designers can navigate the complexities of 3DIC design and unlock new possibilities for high-performance and compact electronic systems. Siemens EDA is working closely across the ecosystem to deliver cutting edge tools and methodologies that support multi-vendor flow.

Also Read:

Rigid-flex PCB Design Challenges

Will my High-Speed Serial Link Work?

Enabling Imagination: Siemens’ Integrated Approach to System Design


Mastering Copper TSV Fill Part 3 of 3
by John Ghekiere on 06-06-2024 at 8:00 am

Establishing void-free fill of high aspect ratio TSVs, capped by a thin and uniform bulk layer optimized for removal by CMP, means fully optimizing each of a series of critical phases. As this 3-part series has shown, the conditions governing outcomes for each phase vary greatly, and the complexity of interacting factors means that starting from scratch poses an empirical pursuit that is expensive and of long duration.

Robust and void-free filling of TSVs with copper progresses through six phases as laid out below:

  1. Feature wetting and wafer entry (previous article)
  2. Feature polarization
  3. Nucleation
  4. Fill propagation
  5. Accelerator ejection
  6. Bulk layer plating
  7. (Rinsing and drying, which we won’t cover in this series)

Fill Propagation

Ok, so, now that we’ve reached the third part of this 3-part series about TSV fill, we get to talk about the fill part of TSV fill.

We wetted the features completely and our dwell step ensured that copper cations and accelerator molecules were able to gather in the bottom of the via. We initiated a potential on the system, driving a current that causes copper deposition to begin. If our via sidewall (especially low down) was characterized by significant roughness, we spiked that current density to get an even initiation of deposition. All that’s left is to let fill happen. Right?

Well, actually…yeah, largely that should be right. If we chose a good chemistry and we were careful in setting up polarization, confirming the initiation of bottom up fill through FIB/SEM inspection, the fill “should” go as planned.

Here’s where we see the delicate interaction of our organic additives at play. Suppressor, coating the upper wall of the via (as well as the entire surface of the wafer), increases the over-potential required to reduce the ions to metallic copper. Meanwhile, accelerator is adsorbed in concentration on the via floor and to some degree slightly up the via sidewall near the bottom. And, whereas the suppressor is busy making it harder for copper cations to be reduced, the accelerator is making it easier. Current is flowing, and, because in copper plating the plating efficiency approaches 100%, each electron that passes takes part in the reduction of copper ions and thus the formation of copper metal. Thus copper deposition is happening and it is happening in the one place where we made it easiest. The via bottom.

We have flipped the physics on its head.

A few interesting things happen now. As we shared, it takes real time for a cupric ion to diffuse to the via bottom. In the case of a 10X100 micron via, it may take as much as 5 seconds!! Thus the current density must be kept high enough to plate at a useful rate but low enough to avoid consuming the ions faster than they can diffuse down from the top of the via. Technically, it is possible to work a model to estimate this rate. Trial and error works too if you have access to wafers and inspection.
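
As a flavor of what such a model looks like, here is a minimal one-dimensional diffusion estimate in Python. The diffusivity is an assumed textbook-range value for cupric ions in an acid copper bath, not a measured bath parameter:

    # Back-of-envelope check on ion diffusion time into a blind via.
    # 1-D estimate: t ~ L^2 / (2*D). D is an assumed typical value for
    # cupric ions in an acid sulfate bath, not a vendor number.
    D_CU = 5e-6                       # cm^2/s, assumed diffusivity
    depth_cm = 100e-4                 # 100 micron via depth, in cm
    t = depth_cm**2 / (2 * D_CU)      # seconds
    print(f"~{t:.0f} s to diffuse to the via bottom")
    # -> ~10 s: the same order as the ~5 s quoted above; the exact
    #    figure depends strongly on the diffusivity and bath convection.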

It’s time for a meaningful aside here.

First, it is not necessary to commit an entire wafer to each TSV fill test you run. TSV wafers can be cleaved into pieces (we typically call them coupons) and fill can be optimized by running the tests on the individual coupons. This saves a LOT of wafers. A coupon 4cm X 4cm would suffice. At this size (depending on feature layouts) you could possibly cleave 7 or 8 coupons from a 150mm TSV wafer and as many as 18 from a 200mm. That’s a lot more tests!

There are some tricks to successfully mounting and plating coupons and the supplies that make it easy are readily found in online stores like Amazon. If interested to learn more, hit us up at info@techsovereignpartners.com.

Once you’ve gotten fill optimized on coupons, transferring the recipe to a full wafer is all about scaling the current to maintain the current density (pay close attention to the actual open area of the wafer); and making sure deposition rates are equal across the wafer diameter. Cross-wafer uniformity establishment is a topic all on its own not only applicable in TSV fill but in plating generally; we won’t cover in today’s discussion.

That’s it for our aside, back to propagation of fill. Superconformal deposition, i.e. fill, is happening. The sidewall copper of the via is staying thin but the bottom is getting thicker. If it is not, and instead deposition is conformal (happening on both sidewalls and bottom) or sub-conformal (happening really only on the walls and not the bottom), then there are two likely root cause possibilities (and both may be at play).

Before we explore root causes, let’s talk about how one determines whether super conformal fill is happening in the first place. As we said, and you likely already know, TSV fill can be a long process step, even an hour or more. FIB/SEM is a quite indispensable means of inspection for determining the performance of fill. But even the FIB cut can take a long, long time and FIB time is expensive. For these reasons and more, we recommend that, when working to optimize fill, you do not attempt to fill the entire via. Aim, instead, for a 1/4 or 1/3 fill, i.e. a partial fill. This makes each test go much faster and the FIB cuts faster too. More importantly, though, it shows you with clarity whether superconformal fill is happening or not. As we said in the previous article, if you did not get polarization right in the beginning, no recipe adjustment is going to fix it. Continue optimizing through imaging of partially filled vias until you confirm that copper growth is minimal or undetectable on the upper sidewalls and robust on the bottom.

Figure 1. Partial Fill of 10X100 micron via. Images courtesy ClassOne Technology

Meanwhile, back to root causes when this is not evident. As stated, we likely have two. They are:

  1. The current setpoint is not optimal.
  2. The additive ratios are not optimal.

First, current setpoint. Here’s a dirty little secret about “setting current”: you are actually setting potential (voltage). The cell (electrolyte, hardware of the reactor and the wafer itself) then determines a current based on that potential. Ideally, the power supply you use does allow a current setpoint, but what it is actually doing is aiming for a potential that it expects will produce your target current, and adjusting based on ammeter output to settle on target. But there’s more: it isn’t really current you are after; it’s current density. And now the geometry, size and count of vias play a direct part in converting that current into a current density. Your chemistry vendor ideally has provided you with a target current density, from which you can calculate a target current based on the dimensions and number of vias on your wafer. If no target current density is provided, you’ve bought a ticket on the DOE express and you have some work to do.
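
As a sketch of that conversion, the snippet below turns an assumed vendor current density into a current setpoint from via geometry and count. Every number is a placeholder, and the area model (field area plus via sidewalls) is a simplification:

    import math

    # Convert a vendor current density target into a current setpoint.
    # All numbers here are placeholder assumptions for illustration.
    J_TARGET = 5e-3                 # A/cm^2, assumed vendor target
    OPEN_AREA = 160.0               # cm^2, exposed field area of the wafer
    VIA_D, VIA_H = 10e-4, 100e-4    # via diameter and depth, in cm
    VIA_COUNT = 2_000_000           # vias on the wafer, assumed

    # A via's floor replaces the field circle it punched out, so only
    # its sidewall adds plateable area.
    sidewall = math.pi * VIA_D * VIA_H
    area = OPEN_AREA + VIA_COUNT * sidewall
    print(f"{area:.0f} cm^2 plated -> set ~{J_TARGET * area:.2f} A")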

Second is the additive ratios. Really, your chemistry vendor has almost certainly provided you with target additive ratios based on their own studies. You are certainly welcome to perform your early optimization tests by varying these ratios, however we would recommend trusting those ratios at the start. Crafting and executing a DOE around just current setpoints will be time consuming already. Adding a second parameter will of course greatly expand the test set, but including additive ratio tests also means dumping and re-pouring the bath again and again. Again, the chemistry vendor has likely determined an optimal ratio of additives. If they have not, or you have a via of highly non-standard dimensions, you may not be able to escape additive ratio testing.

So, through attentive preparation and testing, you established good bottom up fill, as verified by FIB/SEM imaging. In other words, the floor of the via is rising. This is exciting and you deserve a drink. But don’t forget, you were only doing partial fills so you are actually not done at all.

There are a few effects of this rising floor phenomenon. First, accelerator, generally speaking, does not get integrated into the plated copper, meaning that it continues to “ride the floor” as the via fills. Any accelerator adsorbed to the lower sidewalls is now getting swept up and added to the accelerator on the floor, increasing the concentration further. Incidentally, sidewall suppressor is getting displaced along the way. Acceleration is accelerating, in a sense. Fill may be moving faster in terms of vertical growth. That’s nice for throughput.

The other thing that’s happening is that the long, long time it was taking for copper ions to get down to the via bottom is getting shorter and shorter because the via is getting shorter and shorter. Your high aspect ratio via is becoming a moderate to low aspect ratio via. So this means, if throughput matters to you, you can edit the recipe such that it starts as a “high aspect ratio via” recipe and modulates to a “moderate” and finally “low aspect ratio via” recipe. You can run what we’d commonly call a stair-step recipe, as sketched below. Which means that, at the start of fill, we use a relatively low current that maintains good fill (based on the current density recommended by your chemistry vendor and your via dimension calculations), then at some point later, when the via is shorter (for a 10X100 micron via, maybe a quarter shorter), the current can be stepped up higher. And then again later, and then again. Don’t get me wrong, fill “should” progress fine without stepping up the current. But faster is better. A stair-step recipe well constructed should fill a 10X100 micron via in under an hour. Without the stair-step, this fill time can be much, much longer.
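
Here is a minimal Python sketch of such a stair-step schedule. The step points and multipliers are invented for illustration; in practice you would derive them from your coupon FIB/SEM results:

    # Sketch of a stair-step fill recipe: step the current up as the
    # via fills and its effective aspect ratio drops. Step points and
    # multipliers are invented; derive real ones from coupon FIB/SEM.
    BASE_A = 1.0   # current giving good bottom-up fill at full depth

    STEPS = [(0.00, 1.0),   # high aspect ratio: go gently
             (0.25, 1.5),   # a quarter filled: step up
             (0.50, 2.0),   # moderate aspect ratio
             (0.75, 3.0)]   # low aspect ratio: diffusion relaxes

    def setpoint(fill_fraction):
        """Stair-step current for a given fraction of the via filled."""
        current = BASE_A * STEPS[0][1]
        for threshold, mult in STEPS:
            if fill_fraction >= threshold:
                current = BASE_A * mult
        return current

    for f in (0.0, 0.3, 0.6, 0.9):
        print(f"{f:.0%} filled -> {setpoint(f):.1f} A")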

Figure 2. Fully Filled 10×100 micron via (FIB/SEM). Image courtesy of ClassOne Technology

Accelerator Ejection

Ok the via floor is approaching the top! This is great news. Unless you forgot about the brakes. Remember, we basically super charged the via floor with spicy, spicy accelerator molecules. They’re doing their job and that via floor is rushing up to the top of the hole. We’d like it to conveniently stop, pretty please, immediately at the top. But will it? In fact, it will not without proper attention and mounds will form above the vias, creating major problems in CMP, typically leaving un-polished copper bumps over every via. This is of course a no go for subsequent steps in the integration.

So what to do about it. The easy button is to, well, not get cheap when selecting a TSV chemistry. Chemical displacement is your friend when it comes to accelerator ejection and the avoidance of forming copper humps above your vias. A carefully and properly formulated leveler polymer is key here. In advanced TSV chemistries, the leveler magically ejects the accelerator as it reaches the top of the via hole. Easy.

Ok, what if your TSV chemistry works fine but doesn’t do the magic ejection thing? There is still a solution for you. You can force the ejection of ALL additives from all surfaces of what is now essentially a flat, copper-coated wafer with no holes in it. Doing so requires two things: the right timing and the right power supply. First, timing: you need to know when, in your recipe, the copper has reached the top of the hole. Again, this can be done in coupon testing for recipe optimization. At the point when copper has reached the top, you can introduce a reverse pulse of the power supply in your recipe. Hence the second point: you need the right power supply. A supply that is capable of pulsing is very valuable in copper plating. A power supply that can pulse with a forward potential (one that deposits copper onto the wafer) and can also pulse with a reverse potential (one that removes copper from the wafer) provides even more flexibility.

Introducing a brief pulse in reverse will cause all additives to eject from the entire wafer surface. The step can be very short because the ejection is nearly instantaneous. We would recommend a short step, perhaps 50 milliseconds, and for simplicity we would recommend using the initial current from the beginning of your fill (not the high-current nucleation step). Now you’ll return to forward potential for the bulk layer plating, and additive adsorption will be what it will be; you don’t really care, because the entire surface will see the same species adsorption now that the advantages associated with tall via holes are eliminated.
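
In recipe terms, the ejection step is just one brief reverse-polarity entry between fill and bulk plating. A minimal sketch, where negative current stands for reverse potential and all durations and currents are placeholders:

    # Recipe tail with the additive-ejection reverse pulse. A negative
    # current here stands for reverse potential (deplating). All values
    # are placeholders; the pulse timing comes from coupon tests that
    # locate the moment copper reaches the top of the vias.
    INITIAL_FILL_A = 1.0

    recipe_tail = [
        # (step,             duration_s, current_A)
        ("finish via fill",  600.0,      +INITIAL_FILL_A),
        ("eject additives",  0.050,      -INITIAL_FILL_A),  # ~50 ms reverse
        ("bulk layer plate", 900.0,      +8.0),             # crank it up
    ]

    for step, secs, amps in recipe_tail:
        mode = "reverse" if amps < 0 else "forward"
        print(f"{step:<18} {secs:>8.3f} s  {abs(amps):.2f} A ({mode})")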

Figure 4. Oblique view of final bulk layer film (SEM). Image courtesy of ClassOne Technology

Bulk Layer Plating

Vias are filled. There were no voids. Well, you hope there’s no voids. I’m not sure you really checked every via. Anyway, you also ejected all that accelerator you worked so hard to adsorb (sigh). And now you need a layer of sacrificial copper on top of it all so that the CMP folk can polish it back to leave a perfectly flat hybrid surface of silicon perforated by neat little copper circles belying tall and slender, buried pillars of pristine copper.

Well, generally speaking, you can crank that power supply up now. As long as you’ve established good cross-wafer plating rate uniformity, you can go as fast as the chemistry can keep up with. 40 amps on a 300mm wafer is legitimate with most TSV chemistries. We can do the math. That’s about 18 amps on a 200mm wafer and 10 amps on a 150mm wafer.
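
That math is a one-liner: hold the current density constant and scale the current with wafer area. A quick check, assuming full-face plating with no edge exclusion:

    # Hold current density constant while scaling wafer diameter.
    # Assumes full-face plating with no edge exclusion.
    def scaled_current(i_ref, d_ref_mm, d_new_mm):
        """Scale plating current with wafer area (diameter squared)."""
        return i_ref * (d_new_mm / d_ref_mm) ** 2

    for d in (200, 150):
        print(f"{d} mm wafer: {scaled_current(40.0, 300, d):.1f} A")
    # -> 17.8 A (~18 A) and 10.0 A, matching the figures above.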

So, you’re good at this and that film is very uniform. The question is, how thick do you make it? The answer is, I don’t really know for sure, sorry. For that you’ll have to talk to your CMP people. CMP, of course, takes advantage of insitu uniformity tracking and can make closed loop adjustments to maintain uniform removal. Exactly how quickly a uniform polish can be established depends on the optimization and the capabilities of that tool set. Obviously, CMP is one of the more expensive unit processes in the fab and financial controllers and fab managers will be interested to keep those costs as low as is practical. No, they’ll probably want costs lower than practical but that’s why they hired you. So, this means keeping that film as thin as possible, both from the plating perspective (maintaining uniformity) and the CMP perspective (uniform polish). That sounds like more DOE work. I hope you’ve looked up what “DOE” means by now.

If you have questions about TSV fill or some of the sub-topics raised in these articles, do reach out at info@techsovereignpartners.com

Also Read:

Mastering Copper TSV Fill Part 2 of 3

Mastering Copper TSV Fill Part 1 of 3


Arteris is Solving SoC Integration Challenges
by Mike Gianfagna on 06-06-2024 at 6:00 am

SoC integration is clearly getting more demanding. Driven by process node density, multi-chip integration and seemingly never-ending demands for more performance at lower power, the hurdles continue to increase. When you consider these challenges in the context of Arteris, it’s natural to think about hardware integration and the benefits a network-on-chip (NoC) technology offers. But the problem is much bigger than hardware integration, and so is the portfolio of solutions offered by Arteris. Managing all the information associated with a complex design and ensuring all teams are working off the same verified information is a daunting task. What you don’t know can hurt you. I recently had the opportunity to learn about the broader solution Arteris offers. Read on to see how Arteris is solving SoC integration challenges.

An Overview of the Problem

While most folks immediately think “NoC” when they hear Arteris, the company is assembling a much broader portfolio of solutions. The common thread is that they all facilitate more efficient and predictable connectivity and assembly of the design. At the highest level, the way I think about the company’s role is this – focus on your unique design idea and let Arteris help you put it all together.

Insaf Meliane

I was able to spend some time recently with Insaf Meliane, Product Management & Marketing Manager at Arteris. Insaf has been with Arteris for three years. She came to Arteris through the company’s acquisition of Magillem, where she worked for four years. She has a rich background in chip design from places such as NXP Semiconductors, STMicroelectronics and ST-Ericsson. Insaf was able to enlighten me about how Magillem’s technology helps Arteris deliver a holistic solution to the SoC integration problem. She began by describing the problem.

The information Insaf shared was, in part, based on actual customer discussions. She described SoC integration in terms of the massive amount of hardware and software information each design creates, and how that data interacts with the design flow and all the teams using it.

She described current designs that can contain from 500 to more than 1,000 IP blocks and from 200K to more than 5M registers. She pointed out that the management of this sea of IP blocks is getting more critical to ensure a successful project. Specific challenges include:

  • Content integration from various sources (soft IP, configurable 3rd party IP, internal legacy…)
  • Design manipulation (derivative, repartitioning, restructuring)

Against this backdrop, traditional design methodologies with homegrown solutions are reaching their limits and there is significant manual effort across a large variety of design tasks. She explained that many different forms of design information exist. For example, spreadsheets, Docx, IP-XACT, and SystemRDL to name a few.

This results in limited cross-team synergy. Miscommunication can lead to specification misalignment, which can cause huge problems in the form of late fixes or even re-spins if issues cannot be resolved in software. Management of all this information in a coherent, reliable way can deliver the margin of victory for a complex design project.

The Arteris Approach with Magillem

Insaf described a typical design project where one team uses register documentation, another team uses a verification environment, a third is turning a netlist into a layout and a fourth is developing the primary architecture with top RTL.  The question is, how can you find all inconsistencies in each of these (independent) sources of information and what are the consequences of missing something?

Deploying a single source of truth for the myriad of design information is the answer, and the path to a better design. It turns out IP-XACT offers an industry-standard, proven data model. It supports systematic IP reuse and IP interoperability and facilitates IP packaging for efficient handling of all aspects of system integration such as connectivity, configurability and the hardware/software interface. It also offers a solid API to access all design data with a powerful scripting environment.
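
Because IP-XACT is plain XML, that API-level access is easy to approximate even without vendor tooling. Here is a minimal Python sketch that pulls register names and offsets out of a component description; the namespace assumes the IEEE 1685-2014 schema, and the file name is hypothetical:

    import xml.etree.ElementTree as ET

    # Pull register names and offsets from an IP-XACT component file.
    # The namespace assumes the IEEE 1685-2014 schema; adjust to match
    # the schema version your files actually use.
    NS = {"ipxact": "http://www.accellera.org/XMLSchema/IPXACT/1685-2014"}

    def list_registers(path):
        """Yield (register name, address offset) pairs from a component."""
        root = ET.parse(path).getroot()
        for reg in root.iterfind(".//ipxact:register", NS):
            yield (reg.findtext("ipxact:name", namespaces=NS),
                   reg.findtext("ipxact:addressOffset", namespaces=NS))

    # for name, offset in list_registers("my_ip.xml"):  # hypothetical file
    #     print(name, offset)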

As a major contributing member of the SPIRIT/Accellera consortium (driving IP-XACT standards since inception, and co-chair of the IP-XACT 2021 Accellera committee), Arteris has proven experience with IP-XACT usage. The figure below summarizes the benefits of the approach.

Single Source of Truth for Better Quality Design

This approach delivers a unified environment that can be used by all teams to maximize collaborative work and data accuracy. It allows for the development of a vendor-independent flow and facilitates portability of the design environment to third parties, partners, and customers.

This is the architecture of the Magillem products from Arteris, and it delivers constant repeatability for design derivatives, projects, and design flows. The result is higher productivity and quality with a consistent end-to-end design flow solution. Insaf elaborated on the many capabilities delivered by the Magillem tool. A link is coming so you can dig deeper, but here is a summary list:

For connectivity

  • Project Management: Design navigation and data aggregation
  • Parameters Configuration: Hierarchical propagation or overriding
  • SoC Assembly: Bus i/f detection, rule-based connectivity, bus/signal split/tie/open, hierarchical connection, glue logic insertion, feedthrough
  • Hierarchy Manipulations: Move, merge, and flatten a physical/virtual hierarchy for RTL restructuring/partitioning
  • Platform Derivatives: With the incremental design, automatic update, and design diff and merge capability
  • Comprehensive Checkers: Catch errors as you enter the design information before running any simulation
  • Advanced Generation Capability: RTL Netlist generation, connectivity report in addition to makefile scripts for an extensive range of EDA tools
  • Tool Integration: Tight link with the Registers tool to generate a system address map when both tools are combined

For registers and memory map

  • Single Database: Import and capture memory map information into a single database (IP-XACT)
  • Parameterization: including configurable properties, custom specific access types and register modes
  • Comprehensive Checkers: Catch errors early in the process with built-in and custom checkers
  • Standard Formats Support: Output standard formats for HW design and verification, embedded SW, and documentation (see the sketch after this list)
  • Custom Templates: Advanced generation capability with support for custom template-based generators
  • Merge/Flatten IP Memory: Enable easy update/manipulation/creation of new global memory map for a sub-system or SoC
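
As a toy illustration of the one-database, many-outputs idea noted in the list above, here is a Python sketch that emits a C header for embedded software from a small register table. The block name, base address, and registers are invented; a real flow would read them from the IP-XACT database:

    # Emit a C header for embedded software from a register table.
    # The block name, base address and registers are invented; a real
    # flow would pull them from the IP-XACT database.
    REGS = [("CTRL", 0x00), ("STATUS", 0x04), ("IRQ_EN", 0x08)]

    def emit_c_header(block, base):
        lines = [f"#define {block}_BASE 0x{base:08X}u"]
        lines += [f"#define {block}_{name} ({block}_BASE + 0x{off:02X}u)"
                  for name, off in REGS]
        return "\n".join(lines)

    print(emit_c_header("MYIP", 0x40000000))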

Insaf spent some time on the restructuring capabilities offered by Magillem. This is a very powerful capability to address power and floor-planning constraints. Features include:

  • Automated hierarchy manipulation from the connectivity tool
  • Separating RTL hierarchy and physical hierarchy
  • Feedthrough connections for abutted floorplan
  • Hard macros replication

There are many uses for RTL restructuring, including:

  • to adapt to new design changes
  • to create design derivatives
  • to optimize the netlist for physical design
  • to improve the overall congestion of the design
  • to meet SoC’s power and floor-planning constraints
  • to disaggregate the SoC into chiplets for multi-die designs

She explained these capabilities can reduce design time from weeks to 1-2 days. Indeed the tool automatically updates the design connections after restructuring, enabling quick and safe adaptation before delivering a formally equivalent RTL netlist to the physical design team.

The figure below provides a broad overview of all the capabilities Arteris delivers for SoC integration along with the standards and EDA tools that are supported.

Arteris SoC Integration Automation

A high-level summary of these capabilities includes:

  • Connectivity:
    • IP packaging
    • Design assembly
  • Registers:
    • Memory map intent capture
    • Hardware/software interface generation
  • Suite combining connectivity and registers: system map validation

To Learn More

 I’ve only touched on the basics here. The capabilities offered by Arteris to effectively manage and integrate the information associated with a complex design is substantial. You can learn more about Magillem’s connectivity capabilities here and Magillem’s register management capabilities here. You can also contact Arteris to discuss your specific requirements. And that’s how Arteris is solving SoC integration challenges.


What to Do with All that Data – AI-driven Analysis Can Help
by Rob vanBlommestein on 06-05-2024 at 10:00 am

Today’s advanced node chip designs are faced with many new complexities from design and verification down to manufacturing. The solutions used at every stage of chip development generate petabytes of data. Managing, analyzing, understanding, and acting upon that data is overwhelming and paralyzing. Manual interpretation of that data is nearly impossible and at best leads to surface level analysis.

AI has the unique ability to sift through the vast amount of data to identify anomalies and patterns and produce actionable insights that can have significant impact on productivity, design quality and lifecycle, and manufacturing.
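
To give a flavor of the kind of screen these platforms automate at vastly greater scale and sophistication, here is a minimal Python sketch that flags parametric test outliers with a simple sigma threshold; the leakage readings are invented sample data:

    import statistics

    # Flag parametric test values more than k standard deviations from
    # the mean. The leakage readings are invented sample data.
    def flag_outliers(values, k=2.0):
        mu = statistics.fmean(values)
        sigma = statistics.stdev(values)
        return [i for i, v in enumerate(values) if abs(v - mu) > k * sigma]

    leakage_na = [12.1, 11.8, 12.4, 12.0, 48.7, 11.9, 12.2]
    print(flag_outliers(leakage_na))  # -> [4], the anomalous die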

Synopsys is hosting the Synopsys.ai Data Analytics Webcast Series to dive deeper into how AI can be used to unlock, connect, and analyze the immensity of data to maximize efficiencies and quality across the full design-to-silicon lifecycle.

The webcast series is segmented into three parts: AI-driven Silicon Analytics, AI-driven Design Analytics, and AI-driven Manufacturing/Fab Analytics.

Integrated SLM Analytics from Design Through Manufacturing

The first presentation in the series takes a look at leveraging design, test, and manufacturing data by automatically highlighting silicon data outliers for improved chip quality, yield, and throughput with Synopsys Silicon.da. It is the first integrated SLM analytics solution that addresses post-silicon challenges by increasing engineering productivity, improving silicon efficiency, and providing the tool scalability needed for today’s advanced SoCs. Silicon.da serves a critical role as part of an overall SLM solution dedicated to improving the health and operational metrics of a silicon device across its complete lifecycle.

In this presentation, Mr. Anti Tseng, Senior Manager at MediaTek, will explain how Silicon.da’s volume diagnostics feature identified systematic issues more efficiently than traditional methods by providing very accurate failure locations within the silicon resulting in improved yield by a single digit percentage and in a shorter amount of time – from weeks to days. Mr. Tseng will also further discuss how utilizing this volume diagnosis analysis technology improves the foundry process for advanced nodes resulting in millions of dollars of cost savings through high volume chip production for fabless companies.

Maximize Productivity with Deep Insights into PPA Trajectories

The second presentation in the series targets design engineers and shows them how to uncover actionable design insights that accelerate the design process with Synopsys Design.da, the industry’s first comprehensive data-visibility and analytic-driven design optimization and signoff closure solution.

The webcast will show how to leverage vast datasets to bring unmatched productivity and a better, faster, and smarter way to design. Techniques will be highlighted on how to siphon metrics data while curating associated analysis data efficiently and automatically to pinpoint areas of focus in real time and perform analysis to identify PPA bottlenecks and their root causes. The solution automatically classifies design trends, identifies limitations, and provides prescriptive guided root-cause analysis across the entire design flow.

Comprehensive AI-Driven Process Analytics for Faster Ramp and Efficient High-Volume Manufacturing

The third presentation takes a deeper dive into analyzing data collected throughout the manufacturing process to improve fab yield and throughput, enabling a faster ramp and more efficient high-volume manufacturing (HVM) by utilizing Synopsys Fab.da.

The challenges before semiconductor fabs are expansive and evolving. As the size of chips shrinks from nanometers to eventually angstroms, the complexity of the manufacturing process increases in response. To combat the complexity and sheer intricacy of semiconductor manufacturing, innovative software solutions are required. Synopsys Fab.da is a comprehensive process control solution that utilizes artificial intelligence (AI) and machine learning (ML) to allow for faster production ramp and efficient high-volume manufacturing. Fab.da is a part of the Synopsys.ai Data Analytics solutions, which bring together data analytics and insights from the entire chip lifecycle. It can analyze many petabytes of data originating from thousands of pieces of equipment in semiconductor fabs with zero downtime.

Learn how the power of AI can help drive your data analytics. Register for these webcasts today.

Also Read:

Synopsys Accelerates Innovation on TSMC Advanced Processes

SoC Power Islands Verification with Hardware-assisted Verification

Synopsys is Paving the Way for Success with 112G SerDes and Beyond

Lifecycle Management, FuSa, Reliability and More for Automotive Electronics


What’s all the Noise in the AI Basement?

What’s all the Noise in the AI Basement?
by Claus Aasholm on 06-05-2024 at 8:00 am

My dog yawns every time I say Semiconductor or Semiconductor Supply Chain. Most clients say, “Yawn…. Don’t pontificate – pick the Nasdaq winners for us!”

Will Nvidia be overtaken by the new AI players?

If you follow along with me, you might gain some insights into what is happening in AI hardware. I will leave others to do the technology reviews and focus on what the supply chain is whispering to us.

The AI processing market is young and dynamic, with several key players, each with unique strengths and growth trajectories. Usually, the supply chain has time to adapt to new trends, but the AI revolution has been fast.

Nvidia’s data centre business is growing insanely, with very high margins. It will be transforming its business from H100 to Blackwell during this quarter, which could push margins even higher. AMD’s business is also growing, although at a lower rate. Intel is all talk, even though expectations for Gaudi 3 are high.

All the large cloud providers are fighting to get Nvidia’s AI systems, but they find them too expensive, so they work on developing their chips. Outside the large companies, many new players are popping up like weeds.

One moment, Nvidia is invincible; the next moment, they will lose because of their software… or their hardware… or…

The only absolute is that everything is absolutely rotating around Nvidia, and everybody has an opinion about how long the Nvidia rule will last. A trip into the AI basement might reveal some more insights that can help predict what will happen in the future of AI hardware.

The Semiconductor Time Machine

The Semiconductor industry boasts an intricate global supply chain with several circular dependencies that are as fascinating (yawn) as they are complex. Consider this: semiconductor tools require highly advanced chips to manufacture even more advanced chips, and Semiconductor fabs, which are now building chips for AI systems, need AI systems to function. It’s a web of interdependencies that keeps the industry buzzing.

You have heard: “It all starts with a grain of sand.” That is not the case – it starts with an incredibly advanced machine from the university cities of Northern Europe. That is not the case either. It begins with a highly accurate mirror from Germany. You get the point now. Materials and equipment propagate and circulate until chips exit the fabs, get mounted into AI server systems that come alive and observe the supply chain (I am confident you will check this article with your favourite LLM).

The timeline from when a tool is made until it starts producing chips can be long: in the best case a few quarters, and up to a few years. This extended timeline allows observation. It is possible to see materials and subsystems propagate through the chain and make predictions of what will happen.

Although these observations do not always provide accurate answers, they are an excellent tool for verifying assumptions and adding insight to the decision-making process.

The challenge is that the supply chain and the observational model are ever-changing. Although this allows me to continue feeding my dog, a new model must be applied every quarter.

The Swan and the ugly ducklings

I might lose a couple of customers here, but there is an insane difference between Nvidia and its nearest contenders. The ugly ducklings all have a chance of becoming Swans, but not in the next couple of years.

The latest scorecard of processing revenue can be seen below. This is a market view not including internally manufactured chips that are not traded externally:

This view is annoying for AMD and Broadcom but lethal for Intel. Intel can no longer finance its strategy through retained earnings and must engage with the investor community to obtain new financing. Intel is no longer the master of its destiny.

The Bad Bunch

These are some of Nvidia’s critical customers and other data centre owners who are hesitant to accept the new ruler of the AI universe and have started to make in-house architectures.

Nvidia’s four largest customers each have architectures in progress or production in different stages:

  1. Google Tensor
  2. Amazon Inferentium and Trainium
  3. Microsoft Maia
  4. Meta MTIA

A short overview of the timing of these architectures can be seen here

Unlike the established chip manufacturers, Google only has real manufacturing traction, with its TPU architecture, and there are rumours that it is more than ordinary traction.

Let’s go and buy some semiconductor capacity.

As the semiconductor technology needed for GPU silicon is incredibly advanced, it is evident that all the new players will have to buy semiconductor capacity. An excellent place to start is TSMC of Taiwan. Later, TSMC will be joined by Samsung and Intel, but for now, TSMC is the only show in town. Intel is talking a lot about AI and becoming a foundry, but the sad truth is that they currently get 30% of their chips made outside, and it will take some time before that changes. Even when Intel gets the manufacturing capacity, they still need customers to switch, which is not an easy or cheap task. With new ASML equipment in Ireland and Israel, these are likely the first Intel locations to go online.

The problem for the new players is that access to advanced semiconductor capacity is a strategic game based on long-term alliances. It is not like buying potato chips.

TSMC’s most important alliances

The best way to understand TSMC’s customer relationships is through revenue by technology.

TSMC’s most crucial alliance has been with Apple. As Apple moved from dependence on Intel to reliance on its home-cooked chips, the coalition grew to the point that Apple is the only customer with access to 3nm, TSMC’s most advanced process. This will change as TSMC introduces a 2nm technology, which Apple will try to monopolise again. You can rightfully taunt the consumer giant for not being sufficiently innovative or having lost the AI transition, but the most advanced chips ever made can only be found in Apple products, and that is not about to change soon.

As a side note, it is interesting to see that the $8.7B/quarter High Performance Computing division is fuelling the combined revenue of a datacenter business approaching $25B, all of the Mac production of $7.5B, plus some other stuff. TSMC is not capturing as much of the value as its customers are.

Nvidia and TSMC

The relationship between Nvidia and TSMC is also very strong, and if Nvidia is not already TSMC’s most important customer, it soon will be. The prospects of Nvidia’s business are better than those of Apple’s.

Both the Apple and Nvidia relationships with TSMC are at the C-level as they are of strategic importance for all the companies. It is not a coincidence that you see selfies of Jensen Huang and Morris Chang eating street food together in Taiwan.

Just as Apple has owned the 3nm process at TSMC, Nvidia has owned the 4nm process. Although Samsung is trying to attract Nvidia, it is not likely to succeed, as there are other attractions to the TSMC relationship that we will dive into later.

TSMC and the rest

With a long history and good outlook, the AMD relationship is also strong, while the dealings with Intel are slightly more interesting. TSMC has a clear strategy of not competing with its customers, which Intel certainly will when Intel Foundry Services becomes more than a wet dream. Intel gets 30% of its chips made externally, and although the company does not disclose where, it is not hard to guess. TSMC manufactures for Intel until Intel is sufficiently strong to compete with TSMC. Although TSMC is not worried about the competition with Intel, I am sure they will keep some distance, and Intel is not first on TSMC’s dance card.

The Bad Bunch is also on TSMC’s customer list, but not with the same traction as the Semiconductor companies. However, they will not have a strong position if Foundry capacity becomes a concern.

As Apple moves to 2nm, it will release capacity at 3nm. However, this capacity is still limited, with revenue around a modest $2B/quarter, and needs to be expanded dramatically to cover all of the new architectures that plan to move into the 3nm hotel. Four companies are committed to 3nm, but the rest will likely follow shortly.

TSMC expects the 2024 3nm capacity to be 3x the 2023 capacity. Right now, there is sufficient capacity at TSMC, but that can change fast. Even though Intel and Samsung lurk in the background, they do not have much traction yet. Samsung has secured Qualcomm for its 2nm process, and Intel has won Microsoft. It is unclear if this includes the Maia AI processors.

TSMC’s investments

TSMC is constantly expanding its capacity, so much so that it can be hard to determine whether it is sufficient to fuel the AI revolution.

These are TSMC’s current activities. Apart from showing that Taiwan’s Chip supremacy will last a few years, they also show that the new 2nm technology needed to relieve 3nm technology is over a year away.

There are other ways of expanding capacity. It can be extended by adding more or faster production lines to existing fabs. A dive into another part of the supply chain can help understand if TSMC is adding capacity.

The combined tool sales are down, mostly in TSMC’s home base, Taiwan, and the other expansion area for 2nm, USA. This matches TSMC’s CapEx to Revenue spend (how much of revenue is spent on Capital Investments—new tools & factories).

Although TSMC is adding a lot of capacity, it might be too late to allow all the new players to get the capacity they need for their expansion. The low tool sales in Taiwan suggest that short-term capacity is not on the TSMC agenda; rather, the company is focusing on the Chips Act-driven expansion in the USA, which will delay capacity.

Samsung is not attracting attention to its foundry business, and Intel is some time away from making a difference. Even though the long-term outlook is good, there are good reasons to fear that there is not enough investment in short-term expansion of leading-edge semiconductor capacity at the moment.

A shortage can seriously impact the new players in AI hardware.

The current capacity limitation

It is not silicon that is limiting Nvidia’s revenue at the moment. It is the capacity of the High-Bandwidth Memory and the advanced packaging needed in the new AI servers.

Electrons and distance are not friends

The simplest way of describing this is that electrons and distance are not friends. If you want high speed, you need to get the processors close to each other and close to a lot of high-bandwidth memory. To do so, semiconductor companies are introducing new ways of packaging the GPUs.

The traditional way is to place dies on a substrate and wire them together (2D), but this is not sufficiently close for AI applications. They are currently using 2.5D technology, where stacks of memory are mounted beside the GPU and communicate through an interposer.

Nvidia is planning to go full 3D with its next-generation processor, which will have memory on top of the GPU.

Well, as my boss used to say, ” That sounds simple—now go do it!” The packaging companies have as many excuses as I have.

Besides having to flip and glue tiny dies together and pray for it to work, DRAM must be extremely close to the oven – the GPU.

“DRAM hates heat. It starts to forget stuff at about 85°C, and is fully absent-minded at about 125°C.”

Marc Greenberg, Group Director, Cadence

This is why you also hear about the liquid cooling of Nvidia’s new Blackwell.

The bottom line is that the capacity of this technology is extremely limited at present. Only TSMC is capable of implementing it (CoWoS—Chip-on-Wafer-on-Substrate in TSMC terminology).

This is no surprise to Nvidia, which has taken the opportunity to book 50% of TSMC’s CoWoS capacity for the next 3 (three?) years in advance.

Current AI supply chain insights

Investigating the supply chain has allowed us to peek into the future, as far as 2029, when the last of the planned TSMC fabs goes into production. The focus has been on the near term until the end of 2025, and this is what I base my conclusion on (should anybody be left in the audience). Feel free to draw a different conclusion based on the facts presented in this article:

  • Nvidia is the only show in town and will continue to be so for the foreseeable future.
  • Nvidia is protected by its powerful supplier relations, built over many years.
  • AMD will do well but lacks scale. Intel… well… it will take time and money (which they don’t have). If they pull it off, they will be a big winner.
  • The bad bunch like Nvidia’s systems but not the pricing, so they are trying to introduce home-cooked chips.
  • The current structure of the AI supply chain will make it very difficult for the bad bunch to scale their chip volumes to a meaningful level.
  • The CoWoS capacity is Nvidia’s Joker – 3 years of capacity secured, and they can outbid anybody else for additional capacity.

Disclaimer: I am a consultant working with business data on the Semiconductor Supply Chain. I own shares in every company mentioned and have had them for many years. I don’t day trade and don’t make stock recommendations. However, Investment banks are among my clients, using my data and analysis as the basis for trades and recommendations.

Also Read:

Ncredible Nvidia

Tools for Chips and Dips an Overview of the Semiconductor Tools Market

Oops, we did it again! Memory Companies Investment Strategy

Nvidia Sells while Intel Tells


Arm Client 2024 Growing into AI Phones and AI PCs
by Bernard Murphy on 06-05-2024 at 6:00 am

I wrote last year about the challenge Arm Client/Mobile faces in growing in a saturated phone market and how they put a big focus on mobile gaming to stimulate growth. The gaming direction continues but this year they have added (of course) an AI focus, not just to mobile but also to other clients, notably PCs. It would be easy to be cynical about this direction but there are now indications (see below) that AI in client applications is moving beyond promises and is translating into real products. While there is undeniable debate around risks of AI in personal devices, I believe real value with safety will ultimately win out over both hype and apocalyptic claims. Given existing momentum, product builders must be in the game to have a chance of reaping those benefits.

What’s new in Arm Client 2024

Think AI at the edge requires a dedicated AI accelerator? Think again; according to Chris Bergey (SVP and GM for the Arm Client line of business), 70% of Android 3rd party ML workloads run on Arm CPUs with no plan to move elsewhere. In support of this preference Arm continues to advance CPU and GPU platforms, this year introducing CSS – compute subsystem wrapping CPU and GPU cores, optimized and hardened now down to 3nm processes.

At the core IP level, Cortex-X925 delivers a 36% performance uplift for single-threaded processes and a 41% uplift in AI (time to first token for tiny-Llama), while the Immortalis-G925 GPU offers 37% better performance over a range of graphics tasks and 34% improvement in inference performance over a wide set of AI and ML networks. Raytracing, first introduced in 2022, now delivers 52% improved performance on complex objects. For power-saving architectures, Cortex-A725 provides 35% improved power efficiency over Cortex-A720, and the “LITTLE” CPU, Cortex-A520, has been further optimized for 15% improved power efficiency. Meanwhile Arm is showing 30% improvement in GPU power for games like Fortnite.

The Arm Client LOB have also introduced new software libraries to squeeze further application performance from these CSS-based designs. The first such libraries are Kleidi CV for computer vision and Kleidi AI for AI applications, exploiting Arm’s SVE2 and SME2 extensions. Based on a little digging around, Kleidi CV offers support for saturation arithmetic, color conversions, matrix transforms, image filters and resize with interpolation. Details are sparser on Kleidi AI but what I can find suggests support for what they call micro-kernels which allow say optimized matrix multiplications to be split into different threads across an output tensor. I think the key takeaway here is that for CSS implementations, just as hardware can be maximally optimized, low-level software functions (say for ONNX) can also be maximally optimized, which is what the Kleidi libraries aim to offer especially in signal processing and AI operations.
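
To illustrate the micro-kernel partitioning idea in the abstract, here is a toy Python/NumPy sketch that splits a matrix multiplication into independent row blocks of the output and runs them on separate threads. To be clear, this is a conceptual model only, not Arm’s Kleidi API, and the block size is arbitrary:

    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    # Toy model of the micro-kernel idea: each kernel call owns a
    # disjoint row block of the *output* matrix, so blocks can run on
    # separate threads. Not Arm's Kleidi API; block size is arbitrary.
    def blocked_matmul(a, b, block=64):
        m, n = a.shape[0], b.shape[1]
        out = np.empty((m, n), dtype=a.dtype)

        def kernel(row0):
            row1 = min(row0 + block, m)
            out[row0:row1] = a[row0:row1] @ b   # one output slab

        with ThreadPoolExecutor() as pool:
            list(pool.map(kernel, range(0, m, block)))
        return out

    a, b = np.random.rand(256, 128), np.random.rand(128, 256)
    assert np.allclose(blocked_matmul(a, b), a @ b)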

Enabling AI phones and PCs

Good story but where’s the market demand? Before I get to AI, Arm has been co-optimizing for Android with Google for improved performance in Chrome, also rippling through to handset OEM browsers, YouTube performance and lower power. Apparently, this collaborative effort is paying off as reflected in a trend back to OEMs building on the Google distribution rather than their own Android variants. (On an unrelated note, did you know that YouTube now ranks as the most popular streaming service on TVs? Food for thought in growth potential for Google and Arm.) Another example of Arm widening the moat, here again through CSS and collaborative development around a market they already dominate.

A downside for AI and CV is increasing complexity in corresponding pipelines and stacks. Proprietary ISPs and AI accelerators are appealing for added performance of course, but if a standard platform can be tuned to offer enough performance at low enough power, I can equally see a cost and time to market case for sticking to hardware platform evolution rather than revolution.

For example, the new Samsung Galaxy AI provides real-time multi language translation among other AI-based features, building on top of Google Gemini. Other phone OEMs like Oppo, Vivo and Xiaomi are introducing their own AI assistants and LLMs in search of differentiation. All sitting on Arm processing.

On the AI-enabled PC front, I’ll first refer you back to my write up on the Cristiano Amon (CEO of Qualcomm) chat at Cadence Live 2024. There he made a big deal about the convergence between phone and PCs and particularly the opportunity to reinvent the PC and the PC market through the Qualcomm Snapdragon X-Elite processor. This processor has already been barnstorming the automotive industry; if you buy a new car, chances are high that the chip behind your infotainment system is Snapdragon X-Elite. Now it’s also taking off in laptops. You can buy such a laptop from Lenovo, Samsung, Dell, HP, Microsoft and ASUS and perhaps others. Given Microsoft support for Arm-based platforms I’m sure other semiconductor systems players are looking hard at this opportunity.

Speaking of Microsoft, Satya Nadella was recently interviewed by the Wall Street Journal and was very excited. He sees AI Copilot+ PCs besting Macs, which is quite a statement given that laptops in general have had little new to offer for quite a long time. He says the new Surface platforms are 58% faster than the M3 MacBook Air and have 20% better battery life. Together with lots of opportunities to AI-enable all sorts of apps locally on the PC. (To be clear, he is talking about at least some of the AI happening on the PC, not in the cloud.) Satya name-dropped Arm multiple times during this interview, so yes, if this AI PC transition is real and market-changing, Arm Client products will also benefit from that transition.

Exciting ideas and trends. Much still to prove of course, not least around safety/privacy implications. As an inveterate optimist I believe issues will shakeout over time and useful/beneficial innovations will survive. You can read the Arm release HERE.


Is it time for PCB auto-routing yet?
by Daniel Payne on 06-04-2024 at 10:00 am

PCB designers have been using manual routing for decades, so when is it time to consider adding interactive routing technologies to become more productive? Manually routing traces to connect components takes time from a skilled team member and involves human judgement that can introduce errors. When a design change comes in, iterating the PCB layout manually can be a slow process, leading to project delays. Growing complexity in PCB designs, caused by higher component density and boards using multiple layers, really challenges manual routing approaches. Finding PCB designers with the expertise to complete manual routing efficiently can be another burden.

PCB automation tools can quickly connect components, which saves designers time and effort. A designer can then spend their time on other challenges, like signal integrity validation, lowering interference and meeting design constraints. An automated tool will produce consistent results, lowering errors. With automation, trace widths are more uniform and clearances are maintained, preventing the need for revisions. Even when a design change arrives, the automated routing approach tackles the task quickly. The learning curve for automation tools is brief, so payback happens quickly. Automation improvements depend on design complexity, routing automation quality, designer skills and project requirements.

The industry isn’t at the level of full automation for PCB routing, so don’t expect a push-button flow for all designs. A PCB design may have unique and complex constraints and requirements that make an autorouter ineffective. Component placement, signal integrity (SI), thermal management, and EMI/EMC compliance can all require manual routing. Analog components and high-speed signals may demand manual routing to achieve the required precision. Some autorouters introduce errors, causing manual rework and unforeseen delays. Learning a new autorouter can take time, especially if the tool has too many arcane settings. Achieving a specific PCB layout style may not be possible with an automated tool. Some combination of manual and automated routing will typically yield the best results.

PCB designers are still required to handle tasks like the design concept and specification, selecting the proper components, performing signal integrity analysis, optimizing high-speed designs, routing the most critical signals, managing thermal goals, and complying with EMI/EMC requirements. Humans are also best suited to make complex trade-offs between cost, performance, size, and manufacturing. Yes, automation tools will speed up parts of a PCB project, while manual methods will remain for the more abstract tasks.

PADS Professional

Siemens has created a sketch path feature in the PADS Professional tool that enables a PCB designer to achieve high design quality and high completion rates in less time than manual routing. Users can route individual traces or hundreds of nets, whether single-ended or differential pairs. Plus, this technology automatically improves pin escapes during routing while avoiding unnecessary vias.

Sketch routing allows a PCB designer to automatically fan out, untangle, and route specific nets with hand-route quality, as shown below.

[Figures: the sketched path (“Sketch”) and the automatically completed result (“Autoroute”)]

With sketch routing a designer quickly draws where the routing should go by selecting an unrouted net and dragging the cursor along the rough path they’d like. The PADS Professional sketch router then automatically completes the routing one net at a time, with little or no cleanup required. Even with components like BGAs, sketch routing completes without using extra vias, achieving typical completion rates above 90 percent. The sketch router in PADS Professional uses dynamic push-and-shove, along with real trace routing.
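To make the guide-path idea concrete, here is a toy Python illustration of sketch-biased routing: a Dijkstra-style grid search whose cost function penalizes cells that stray from the path the user dragged. This is only a conceptual sketch under simplified assumptions (a unit grid, a single net, no push-and-shove), not Siemens’ actual algorithm, and every name in it is illustrative.

import heapq

def sketch_route(grid, start, goal, guide, stray_penalty=2.0):
    # grid: 2D list where 0 = free cell, 1 = obstacle (e.g. a keep-out region)
    # guide: list of (row, col) cells roughly traced by the designer's drag
    rows, cols = len(grid), len(grid[0])

    def dist_to_guide(cell):
        # Chebyshev distance to the nearest sketched cell
        return min(max(abs(cell[0] - g[0]), abs(cell[1] - g[1])) for g in guide)

    frontier = [(0.0, start, (start,))]   # (cost, cell, path-so-far)
    best = {start: 0.0}
    while frontier:
        cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return list(path)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = cell[0] + dr, cell[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                # Each step costs 1, plus a penalty for straying from the sketch
                new_cost = cost + 1.0 + stray_penalty * dist_to_guide((r, c))
                if new_cost < best.get((r, c), float("inf")):
                    best[(r, c)] = new_cost
                    heapq.heappush(frontier, (new_cost, (r, c), path + ((r, c),)))
    return None  # unroutable with the current obstacles

# Example: route across a small board with one keep-out, following the sketch
board = [[0] * 8 for _ in range(5)]
board[2][3] = 1
sketch = [(0, 0), (1, 2), (2, 5), (4, 7)]
print(sketch_route(board, (0, 0), (4, 7), sketch))

The design point the toy captures: the router still finds a legal path around obstacles on its own, but the designer’s rough stroke steers which of the many legal paths it prefers.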

Summary

The old maxim that time is money still rings true for PCB projects today. Manual routing has served for many years, yet it can take too much time and usually requires an experienced user. The sketch routing feature in PADS Professional is a capable way to automate many routing tasks that used to be done manually, so your project can finish in less time, with fewer experts and higher quality.

Read the 8-page eBook online from Siemens.



Silicon Creations is Enabling the Chiplet Revolution

Silicon Creations is Enabling the Chiplet Revolution
by Mike Gianfagna on 06-04-2024 at 6:00 am

Silicon Creations is Enabling the Chiplet Revolution

The multi-die chiplet-based revolution is upon us. The ecosystem will need to develop various standards and enabling IP to make the “mix and match” concept a reality. UCIe, or Universal Chiplet Interconnect Express, is an open, multi-protocol, on-package die-to-die interconnect and protocol standard that promises to pave the way to a multi-vendor chiplet market. But delivering an implementation that balances all the requirements for power, performance, and form factor can be quite challenging. At the recent IP-SoC Silicon Valley event, Silicon Creations presented a comprehensive strategy to overcome these challenges. Read on to see how Silicon Creations is enabling the chiplet revolution.

About Silicon Creations

Silicon Creations is a self-funded, leading silicon IP provider with development in the US and Poland, and a sales presence worldwide. The company provides world-class IP for precision and general-purpose timing (PLLs), oscillators, low-power, high-performance multi-protocol and targeted SerDes, and high-speed differential I/Os. Applications include smart phones, wearables, consumer devices, processors, network devices, automotive, IoT, and medical devices.

The majority of the world’s top 50 IC companies work with Silicon Creations, and more than 1,000 chips contain the company’s IP, drawing on over 700 unique IP products. Silicon Creations touches over 150 production tape-outs each year across more than 400 customers, with 3nm designs in mass production. You can learn more about Silicon Creations at SemiWiki here.

About the Die-to-Die Interface Challenges

Blake Gray

Blake Gray developed a comprehensive presentation for IP-SoC Silicon Valley. He is the Director of Hardware Engineering at Silicon Creations and has been with the company for over 12 years. Unfortunately, he fell ill before the event, and Jeff Galloway, Principal and Co-Founder at Silicon Creations, stepped in to present for Blake. Let’s take a look at the excellent material Blake developed. It begins with a discussion of the design and performance challenges of transmit (TX) clock design.

Clock performance is critical; the clock is distributed to all TX subcomponents. Furthermore, UCIe employs a two-phase, feed-forward clocking architecture in which the jitter that matters is between clock and data edges. Optimizing the TX clock design is a critical element of effective die-to-die communications. If the die-to-die interface becomes the performance bottleneck, the advantages of a chiplet design are potentially lost, so the stakes are high.
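To put the jitter challenge in perspective, here is a quick back-of-the-envelope calculation in Python, using the 32GT/s rate from the power example later in this article. The 10% jitter allocation is purely an illustrative assumption, not a UCIe specification value.

# Unit interval at the 32 GT/s rate cited later in this article
data_rate_gbps = 32.0
ui_ps = 1e3 / data_rate_gbps            # 1 / 32e9 s = 31.25 ps
# Assumed (illustrative) total jitter allocation of 10% of the UI
jitter_budget_ps = 0.10 * ui_ps
print(f"UI = {ui_ps:.2f} ps, assumed jitter budget = {jitter_budget_ps:.2f} ps")
# A 31.25 ps UI leaves roughly 3 ps of budget under this assumption, so even
# a few hundred femtoseconds of clock jitter is significant; hence the need
# for ultra-low-jitter PLLs.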

Next were the four competing requirements that must be balanced for a successful design – low power, small form factor, ultra-low jitter, and a wide tuning range. On this last point, a clocking solution with a wide tuning range is useful because it can support all required data rates with no need to integrate data-rate-specific solutions per project. This makes the whole design effort easier and more reusable. The figure below illustrates these design challenges and some of the design solutions required.

TX Clock Design and Performance Challenges

The Silicon Creations Approach

The presentation then focused on some of the work going on at Silicon Creations to address these challenges. A dedicated die-to-die ring PLL was described that is currently in development on TSMC 7nm FF, but is easily portable to other process nodes. The PLL can be driven by any quality clock source, or even a resonator-based oscillator.

Applying this clocking solution to a standard-package, 32GT/s application with a 16-bit data width results in a maximum physical-layer power of 24mW x 16 = 384mW. More details on power consumption and other parameters are summarized in the diagram below.
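Those numbers also let us derive an energy-efficiency figure. The short Python sketch below assumes the 24mW value is per-lane physical-layer power, as the multiplication by 16 suggests, and that all of it is spent moving bits at the full rate; both are my assumptions for illustration.

# Energy per bit from the figures above (assumptions noted in the lead-in)
power_w = 0.024 * 16           # 24 mW per lane x 16 lanes = 0.384 W
throughput_bps = 32e9 * 16     # 32 GT/s per lane x 16 lanes = 512 Gb/s
energy_pj_per_bit = power_w / throughput_bps * 1e12
print(f"{energy_pj_per_bit:.2f} pJ/bit")   # ~0.75 pJ/bit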

Other die-to-die solutions also exist for TSMC 16/12/6/5/4/3/2nm processes. The presentation concluded by stating that the immense performance requirements of clocking solutions (ultra-low jitter, low power, wide tuning range, and small form factor) mandate careful design considerations and optimization tradeoffs. The Silicon Creations clocking/XO sustaining circuit IP portfolio is well-positioned to meet the demands of designs requiring optimal die-to-die communications.

To Learn More

You can see the full line of high-performance IP available from Silicon Creations here. If you would like to reach out to the company to learn more, you can do that here. And that’s how Silicon Creations is enabling the chiplet revolution.


Unlocking the Future: Join Us at RISC-V Con 2024 Panel Discussion!

Unlocking the Future: Join Us at RISC-V Con 2024 Panel Discussion!
by Daniel Nenni on 06-03-2024 at 10:00 am

[Image: software panel]

Are you ready to dive into the heart of cutting-edge computing? RISC-V Con 2024 is just around the corner, and we’re thrilled to invite you to a riveting panel discussion that promises to reshape your understanding of advanced computing. On June 11th, from 4:00 to 5:00 PM, at the prestigious DoubleTree Hotel in San Jose, California, join us for an insightful exploration into “How RISC-V is Revolutionizing Advanced Computing from AI to Autos!”

Moderated by the eminent Mark Himelstein from Heavenstone, Inc., our panel brings together luminaries in the field, each offering unique perspectives and invaluable insights. Dr. Charlie Su of Andes Technology, Dr. Lars Bergstrom from Google, Barna Ibrahim of RISC-V Software Ecosystem (RISE), and Dr. Sandro Pinto of OSYX Technologies will grace the stage, guiding us through the intricate landscape of RISC-V’s transformative power.

At the heart of our discussion lie two pivotal questions that demand attention:

What are people doing today or in the next 6 – 12 months with RISC-V?

RISC-V isn’t just a theoretical concept; it’s a driving force behind real-world innovation. Our esteemed panelists will delve into the current landscape, shedding light on the groundbreaking projects and initiatives underway. From artificial intelligence to automotive technology, discover how RISC-V is catalyzing progress across diverse industries, paving the way for a future defined by unprecedented efficiency and performance.

What are the key elements needed in the ecosystem in the next 6 – 12 months for RISC-V to make more progress? (Expect some security and hypervisor discussion.)

Progress knows no bounds, but it requires a robust ecosystem to thrive. Join us as we explore the essential components that will propel RISC-V forward in the coming months. From enhancing security measures to advancing hypervisor technology, our panelists will dissect the critical elements necessary to nurture RISC-V’s evolution. Be prepared for a thought-provoking discussion that will shape the trajectory of RISC-V development and adoption worldwide.

This panel discussion isn’t just an opportunity to witness industry leaders in action—it’s a chance to be part of the conversation that’s shaping the future of computing. Whether you’re a seasoned professional, an aspiring innovator, or simply curious about the latest advancements in technology, this event promises to inspire and enlighten.

So mark your calendars and secure your spot at RISC-V Con 2024! Join us on June 11th at the DoubleTree Hotel in San Jose, California, from 4:00 to 5:00 PM, and embark on a journey into the forefront of advanced computing. Together, let’s unlock the limitless potential of RISC-V and forge a path towards a brighter tomorrow.

REGISTER HERE

Also Read:

Andes Technology: Pioneering the Future of RISC-V CPU IP

A Rare Offer from The SHD Group – A Complimentary Look at the RISC-V Market

LIVE WEBINAR: RISC-V Instruction Set Architecture: Enhancing Computing Power


The Case for U.S. CHIPS Act 2

The Case for U.S. CHIPS Act 2
by Admin on 06-03-2024 at 8:00 am

[Image: America CHIPS Act]

Photo by Brandon Mowinkel on Unsplash

Despite murky goals and moving targets, the recent CHIPS Act sets the stage for long-term government incentives.

Authored by Jo Levy and Kaden Chaung

On April 25, 2024, the U.S. Department of Commerce announced the fourth, and most likely final, grant under the current U.S. CHIPS Act for leading-edge semiconductor manufacturing. With a Preliminary Memorandum of Terms (or PMT) valued at $6.14B, Micron Technology joined the ranks of Intel Corporation, TSMC, and Samsung, each of which is slated to receive between $6.4B and $8.5B in grants to build semiconductor manufacturing capacity in the United States. Together, these four allotments total $27.64B, just shy of the $28B that Secretary of Commerce Gina Raimondo announced would be allocated to leading-edge manufacturing under the U.S. CHIPS Act. The Secretary of Commerce has stated ambitions to increase America’s global share in leading-edge logic manufacturing to 20% by the end of the decade, starting from the nation’s current position at 0%. But will the $27.64B worth of subsidies be enough to achieve this lofty goal?

Figure #1, Data taken from NIST and the White House

To track achievement toward the 20% goal, one needs both a numerator and a denominator. The denominator consists of global leading-edge logic manufacturing, while the numerator is limited to leading-edge logic manufacturing in the United States. Over the next decade, both will be subject to large-scale changes, making neither figure easy to predict. For nearly half a century, the pace of Moore’s Law has kept the term “leading-edge” in constant flux, making it difficult to determine which process technology will be considered leading-edge in five years’ time. Meanwhile, American chip manufacturing must keep pace with foreign development, as numerous countries are also racing to onshore leading-edge manufacturing. These two moving targets make it difficult to measure progress toward Secretary Raimondo’s stated goal and warrant a closer examination of the potential challenges.

Challenge #1: Defining Leading Edge and Success Metrics

The dynamic nature of Moore’s Law, which predicts the number of transistors on a chip will roughly double every two years (while holding costs constant), leads to a steady stream of innovation and rapid development of new process technologies. Consider TSMC’s progression over the past decade. In 2014, it was the first company to produce 20 nm technology in high volume. Now, in 2024, the company is mass producing logic at the 3 nm node. Intel, by comparison, is currently mass producing Intel 4. (In 2021, Intel renamed its 7nm process to Intel 4.)

Today, the definition of advanced technologies and leading edge remains murky at best. As recently as 2023, TSMC’s Annual Report identified anything smaller than 16 nm as leading edge. A recent TrendForce report used 16 nm as the dividing line between “advanced” and mature nodes. TrendForce predicts that U.S. manufacturing of advanced nodes will grow from 12.2% to 17% between 2023 and 2027, while Secretary Raimondo declared “leading-edge” manufacturing will grow from 0% to 20% by 2030. This lack of clarity dates back to the 2020 Semiconductor Industry Association (“SIA”) study which served as the impetus for the U.S. CHIPS Act. The 2020 SIA report concluded that U.S. chip production dropped from 37% to 10% between 1999 and 2019 based upon total U.S. chip manufacturing data. To stoke U.S. fears of lost manufacturing leadership, it pointed to the fast-paced growth of new chip factories in China, though none of these would be considered leading-edge under any definition.

A new 2024 report by the SIA and the Boston Consulting Group on semiconductor supply chain resilience further muddies the waters by shifting the metrics surrounding the advanced technologies discourse. It defines semiconductors at 10 nm and below as “advanced logic” and forecasts that the United States’ position in advanced logic will increase from 0% in 2022 to 28% by 2032. It also predicts that the definition of “leading-edge” will encompass technologies newer than 3 nm by 2030 but fails to provide any projection of the United States’ position within the sub-3 nm space. This raises the question: Should Raimondo’s ambition to reach 20% in leading edge be evaluated under the scope of what is now considered “advanced logic”, or should the standard be held to a more rigorous definition of “leading edge”? As seen in the report, U.S. manufacturing will reach the 20% goal by a comfortable margin if “advanced logic” is used as the basis for evaluation. Yet the 20% goal may be harder to achieve if one holds to the stricter notion of “leading edge”.

Figure #2, Data taken from NIST

The most current Notice of Funding Opportunity under the U.S. CHIPS Act defines leading-edge logic as 5 nm and below. Many of the CHIPS Act logic incentives are for projects at the 4 nm and 3 nm level, which presumably meet today’s definition of leading-edge. Intel has announced plans to build one 20A and one 18A factory in Arizona and two leading-edge factories in Ohio, to bring the latest High-NA EUV lithography to its Oregon research and development factory, and to expand its advanced packaging in New Mexico. TSMC announced it will use its incentives for 4 nm FinFET, 3 nm, and 2 nm but has not identified how much capacity will be allocated to each. Similarly, Samsung revealed plans to build 4 nm and 2 nm capacity, as well as advanced packaging, with its CHIPS Act funding. Like TSMC, Samsung has not shared the volume production it expects at each node. However, by the time TSMC’s and Samsung’s 4 nm projects are complete, it is unlikely they will be considered leading-edge by industry standards. TSMC is already producing 3 nm chips in volume and is expected to reach high-volume manufacturing of 2 nm technologies in the next year. Both Intel and TSMC are predicting high-volume manufacturing of sub-2 nm by the end of the decade. In addition, the Taiwanese government has ambitions to break through to 1 nm by the end of the decade, which may lead to an even narrower criterion for “leading-edge.”

In this way, the United States’ success will be contingent on the pace of developments within the industry. So far, the CHIPS Act allocations for leading-edge manufacturing are slated to contribute to two fabrication facilities producing at 4 nm and six at 2 nm or lower. If “leading-edge” were to narrow to 3 nm technologies by 2030 as predicted, roughly a fourth of the fabrication facilities built for leading-edge manufacturing will not count toward the United States’ overall leading-edge capacity.

If the notion of “leading-edge” shrinks further, even fewer fabrication facilities will count toward the United States’ leading-edge capacity. For instance, if the Taiwanese government succeeds in its 1 nm breakthrough, it would cast further doubt on the validity of even a 2 nm definition of “leading-edge”. Under such circumstances, Taiwan will not merely be chasing a moving target; it will be shifting the goalpost for the rest of the industry, greatly complicating American efforts to reach the 20% mark. Thus, it becomes essential for American leadership to keep track of foreign developments within the manufacturing space while developing its own.

Challenge #2: Growth in the United States Must Outpace Foreign Development

Any measure of the success of the CHIPS Act must consider not only the output of leading-edge fabrication facilities built in the United States, but also the growth of new fabs outside the United States. Specifically, to boost its global share of leading-edge capacity to 20%, the U.S. must not only match the pace of its foreign competition, it must outpace it.

This means the U.S. must contend with Asia, where government subsidies and accommodating regulatory environments have boosted fabrication innovation for decades. Though Asian manufacturing companies will contribute to the increase of American chipmaking capabilities, it appears most chipmaking capacity will remain in Asia through at least 2030. For instance, while TSMC’s first two fabrication facilities in Arizona can produce a combined output of 50,000 wafers a month, TSMC currently operates four fabrication facilities in Taiwan that can each produce up to 100,000 wafers a month. Moreover, Taiwanese companies have announced plans to set up seven additional fabrication facilities on the island, two of which are TSMC’s 2 nm facilities. In South Korea, the president has unveiled plans to build 16 new fabrication facilities through 2047 with a total investment of $471 billion, establishing a fabrication mega-cluster in the process. The mega-cluster will include contributions from Samsung, suggesting expansion of Korea’s leading-edge capacity. Even Japan, which has not been home to logic fabrication in recent years, has taken steps to establish its leading-edge capabilities. The Japanese government is currently working with the startup Rapidus to initiate production of 2 nm chips, with plans for 2 nm and 1 nm fabrication facilities under way. While the U.S. has taken a decisive step to initiate chipmaking, governments in Asia are also organizing efforts to establish or maintain their lead.
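A back-of-the-envelope calculation using only the TSMC wafer-start figures cited above shows why the capacity balance is so hard to shift. It deliberately ignores every other fab and company, so treat it as an illustration, not a forecast.

# US vs. Taiwan share of just the TSMC capacity cited in this paragraph
arizona_wpm = 50_000           # combined output of the two Arizona fabs
taiwan_wpm = 4 * 100_000       # four Taiwan fabs at up to 100k wafers/month
us_share = arizona_wpm / (arizona_wpm + taiwan_wpm)
print(f"US share of this capacity: {us_share:.1%}")   # ~11.1%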

Asia is not the only region growing its capacity for leading-edge chip manufacturing. The growth of semiconductor manufacturing within the E.U. may further complicate American efforts to increase its leading-edge share to 20%. The European Union recently approved the E.U. Chips Act, a $47B package that aims to bring the E.U.’s global semiconductor share to 20% by 2030. Already, both Intel and TSMC have committed to expanding semiconductor manufacturing in Europe. In Magdeburg, Germany, Intel seeks to build a fabrication facility that uses post-18A process technologies, producing semiconductors on the order of 1.5 nm. TSMC, on the other hand, plans to build a fabrication facility in Dresden producing 12/16 nm technologies. Though the Dresden facility may not be considered leading-edge, TSMC’s involvement could lead to more leading-edge investment within European borders in the near future.

In addition to monetary funding under the CHIPS Act, the U.S. also faces non-monetary obstacles that may hamper its success. TSMC’s construction difficulties in Arizona have been well documented and contrasted with the company’s brief and successful construction process in Kumamoto, Japan. Like TSMC, Intel’s U.S. construction in Ohio has faced setbacks and delays. According to the Center for Security and Emerging Technology, many countries in Asia provide infrastructure support, easing regulations to accelerate logistical and utilities-related processes. For instance, during Micron’s expansion within Taiwanese borders, the Taiwanese investment authority assisted the company with land acquisition and lessened the administrative burden of its construction. The longer periods required to obtain regulatory approvals and complete construction in the U.S. give other nations significant lead time to outpace U.S. growth.

Furthermore, the monetary benefits of CHIPS Act awards will take time to materialize. Despite headlines claiming CHIPS Act grants have been awarded, no actual awards have been issued. Instead, Intel, TSMC, Samsung, and Micron have received Preliminary Memoranda of Terms, which are not binding obligations. They are the beginning of a lengthy due diligence process. Each recipient must negotiate a long-form term sheet and, depending on the amount of funding per project, may need to obtain congressional approval. As part of due diligence, funding recipients may also be required to complete environmental assessments and obtain government permits. Government permits for semiconductor factories can take 12-18 months to obtain. Environmental assessments can take longer. For example, the average completion and review period for an environmental impact statement under the National Environmental Policy Act is 4.5 years. Despite the recent announcements of preliminary terms, the path to actual term sheets and funding will take time to complete.

Even if the due diligence and term sheets are expeditiously completed, the recipients still face years of construction. The Department of Commerce estimates a leading-edge fab takes 3-5 years to construct after the approval and design phase is complete. Moreover, two of the four chip manufacturers have already announced delays in construction projects covered by CHIPS Act incentives. Accounting for 2-3 years to obtain permits and complete due diligence, 3-5 years for new construction, and an additional year of delay, it may be 6-9 years before any new fabs begin production. To achieve the CHIPS Act goal of 20% by 2030, the United States must do more than provide funding; it must ensure the due diligence and permitting processes are streamlined to remain competitive with Europe and Asia.
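The 6-9 year estimate in that paragraph, reproduced explicitly; the 2024 start date is my assumption, since the PMTs were announced this year.

# Stacking the timeline components quoted above
permits = (2, 3)        # years for due diligence and permits
construction = (3, 5)   # years to build a leading-edge fab (Commerce estimate)
delay = (1, 1)          # the additional year of announced delays
low, high = (sum(t) for t in zip(permits, construction, delay))
print(f"Time to production: {low}-{high} years (i.e., ~{2024 + low}-{2024 + high})")
# 6-9 years from a 2024 start puts first production around 2030-2033,
# which is the crux of the tension with the stated 2030 goal.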

The Future of Leading-Edge in the United States

Between the constant changes in the meaning of “leading-edge” under Moore’s Law and the growing presence of foreign competition within the semiconductor industry, the recent grant announcements of nearly $28B for leading-edge manufacturing are only the start of the journey. The real test for the U.S. CHIPS Act will occur over the next few years, when the CHIPS Office must do more than monitor semiconductor progress within the U.S. It must also facilitate timely completion of the CHIPS Act projects and measure their competitiveness as compared to overseas expansions. The Department of Commerce must continually evaluate whether its goals still align with developments in the global semiconductor industry.

As such, whether the United States proves successful largely depends on why the 20% target matters. Is the goal to establish a steady supply of advanced logic manufacturing to protect against foreign supply-side shocks, or is it to take and maintain technological leadership against the advancements of East Asia? In the former case, abiding by the notion of “advanced logic” will suffice; the achievement will be smaller than what was initially promised under “leading-edge”, but it remains a measured and sensible goal for the U.S. In the latter case, achieving the 20% benchmark would place the United States in a much stronger position within the global supply chain. Doing so, however, will undoubtedly require much greater funding for leading-edge manufacturing than the $28B allocated so far.

National governments are increasingly investing in stronger productive capacity for semiconductors, and many will continue to do so in the coming decades. If the United States aims to keep pace with the rest of the industry, it must maintain a steady stream of support for leading-edge technologies. It will be an expensive initiative, but leading figures such as Secretary Raimondo are already suggesting a second CHIPS Act to expand upon the initial effort; in the global race, another subsidy package would give the nation a much-needed push toward the 20% finish line. Hence, despite all the murkiness surrounding the United States’ fate within the semiconductor industry, one fact remains certain: the completion of the CHIPS Act should be seen not as a conclusion, but as the prologue to America’s chipmaking future.

Also Read:

CHIPS Act and U.S. Fabs

Micron Mandarin Memory Machinations- CHIPS Act semiconductor equipment hypocrisy

The CHIPS and Science Act, Cybersecurity, and Semiconductor Manufacturing

Why China hates CHIPS