
Why China hates CHIPS

by Craig Addison on 09-05-2022 at 6:00 am


The CHIPS and Science Act has its fair share of critics, with detractors calling it corporate welfare for “losers” like Intel, or lacking guardrails to prevent companies making legacy chips in China.

One of the most vocal opponents of the act has been China’s communist-ruled government.

CHIPS – an acronym for Creating Helpful Incentives to Produce Semiconductors – offers $52.7 billion in subsidies for chip investments on American soil.

China’s foreign ministry spokesman said the act was “economic coercion” by the US. State-owned newspaper Global Times slammed CHIPS as an attempt to “artificially isolate China from the industrial chain”.

More recently, state-backed industry groups have joined the chorus. The head of China’s Semiconductor Industry Association (CSIA) said parts of the act violated “fair market principles”, and called on the US to “correct its mistakes”.

Language like that is often used in Communist Party propaganda, so the CSIA statement was more likely aimed at pleasing Beijing than swaying foreign sentiment on the issue.

An official at a different trade group said CHIPS would disrupt “normal” cooperation and investment between China and the US, while another labeled it a form of “semiconductor hegemony”.

What’s going on here?

Besides the hypocrisy – China’s own National IC Industry Investment Fund, aka the Big Fund, raised $50 billion to invest locally – Beijing is worried that the days of foreign chip makers investing billions in China may be over.

That would weaken the country’s role in the global supply chain, and limit the knowledge transfer that occurs when Chinese engineers trained in a foreign-owned venture leave to start their own company.

Another reason for the angst on the Chinese side is that China’s so-called “self-sufficiency” efforts in semiconductors are not paying off, at least not fast enough.

Sorry, SMIC’s 7-nm chip produced without an EUV scanner doesn’t count. Regardless of the headlines and armchair experts proclaiming that it leveled the playing field between China and the West, SemiWiki readers know that producing an experimental chip is not the same as making one in high volume at high yields.

More worrying for Beijing, though, is the fact that several senior Chinese officials in charge of disbursing the Big Fund money are now under investigation for graft.

While there’s bound to be differences of opinion over how to best spend the CHIPS money, Beijing won’t feel any better knowing that none of it will line the pockets of American chip executives, several of whom were invited to the White House to witness President Joe Biden sign the bill into law.

Also read:

The Semiconductor Cycle Snowballs Down the Food Chain – Gravitational Cognizance

2022 Semiconductor Supercycle and 2023 Crash Scenario

An EDA AI Master Class by Synopsys CEO Aart de Geus


MAB: The Future of Radio is Here

by Roger C. Lanctot on 09-04-2022 at 6:00 am


Good storytelling is what helps drive change, engage consumers, and define progress. Steve Newberry, CEO of Quu, is a master of the craft.

He told two stories, in particular, at the Michigan Association of Broadcasters event last week in Traverse City, Mich. The first was simply to note for the broadcasters in attendance that, whether they knew it or not, their world was changed forever when the National Highway Traffic Safety Administration mandated backup cameras in cars.

With the stroke of a regulatory pen, NHTSA decreed that within a few years – in other words, now – all cars would be outfitted with eight-inch (or larger) LCD displays to enable drivers to see where they were going when driving backwards (in the interest of saving approximately 150 lives each year). Simultaneously, auto industry engineers suddenly gained a huge canvas upon which to render the future of content consumption in cars – bye-bye narrow displays with five preset buttons and a couple of knobs. Hello, future radio!

The second story Steve told was of a meeting between the National Association of Broadcasters and General Motors, where a senior GM executive said:

“I love meeting with you guys – always do – but I must be honest. How can we expect radio to deliver on this technology in the future when you guys can’t get your act together on the basic RDS and HD information? Radio is a mess.”

The participants in that meeting needn’t have looked any further than the dashboards of the cars parked outside the building. Every one of them was guaranteed to have a different presentation of the metadata for the same radio station with the same content: information displayed in all caps or not, words cut off or misspelled, or, most likely, information missing entirely.

SOURCE: Slide supplied by Quu showing different presentations of the same content from the same broadcaster in different cars.

The presentation of the information in the cars – wanting though it may be – is no fault of the auto makers. Auto makers understand that the radio is the one piece of customer interaction that they can actually control. Designers and engineers are doing their best to capture the information delivered via the over-the-air broadcast signal and render it for the convenience of the consumer.

Sadly, broadcasters do not universally have their act together. This creates confusion for the consumer and a disappointing experience in the dashboard – in spite of the best efforts of the car makers.

In an age when Google, Apple, Amazon, and other tech companies are seeking to commandeer automotive user experiences, there is no room for failure of this sort. Broadcasters need to get their collective act together simply to get in the game and participate in the snazzy new interfaces being delivered by car makers such as Audi and Mercedes-Benz.

On stage at the MAB event was Juan Galdamez, senior director of broadcast strategy and business development at Xperi, which is laboring diligently to deliver the back-end systems and consumer-facing content capable of supporting those snazzy interfaces.

Galdamez emphasized, though, that Xperi is merely a toolkit for the automotive industry. It is worthless without proper inputs from broadcasters in the form of carefully curated metadata.

Fred Jacobs, moderator of the panel and owner of Jacobs Media, pointed out – ominously – that for the first time consumers surveyed as part of Jacobs Media’s annual TechSurvey identified Bluetooth, not radio, as “the most important media feature” among new car buyers. In other words, consumers want to plug their phones into their infotainment systems and project their mobile apps and content.

This is unquestionably bad news for broadcasters and auto makers. Screen projection solutions such as Apple’s CarPlay and Google’s Android Auto favor Internet content sources over local media. Once these systems take over the screen, it can be nearly impossible for users to find their way back to the radio.

Broadcasters need to clean up their acts. The tools and the screen real estate are in place to deliver the future of radio. For some broadcasters, that future has already arrived and they are thriving. Broadcasters need to embrace digital technology to make their stations easier to discover and enjoy. The auto makers have already done their part.



Podcast EP105: Cadence STA Strategy and Capabilities, Today and Tomorrow with Brandon Bautz

by Daniel Nenni on 09-02-2022 at 10:00 am

Dan is joined by Brandon Bautz, senior group director of product management responsible for silicon signoff and verification product lines in the Cadence Digital & Signoff Group. Brandon has more than 20 years of experience in chip design and the EDA industry and has been at Cadence for over 10 years.

Dan explores the current and future design challenges being addressed by STA at Cadence. Strategies to deliver cost-effective performance in the face of exploding design complexity are discussed. The role of STA to address variability, aging, IR drop/max frequency issues and 3D implementation are also discussed among other topics.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


The Semiconductor Cycle Snowballs Down the Food Chain – Gravitational Cognizance

by Robert Maire on 09-02-2022 at 6:00 am


-Where are we in the chip cycle? Why is it different this time?
-No one rings a bell to indicate the top or bottom of a cycle
-Could the lack of self-awareness lead to a worse downturn?
-Who will weather the cycle better & come out on top

Gravitational Cognizance
“A cartoon character will not fall until they realize they should be falling”

We wasted too much of our ill-spent youth watching cartoons. One of our favorites was Wile E. Coyote. The physics was uniquely repeatable: the character in question would find himself with no visible means of support but would not succumb to gravity until he recognized his position or another character pointed it out to him.

Typically, Wile E. Coyote would run full speed off a cliff but not fall until he noticed his predicament.

This reminds us very much of where the semiconductor industry is today. The industry has been running so fast, and so focused on speed, that it hasn’t yet realized that the basis supporting it has gone away: demand has dropped and will see further declines.

We have been talking about the industry being in a down cycle for months now. Memory prices have dropped (usually one of the first signs), inventories have grown, and lead times are down. More importantly, demand for semiconductor-rich electronic devices is dropping.

However, some semiconductor and semiconductor equipment companies are still reporting great earnings, record breaking earnings in some cases. This makes it very difficult to talk about a down cycle when you are still making big bucks.

The speed at which the industry has been running has driven so much momentum into the industry that gravitational cognizance has been delayed.

Still living on backlog and fear

In many cases the industry is living on backlog or non-cancelable orders placed near the height of the cycle, despite the fact that product is in inventory or readily available. In other cases, customers are so fearful (like the auto industry) that they continue to order even though they already have enough, simply because they don’t want a repeat of the shortages.

Semiconductor equipment is the worst in this regard, as no one dares give up their place in the queue for litho tools lest shortages start up again.

Realization may hit home when the channel is fully stuffed

In the past we have seen instances where there are crates of semiconductor equipment piling up on the receiving dock because it can’t be installed quickly enough or there’s no room. In one past case there was a parking lot full of crates.

Wafers sit in the channel at OSATs waiting to be packaged and tested. Manufacturers, like Micron, start to hold product off the market to support pricing.

Momentum could cause a huge overshoot of capacity

Given the absolutely huge momentum the industry has had for several years, it is not unreasonable to think that we could wind up with one of the largest cases of excess capacity the industry has seen in many cycles.

A lot has been said about the industry being more conservative in its spending and more cautious than in the bad old days of cycles past, but the rate of equipment orders over the last year or more has been anything but cautious.

Where is the snowball in the downhill food chain?

We are still in the early stages of a down cycle, as not everyone agrees with, admits, or recognizes reality. We are concerned about processor demand from hyperscalers and data centers, and memory demand in consumer devices, but there is no full-fledged capitulation. We have seen virtually no impact on the equipment makers’ financials other than supply chain issues, primarily related to COVID, which have been relatively minor. So we are still at the snowball stage: the issue has not yet grown to snowman size and engulfed the entire industry.

Many so-called analysts are still very bullish, maintain a lot of buy ratings, or have become even more positive as valuations have slipped. From a stock perspective we have not yet reached capitulation, and are still far from it.

Maybe the bell ringer indicating the bottom of the cycle is the last bullish analyst capitulating (ignoring those who never change their ratings….).

Who will weather the down cycle best?

We think TSMC remains one of the more defensive names among foundries, and chip makers in general. They are far and away at the top of the heap and dominate pricing for every other foundry in the market. Other foundries live under the price umbrella of TSMC.

When TSMC is out of capacity or raises prices enough, chip customers are forced to go elsewhere to get their chips made even though TSMC is always their first choice. The bottom line is that TSMC’s overflow business goes to competitors. When TSMC has excess capacity, a lot of that business will come back to them, leaving those down the foundry food chain with much lower utilization and profits.

In semiconductor equipment, an ASML tool is the last piece of equipment you would ever cancel, given the crazy long lead times. Most process tools, such as deposition and etch tools, are more of a “turns” business where you can simply reorder what you have canceled without much delay.

China business is an added “unknown”

It’s unclear what the status of tool and chip shipments to China will be given the worsening relations. Since China has been the biggest customer of most equipment companies, this is a significant variable that looks to be getting worse in the near term. Though not hugely impactful today, it could make a big difference when equipment companies are scrambling for orders or need to find new homes for cancellations or delayed product.

Are there “time bombs” in wooden crates in the field?

Lam had noted that it had several billion dollars’ worth of unfinished tools that were shipped on an incomplete basis to customers while waiting on parts. This situation is quite different from ASML’s, which shipped completed yet untested tools to get them to customers faster.

What happens when all those tools are completed and installed?

We recall a situation where the Chinese LED industry had a lot of MOCVD tools sitting in crates that were going unused.

The CHIPS act, throwing gasoline on a glut bonfire?

As we have mentioned in previous notes the timing of the CHIPS Act is nothing short of poetic. Micron will likely cut capex in half and Intel has already announced a likely slowing of Ohio and other projects.

Could we get into a “use it or lose it” situation where chip makers feel forced to spend CHIPS money that, under prudent financial analysis, they otherwise wouldn’t? That would basically throw free or cheap money at the industry even though it’s not needed, because we already have excess capacity (although maybe not in the right country).

We may need a redirect of the CHIPS Act, given that circumstances have changed substantially since the project was conceived.

The stocks

Overall we see a lot more downside beta than upside beta in the semi industry right now. It’s hard to come up with any variables that could break significantly to the upside, and most of the variables seem to be degrees of downside potential.

We see no good reason to get involved with value traps. The last thing we want to hear is some analyst saying that a stock is trading at a 52-week low with a huge dividend. At this point we are certainly not concerned about a dividend play when our principal is at significant risk; there is no offsetting benefit.

Certainly macro uncertainty is a big portion of the problem and it doesn’t look like macro issues are getting better soon. Semiconductors remain the tip of the economic spear and will see outsize impact from any macro gyrations.

The other issue is that we don’t have any good sense of how long or how deep the downturn will be. Could overall demand for chips keep the slowdown short-lived and minor? Which way will all the variables fall?

Long term demand seems absolutely great but things could get even uglier short term as we have yet to see a bottom in our view.

About Semiconductor Advisors LLC
Semiconductor Advisors is an RIA (a Registered Investment Advisor), specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies. We have been covering the space longer and been involved with more transactions than any other financial professional in the space. We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors. We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

Also read:

KLAC same triple threat headwinds Supply, Economy & China

LRCX – Great QTR and guide but gathering China storm

Intel & Chips Act Passage Juxtaposition


Resilient Supply Chains a Must for Electronic Systems

by Dave Bursky on 09-01-2022 at 10:00 am


The last few years have seen multiple disruptions of supply chains in many industries, and semiconductor technology is one of the key technologies that many fingers have pointed to. As products in all industries become more electronics-based, semiconductors play a key role: no end system could function today without these chips. Additionally, few, if any, complex chips are available from multiple suppliers, creating strong dependencies on a single source of supply. Any disruption in the chip manufacturing process, from fabrication lines to packaging to testing to shipping, can impact product delivery.

Externally influenced supply chain disruptions resulting from geopolitical issues, a pandemic, labor difficulties, and unbridled fluctuations in demand make it difficult to bring products to market on time and within budget. However, as Stephen Chavez of Siemens Digital Industries Software explains, businesses can manage risk and plan for a more resilient future by applying a three-phased solution to enable supply-chain resilience for electronic systems design (see the figure). The key focus of this solution is digital transformation, implementing a “shift-left” or “outside-in” approach to a workflow that is inherently siloed, disconnected, and inefficient.

The phases, Chavez explains, are built on three progressive principles: Knowledge, Intelligence, and Optimization. The ‘Knowledge’ phase is about arming engineers and teams with comprehensive real-time component sourcing data so they can make more informed part decisions when the cost of change is lowest. ‘Intelligence’ further applies the insights from component sourcing knowledge and couples it with part intelligence to empower more informed actions and workflows across the enterprise, solving for both cost and risk. This allows the enterprise to adapt quickly to supply disruptions. Lastly, ‘Optimization’ delivers a full closed-loop component management digital twin, with built-in traceability, comprehension of manufacturing experiences, and AI-driven analytics, so that optimal choices are made at every point of technical and business decision.

He projects that the three-phase approach will empower organizations and expand supply chain resilience to the point of design, allowing companies to optimize not only their systems design process, but also every link to the stakeholders in the global electronics value chain. By uniting the electronics value chain with the engineer’s desktop, system design companies will achieve higher levels of digital transformation and the greater profitability that results, as they realize, with confidence, the requirements of tomorrow’s designs today.

The simultaneous array of dynamic forces impacting today’s supply landscape has created a perfect storm in the electronics industry. Sadly, as Chavez observes, there is no immediate end in sight, nor will there be any temporary relief. If anything, there is real potential for matters to get worse before they get better. He sees four main themes contributing to the dynamic market forces that continue to stress the global electronics value chain to its limits.

  1. Electrification across verticals (especially modern vehicles). Today we see increased product complexity as things get smaller and faster, with a big push for many things to go electrical and wireless.
  2. Global supply chain complexity. With today’s globalization, we see distributed teams collaborating and functioning as one complex centralized machine. Numerous suppliers add to this complexity along with ever-expanding geographies.
  3. Accelerating speed of innovation. As the industry has evolved, so have our engineering tools and knowledge, which has directly impacted the speed at which we innovate. Today we see release cycles defined in months versus years, accomplished with fewer resources and tighter budgets. Adding to this acceleration are heightened competition and customer demand. Product seasonality fuels the speed of innovation. It’s amazing how fast innovation happens compared to years past.
  4. Increased supply chain market volatility. Globalized supply chains are extremely fragile. This is due to many years of executing against a “just-in-time” model that isn’t sustainable with the evolution of today’s industries. Only now is the depth of this fragility exposed due to the pandemic and global geopolitics. Additionally, the lack of data visibility is persistent, making it more difficult to deliver products.

Chavez also sees a very complex, spider-web-like network of informal processes. This morass of processes entangles enterprises in even stickier challenges as they attempt to span functional silos, disparate systems, and fragmented cross-functional decision-making processes. The complexities of internal systems and tools – such as EDA, PLM, source/procurement, SCM, ERP, and MES – add to the challenge within the enterprise for tighter integrations and communications. From diverse engineering, sourcing, and supply teams to how issues are addressed by emails, manual spreadsheets, meetings, and static reports, there is both a desire and a need to optimize.

A shift left, or better yet an “outside-in” approach, where processes and systems are primed by global business and supply chain conditions on the ground, is the key to cost reduction, risk mitigation, and faster time-to-market through data visibility, system linkages, and cross-team collaboration. The key point here is to make these external sources of data internal to the enterprise so that they can be addressed further upstream in the process – because when they are only addressed downstream, their negative impacts are more significant and costly.

In practice, supply-chain resilience is the capacity of a supply chain to persist, adapt, or transform in the face of change. Many design organizations are highly vulnerable to supply-chain volatility since the engineering and sourcing handoff is highly linear and not built with resilience in mind. Chavez sees the electronics value chain as ripe for a digital transformation, given the persistent chasms that stifle innovation and product development execution.

www.siemens.com/software

Also read:

Five Key Workflows For 3D IC Packaging Success

IC Layout Symmetry Challenges

Verifying 10+ Billion-Gate Designs Requires Distinct, Scalable Hardware Emulation Architecture

UVM Polymorphism is Your Friend


Coherency in Heterogeneous Designs

by Bernard Murphy on 09-01-2022 at 6:00 am


Ever wonder why coherent networks are needed beyond server design? The value of cache coherence in a multi-core or many-core server is now well understood. Software developers want to write multi-threaded programs for such systems and expect well-defined behavior when accessing common memory locations. They reasonably expect the same programming model to extend to heterogeneous SoCs, not just for the CPU cluster in the SoC but more generally. Say in a surveillance application based on an intelligent image processing pipeline feeding into inferencing and recognition to detect abnormal activity. Stages in such pipelines share data which must remain coherent. However, these components interface through AMBA CHI, ACE and AXI protocols – a mix of coherent and incoherent interfaces. The Arteris IP Ncore network is the only coherent network interface that can accomplish this objective.

Coherency in a Heterogeneous Design

An application like a surveillance camera depends on high-performance streaming all the way through the pipeline. A suspicious figure may be in-frame only for a short time, yet you still must capture and recognize a cause for concern. The frames/second inference rate must be high enough to meet that goal, enabling camera tracking to follow the detected figure.

The imaging pipeline starts with the CPU cluster processing images, tiled through multiple parallel threads for maximum performance. Further maximizing performance, memory accesses are cached to the greatest extent possible, and therefore that cache network must be coherent. The CPUs support CHI interfaces, so far, so good.

But image signal processing is a lot more complex than just reading images. There’s demosaicing, color management, dynamic range management and much more. Maybe handled in a specialized GPU or DSP function, which must deliver the same caching performance boost to not slow down the pipeline. And for which thread programming expects the same memory consistency model. Often this hardware function only supports an ACE interface. ACE is coherent but different from CHI. Now the design needs a coherent network that can support both.

Those threads feed into the AI engine to infer suspicious objects in images at, say, 30 frames/second. Aiming to detect not only such an object but also the direction of movement. AI engines commonly support an AXI interface, which is widely popular but is not coherent. However, the control front-end to that engine must still see a coherent view of processed image tiles streaming into the engine. Meeting that goal requires special support.

The Arteris IP Ncore coherent network

The Arteris IP FlexNoC non-coherent network serves the connectivity needs of much of a typical SoC, which may not need coherent memory sharing with CPUs and GPUs. The AI accelerator itself may be built on a FlexNoC network. But a connectivity solution is needed to manage the coherent domain as well. For this, Arteris IP has built its Ncore coherent NoC generator.

Think of Ncore as a NoC with all the regular advantages of such a network but with a couple of extra features. First, the network provides directory-based coherency management. All memory accesses within the coherent domain, such as CPU and GPU clusters, adhere to the consistency model. Second, Ncore supports CHI and ACE interfaces. It also supports ACE-Lite interfaces with embedded cache, which Arteris IP calls proxy caches. A proxy cache can connect to an AXI bus in the non-coherent domain, complementing the AXI data on the coherent side with the information required to meet the ACE-Lite specification. A proxy cache ensures that when the non-coherent domain reads from the cache or writes to the cache, those transactions will be managed coherently.
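The idea of directory-based coherency with a proxy cache can be illustrated with a toy model. This is a hedged sketch in Python, not Arteris IP’s implementation; all class and agent names are invented for the example. It shows the two behaviors described above: the directory tracks which agents share each line, and a proxy cache lets a non-coherent master’s reads and writes participate in that tracking.

```python
# Toy model of directory-based coherency. Illustrative only: real hardware
# tracks cache-line states (e.g. MESI) and sends snoops over the NoC.

class Directory:
    """Tracks, per cache-line address, which agents hold a copy."""
    def __init__(self):
        self.sharers = {}   # address -> set of agent names holding the line
        self.memory = {}    # address -> current value

    def read(self, agent, addr):
        # A read adds the agent to the sharer set; existing copies stay valid.
        self.sharers.setdefault(addr, set()).add(agent)
        return self.memory.get(addr, 0)

    def write(self, agent, addr, value):
        # A write first invalidates every other sharer (in hardware the
        # directory sends invalidation snoops), then records the writer
        # as the sole holder, so no agent can observe stale data.
        self.sharers[addr] = {agent}
        self.memory[addr] = value

class ProxyCache:
    """Fronts a non-coherent (AXI-like) master so its accesses are routed
    through the directory and thus look coherent to the rest of the system."""
    def __init__(self, name, directory):
        self.name, self.directory = name, directory

    def read(self, addr):
        return self.directory.read(self.name, addr)

    def write(self, addr, value):
        self.directory.write(self.name, addr, value)

d = Directory()
cpu = ProxyCache("cpu_cluster", d)
ai = ProxyCache("ai_engine", d)   # non-coherent master behind a proxy cache
cpu.write(0x1000, 42)
assert ai.read(0x1000) == 42      # the AI engine observes the CPU's write
```

The key design point is that the non-coherent master never talks to memory directly; everything funnels through the proxy, which is what allows an AXI-only block to sit inside the coherent domain.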

Bottom line, using Ncore provides the only commercial solution for network coherency between CHI, ACE and AXI networks. The kind of networks you will commonly find in most SoCs. If you’d like to learn more, click HERE.


Podcast EP104: Enabling Future Innovation with GBT Technologies

by Daniel Nenni on 08-31-2022 at 10:00 am

Dan is joined by Dr. Danny Rittman, CTO of GBT Technologies. Danny has an extensive background in the R&D space and has worked for companies such as Intel, IBM, and Qualcomm. He has spent most of his career researching and inventing processor chips, as well as paving the way for futuristic AI software programs that can be successfully used in a vast number of industries.

Dan explores the broad portfolio of GBT Technologies with Danny – its IP, architectures, design tools and application areas. The impact of this technology portfolio is discussed in some detail.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.

 


Five Key Workflows For 3D IC Packaging Success

by Kalar Rajendiran on 08-31-2022 at 6:00 am


An earlier blog started with the topic of delivering 3D IC innovations faster. The blog covered the following foundational enablers for successful heterogeneous 3D IC implementation.

  • System Technology Co-Optimization (STCO) approach
  • Transition from design-based to systems-based optimization
  • Expanding the supply chain and tool ecosystem
  • Balancing design resources across multiple domains
  • Tighter integration of the various teams

The UCIe (Universal Chiplet Interconnect Express) standard is driving the adoption of heterogeneous chiplet integration, and with it the adoption of 3D IC implementations. When a new capability gets ready for the mainstream, its mass adoption depends on a number of things. While the foundational enablers are important, they are not sufficient for easily, quickly, effectively, and efficiently delivering a successful solution. Standardized protocols are needed to offer plug-and-play compatibility between chiplet suppliers. With new requirements and signoffs, design tools evolve to meet and resolve any new challenges that arise. What mainstream users need is a way to best use the tools to get an edge in the competitive marketplace. That boils down to key workflows, which are the topic of a recent whitepaper published by Siemens EDA. This blog will cover the salient points from that whitepaper.

Workflow Adoption

There are two approaches to a chiplet-based design. One approach uses a process of disaggregation, wherein a complex monolithic chip is decomposed into plug-and-play modules to be assembled and interconnected with a silicon interposer. The other approach uses general-purpose building-block chiplets that are assembled and interconnected with an ASIC to build the system. Whichever approach is adopted, chiplet-based designs add levels of complexity that must be understood and planned for.

The following five workflow adoption focus areas lend themselves to a managed methodology that minimizes risk and cost and accelerates time-to-market.

Early Planning and Predictive Analysis

Early planning and predictive analysis of the complete package assembly is mandatory with heterogeneous chiplet-based systems. This involves thermal and power delivery considerations, chiplet co-optimization, and chiplet interface management and design.

Chiplets often introduce new connectivity structures such as 3D stacking, TSVs, bumps, hybrid bonding, and copper pillars. These structures can cause thermally induced stresses leading to performance and reliability problems. Investigating the connectivity structures for available alternate material options can help prevent unexpected failures that would require late-stage design changes.

Predictive power delivery analysis should be performed early in the process. Even though it may be approximate, this analysis averts later-stage layout issues. A typical approach is to approximate the percentage of metal coverage per routing layer using a Monte Carlo-type sweep analysis. This analysis helps identify, and communicate to the layout team, the parts of the circuit that would have the greatest impact on performance.
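As a rough illustration of the idea (not Siemens' actual tooling; the function names, sheet resistance, current, and coverage range below are all invented for the sketch), a Monte Carlo sweep over per-layer metal coverage might look like:

```python
import random

def estimate_ir_drop(coverage_pct, layer_sheet_res, current_a):
    """Crude IR-drop proxy: effective resistance scales inversely with
    the fraction of the routing layer covered by power metal."""
    effective_res = layer_sheet_res / max(coverage_pct / 100.0, 1e-6)
    return current_a * effective_res

def sweep_coverage(trials=1000, seed=42):
    """Monte Carlo sweep over plausible per-layer coverage values,
    returning worst-case and mean IR drop across the trials."""
    rng = random.Random(seed)
    drops = []
    for _ in range(trials):
        coverage = rng.uniform(20.0, 80.0)  # % metal coverage per routing layer
        drops.append(estimate_ir_drop(coverage, layer_sheet_res=0.02, current_a=1.5))
    return max(drops), sum(drops) / len(drops)

worst, mean = sweep_coverage()
```

Even a toy sweep like this makes the communication point concrete: the worst-case samples flag which coverage assumptions the layout team cannot afford to miss.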

Waiting until the chiplet bump array is fully defined delays package planning and limits the ability to co-optimize. Package planning can begin even before the chiplet design has started: the chiplet's bump array and signal assignments can be created at the interposer level and passed to the IC design team for iteration as the design progresses.

Standardized interfaces and protocols are key to broad adoption of chiplet-based designs. At the same time, describing these interfaces brings new challenges for designers. Current approaches such as schematic description or HDL coding introduce the risk of human-introduced errors. To overcome this challenge, Siemens EDA has developed a new approach called interface-based design that lends itself to automation.

Automating Interface-Based Design

Interface-based design (IBD) is a new approach to capturing, designing, and managing the large numbers of complex interfaces that interconnect multiple chiplets. With an interface defined as an IBD object, the designer can focus on a higher level of connectivity abstraction. The interface description becomes part of the chiplet part model. When a designer places an instance of the chiplet, everything related to the interface is automatically put in place. This approach allows designers to explore, define, and visualize route planning without having to transition the design into a substrate place-and-route tool. It enables more insightful chiplet floorplanning and chiplet-to-package or chiplet-to-interposer signal assignments. The IBD methodology helps establish correct-by-design chiplet connectivity.
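The mechanics of treating an interface as a single object can be sketched in a few lines of Python. This is a conceptual model only; the class names, fields, and `connect` helper are invented for illustration and are not Siemens' API:

```python
from dataclasses import dataclass, field

@dataclass
class Interface:
    """A named bundle of signals that travels with the chiplet part model."""
    name: str
    signals: tuple          # ordered signal names, e.g. die-to-die lanes
    protocol: str = "UCIe"

@dataclass
class Chiplet:
    name: str
    interfaces: list = field(default_factory=list)

def connect(src, src_if, dst, dst_if):
    """Connect two chiplets by interface name; every signal in the bundle
    is paired automatically instead of being drawn wire-by-wire."""
    a = next(i for i in src.interfaces if i.name == src_if)
    b = next(i for i in dst.interfaces if i.name == dst_if)
    if len(a.signals) != len(b.signals):
        raise ValueError("interface widths differ")
    return list(zip(a.signals, b.signals))

# one call pairs all four lanes, correct by construction
cpu = Chiplet("cpu", [Interface("d2d0", ("tx0", "tx1", "rx0", "rx1"))])
hbm = Chiplet("hbm", [Interface("d2d0", ("rx0", "rx1", "tx0", "tx1"))])
nets = connect(cpu, "d2d0", hbm, "d2d0")
```

The design point is that the width mismatch check runs at connection time, so a whole class of manual wiring errors is impossible rather than merely checked later.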

Thermal, Stress and Reliability Management

Chiplets can add complex behaviors in terms of heat dissipation and thermal interactions between the chiplets and the substrates. Substrate stackup material and chiplet proximity have considerable impact on thermal and stress performance. For this reason, predictive analysis before or during the prototyping/planning phase is very important. Starting analysis as far left in the process as possible allows maximum flexibility in making material choices and tradeoffs. Designers should generate power-aware thermal and stress device-level models to provide greater accuracy for thermal and mechanical simulations. Using a combination of chip-level and package/system thermal modeling, warpage, stress and fatigue points can be identified earlier in the design phase.
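A first-order sense of why material and proximity choices matter this early comes from a simple thermal-resistance stack. The model and all numbers below are illustrative assumptions, not values from the whitepaper:

```python
def junction_temp(power_w, theta_jc, theta_ca, t_ambient=25.0,
                  neighbor_w=0.0, theta_couple=0.0):
    """First-order model: Tj = Ta + P*(theta_jc + theta_ca), plus a
    coupling term for heat leaking in from an adjacent chiplet.
    theta_* are thermal resistances in degC per watt."""
    return t_ambient + power_w * (theta_jc + theta_ca) + neighbor_w * theta_couple

# same 10 W chiplet, alone vs. packed next to a 15 W neighbor
alone = junction_temp(10.0, theta_jc=0.5, theta_ca=2.5)
packed = junction_temp(10.0, theta_jc=0.5, theta_ca=2.5,
                       neighbor_w=15.0, theta_couple=0.8)
```

Even in this toy form, the coupling term shows how placing chiplets closer together or changing stackup material (which moves the theta values) shifts junction temperature well before any detailed layout exists.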

Test and Testability

Heterogeneous chiplet designs are very different from traditional designs. IEEE test standards are being developed to accommodate these 2.5D test methods. Different tool vendors may deploy different approaches to implementing these standards, which may cause test compatibility issues between chiplets that use different DFT vendors' tools. For board-level testing, a composite BSDL file covering the internal components is preferred, but it is not necessarily supported by all DFT tool vendors, which further complicates PCB-level testing.

Although each chiplet is assumed to be delivered as a known-good-die (KGD), each still needs to be re-tested after being assembled into the 3D-IC package. As such, a production test program must be provided for each of the internal components of the 3D-IC package. The tests need to run from the external package pins, most of which are not connected directly to the chiplet pins. In addition to the individual die testing, the die-to-die interfaces between chiplets need to be functionally tested as well.
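Conceptually, giving every internal die test access from a handful of external package pins amounts to stitching a serial chain through the stack. A toy model (the pin and die names are hypothetical, and real 3D-IC test access is considerably richer than a single chain):

```python
def build_test_chain(dies, pkg_tdi="PKG_TDI", pkg_tdo="PKG_TDO"):
    """Daisy-chain each die's test-data-in/out ports so that every
    internal die is reachable from two external package pins.
    Returns the list of (source, destination) nets to route."""
    nets = []
    prev = pkg_tdi
    for die in dies:
        nets.append((prev, f"{die}.TDI"))  # feed this die from the chain
        prev = f"{die}.TDO"                # its output feeds the next die
    nets.append((prev, pkg_tdo))           # close the chain at the package
    return nets

nets = build_test_chain(["cpu", "mem"])
```

A production test program would then address each die, and each die-to-die interface, through this shared access path rather than through direct pin contact.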

Driving Verification and Signoff

To be able to release into manufacturing with confidence, we have to make sure that all the devices and substrates work together as expected. For this, it is important to start verification in the planning process and continue throughout the layout process. Such in-design validation provides early identification and resolution of manufacturing issues without running the full sign-off flow. When it comes to final design verification, it is important to also analyze various layout enhancements that will improve yield and reliability.

For more details on the “Five Key Workflows that Deliver 3D IC Packaging Success”, you can download the whitepaper published by Siemens EDA.

Also Read:

WEBINAR: Intel Achieving the Best Verifiable QoR using Formal Equivalence Verification for PPA-Centric Designs

A faster prototyping device-under-test connection

IC Layout Symmetry Challenges


WEBINAR: Intel Achieving the Best Verifiable QoR using Formal Equivalence Verification for PPA-Centric Designs

WEBINAR: Intel Achieving the Best Verifiable QoR using Formal Equivalence Verification for PPA-Centric Designs
by Synopsys on 08-30-2022 at 10:00 am

Synopsys Fusion Compiler

Synopsys Fusion Compiler offers advanced optimizations to achieve the best PPA (power, performance, area) on today’s high-performance cores and interconnect designs. However, advanced transformation techniques available in synthesis such as retiming, multi-bit registers, advanced datapath optimizations, etc. are of little value if they cannot be verified through Formal Equivalence Verification (FEV). FEV setup must be rapid and provide out-of-the-box results to avoid becoming a bottleneck on advanced designs.
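The core question FEV answers, whether the optimized netlist still computes the same function as the RTL, can be illustrated with a brute-force miter-style check. Real tools such as Formality use SAT and BDD engines rather than enumeration, and the tiny functions below are invented for the sketch:

```python
from itertools import product

def equivalent(f, g, n_inputs):
    """Brute-force combinational equivalence check: compare the two
    implementations on every input vector. Fine for tiny logic cones;
    exponential in n_inputs, which is why real FEV uses formal engines."""
    return all(f(*v) == g(*v) for v in product((0, 1), repeat=n_inputs))

# "RTL reference" vs. a factored form a synthesis optimization might produce
rtl = lambda a, b, c: (a & b) | (a & c)
opt = lambda a, b, c: a & (b | c)
bad = lambda a, b, c: a | b   # a broken optimization, for contrast
```

The value of guidance files (SVF) in the webinar's flow is precisely that the tool does not have to rediscover which transformations, like the factoring above, relate the two designs.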

In this Synopsys webinar, Intel will share how it achieved the best QoR (Quality of Results) with an aggressive frequency target (3-4GHz). Using advanced optimization techniques, such as ungrouping and sequential optimizations, resulted in faster FEV convergence with a significant reduction in verification runtime as opposed to the long setup and runtimes designers face with traditional methods.

Attendees will walk away with an understanding of how Synopsys Formality Equivalence Checking captures the design transformation/optimizations in Formality Guide Files (SVF) for rapid setup of the verification environment to avoid multiple iterative runs. In addition, ML-driven adaptive distributed verification techniques will be highlighted, which help to partition the design and run solvers in parallel to further accelerate verification runtime and out-of-the-box results.

Register Here

Speakers

Listed below are the industry leaders scheduled to speak:

Avinash Palepu

Product Marketing Manager, Sr. Staff
Synopsys

Avinash Palepu is the Product Marketing Manager for Formality and Formality ECO products at Synopsys. Starting with Intel as a Design Engineer, he has held various design, AE management, and product marketing roles in the semiconductor design and EDA industries.

Avinash holds a master’s degree in EE from Arizona State University and a bachelor’s degree from Osmania University.

Sidharth Ranjan Panda

Engineering Manager
Intel Corporation

Sidharth Ranjan Panda has 10 years of experience in the VLSI industry. He is responsible for execution and signoff convergence activities for formal equivalence verification, low-power verification, and functional ECO closure for all SoC/IP programs in the NEX BU at Intel. He is a major contributor to the development of verification tools, flows, and methodologies at Intel. Sidharth holds a master’s degree in EE from the Birla Institute of Technology and Science, Pilani.

Register Here


Fusion Compiler features a unique RTL-to-GDSII architecture that enables customers to reimagine what is possible from their designs and take the fast path to achieving maximum differentiation. It delivers superior levels of power, performance and area out-of-the-box, along with industry-best turnaround time.

About Synopsys

Synopsys, Inc. (Nasdaq: SNPS) is the Silicon to Software™ partner for innovative companies developing the electronic products and software applications we rely on every day. As an S&P 500 company, Synopsys has a long history of being a global leader in electronic design automation (EDA) and semiconductor IP and offers the industry’s broadest portfolio of application security testing tools and services. Whether you’re a system-on-chip (SoC) designer creating advanced semiconductors, or a software developer writing more secure, high-quality code, Synopsys has the solutions needed to deliver innovative products. Learn more at www.synopsys.com.

Also read:

An EDA AI Master Class by Synopsys CEO Aart de Geus

WEBINAR: Design and Verify State-of-the-Art RFICs using Synopsys / Ansys Custom Design Flow

DSP IP for High Performance Sensor Fusion on an Embedded Budget


A faster prototyping device-under-test connection

A faster prototyping device-under-test connection
by Don Dingee on 08-30-2022 at 6:00 am

ProtoBridge from S2C provides a high bandwidth prototyping device-under-test connection

When discussing FPGA-based prototyping, we often focus on how to pour IP from a formative SoC design into one or more FPGAs so it can be explored and verified before heading off to a foundry where design mistakes get expensive. There’s also the software development use case, jumpstarting coding for the SoC before silicon arrives. But there is another shift left use case – closer to project start when a pile of IP in the prototyping platform is tethered to a development host looking for signs of life. Creating a fast, productive prototyping device-under-test connection is the topic of a new white paper from S2C.

Facing the classic embedded design bring-up conundrum

Embedded designers have faced a similar board-level challenge for years. Bringing up a board requires being able to see into it with some interface. But a complex interface like USB requires both working hardware and a protocol stack for communication. When it works, it’s great. When it doesn’t, it can be hard to get visibility into what’s wrong. Ditto for other complex blocks where visibility is challenging.

Running around the board's interior with a logic analyzer is where the fun, or lack thereof, begins. Traces may be tough to probe. Getting stimulus and triggering right to see the problem as it's happening can be elusive. Moving a wide bus setup to different points around the board takes time. The more logic gets swept up into large devices like ASICs and FPGAs, the harder physical probing gets, and the more visibility drops.

Hoping to solve visibility in the embedded world, JTAG emerged first as a simple daisy-chain connection between chips or logic blocks inside an ASIC or FPGA. A JTAG analyzer is a simple gadget with four required wires and a fifth optional wire. The scheme works, but it can be painfully slow. If one is to solve the bring-up conundrum on an FPGA-based prototyping platform, which is a complex board with some big FPGAs, something much better is needed.
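Why JTAG can be painfully slow is easy to see when the daisy chain is modeled as one long shift register: one bit of visibility per TCK cycle. This is a toy model of the shift behavior, not any vendor's implementation:

```python
def shift_chain(chain_bits, tdi_stream):
    """Model a JTAG daisy chain as a single shift register. Each clock
    pushes one bit in on TDI and one bit out on TDO, so reading N bits
    of state always costs N clock cycles."""
    chain = list(chain_bits)
    tdo = []
    for bit in tdi_stream:
        tdo.append(chain[-1])        # the bit falling off the far end
        chain = [bit] + chain[:-1]   # everything else shifts toward TDO
    return tdo, chain

# three clocks to capture three bits of state while loading new ones
tdo, chain = shift_chain([0, 0, 0], [1, 1, 1])
```

Scale that one-bit-per-clock budget up to thousands of internal signals and the appeal of a wider, faster access path becomes obvious.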

Moving to native interfaces on both sides of the workflow

Fortunately, two interfaces exist that fit perfectly into an FPGA-based prototyping device-under-test connection.

  • AXI is a ubiquitous native interconnect between IP blocks on the device-under-test side. It’s a single initiator, single target protocol in basic form, but it extends easily into an N:M interconnect that can scale topology for connecting more IP blocks.
  • PCIe is easy to find in or add to development hosts. It’s fast, cabling is simple, and a host device driver is straightforward.

S2C brings in the ProtoBridge System, a transactor-level interface with speed and visibility that improves test coverage and productivity, especially in the early phases of design. Here’s a conceptual diagram.

Transactors offer a way to bridge the gap between behavioral models typical of a test and verification environment and RTL models running in hardware on the FPGAs. The other benefit is that it allows software developers and test engineers to work in a familiar environment, C/C++, instead of RTL or FPGA constructs.
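From the host side, a transactor-style API might look like the following sketch. This is purely illustrative: the article does not show ProtoBridge's actual driver API, so the class, method names, and the in-memory stand-in for the PCIe link are all invented:

```python
class AxiTransactor:
    """Hypothetical host-side transactor: plain read/write calls from
    C/C++-style test code become AXI transactions carried to the FPGA.
    A dict stands in for the real PCIe link and DUT address space."""
    def __init__(self):
        self.mem = {}   # models the DUT-side AXI address space

    def axi_write(self, addr, data):
        """Write a byte string to the DUT at the given AXI address."""
        for i, b in enumerate(data):
            self.mem[addr + i] = b

    def axi_read(self, addr, length):
        """Read `length` bytes back from the DUT address space."""
        return bytes(self.mem.get(addr + i, 0) for i in range(length))

# a test engineer streams stimulus in and checks results, all from Python/C
dut = AxiTransactor()
dut.axi_write(0x1000, b"\x01\x02\x03\x04")
readback = dut.axi_read(0x1000, 4)
```

The point of the abstraction is that the person writing this loop never touches RTL or FPGA constructs; the transactor handles the translation to bus-level traffic.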

And the requisite speed is there. Some tests may require transferring big stimulus files, like videos or complex waveforms. Stimulus can be stored on the host until needed for a test, then transferred for execution at full PCIe bandwidths up to 4 Gb/sec.
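At the quoted link rate, transfer time for a large stimulus file is simple arithmetic. The 4 Gb/sec figure comes from the article; the file size and helper name are illustrative:

```python
def transfer_time_s(file_bytes, link_gbit_per_s=4.0):
    """Rough time to move a stimulus file over the link, ignoring
    protocol overhead: bits to send divided by bits per second."""
    return (file_bytes * 8) / (link_gbit_per_s * 1e9)

# a 500 MB video stimulus file moves in about a second
t = transfer_time_s(500_000_000)
```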

As the project progresses, things shift from an initial bring-up mode to test coverage mode. The ProtoBridge scheme remains the same from start to finish, so teams don’t have to bring an array of different tools to the party. Workflows are smoother, productivity improves, and time is freed up for deeper exploration and testing of designs – all benefits of a shift left strategy.

To get the entire white paper and see the rest of the S2C story on the high bandwidth prototyping device-under-test connection and more background on ProtoBridge, hit the link below, then scroll down to this title:

White paper: High-bandwidth PC-to-DUT Connectivity with ProtoBridge