

TSMC has spent a lot more money on 300mm than you think
by Scotten Jones on 04-06-2023 at 10:00 am


Until November 2022, IC Knowledge LLC was an independent company that had become the world leader in cost and price modeling of semiconductors. That month TechInsights acquired IC Knowledge LLC, which is now a TechInsights company.

For many years, IC Knowledge has published a database tracking all the 300mm wafer fabs in the world. Compiled from a variety of public and private sources, the 300mm Watch database is, we believe, the most detailed database of 300mm wafer fabs available. IC Knowledge LLC also produces the Strategic Cost and Price Model, which provides detailed cost and price modeling for 300mm wafer fabs as well as detailed equipment and materials requirements. Using both products to analyze a company provides a uniquely comprehensive view, and we recently applied these capabilities to a detailed analysis of TSMC's 300mm wafer fabs.

One way we check the modeling results of the Strategic Cost and Price Model is to compare the modeled spending on 300mm fabs for TSMC to their reported spending. Since the early 2000s nearly all of TSMC’s capital spending has been on 300mm wafer fabs and the Strategic Model covers every TSMC 300mm wafer fab.

Figure 1 presents an analysis of TSMC’s cumulative capital spending by wafer fab site from 2000 to 2023 and compares it to the reported TSMC capital spending.

Figure 1. TSMC Wafer Fab Spending by Fab.

In Figure 1 there is a cumulative area plot by wafer fab, calculated using the Strategic Cost and Price Model – 2023 – revision 01 – unreleased, and a set of bars representing TSMC's reported capital spending. One key thing to note about this plot: because the Strategic Cost and Price Model is a cost and price model, and fabs don't begin depreciating until they are put on-line, the calculated spending from the model is recognized when a fab comes on-line, whereas the reported TSMC spending is recognized when the expenditure is made, regardless of when the capacity comes on-line. TSMC's capital spending also includes some 200mm fab, mask, and packaging spending. The TSMC reported spending is therefore adjusted as follows (a short sketch of the arithmetic follows the list):

  1. In the early 2000s, estimated 200mm spending is subtracted from the totals. In some cases TSMC announced what portion of its capital spending was for 200mm. Through 2022, this is not a material amount of the cumulative total.
  2. Roughly 10% of TSMC's recent capital spending is for masks and packaging; TSMC discloses this amount and it is subtracted from the total.
  3. Capital equipment that has been acquired but not yet put on-line is accounted for as assets in progress, a number disclosed in TSMC's financial filings. We subtract it from the reported spending because the Strategic Model calculates on-line capital.
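To make the arithmetic concrete, here is a minimal Python sketch of how these three adjustments combine. All figures are hypothetical placeholders rather than actual TSMC disclosures or model output; only the roughly 10% mask and packaging share comes from the discussion above.

```python
# Hedged sketch of the capex adjustment described above.
# All dollar figures are hypothetical placeholders (in billions of USD).

def adjusted_300mm_capex(reported_capex, est_200mm_spend,
                         mask_packaging_share, assets_in_progress):
    """Reduce reported capex to an estimate of on-line 300mm wafer-fab spending."""
    capex = reported_capex
    capex -= est_200mm_spend                        # 1. estimated 200mm spending
    capex -= reported_capex * mask_packaging_share  # 2. masks and packaging (~10% recently)
    capex -= assets_in_progress                     # 3. equipment not yet on-line
    return capex

# Hypothetical year: $36B reported, $0.5B on 200mm, 10% masks/packaging,
# $4B of assets in progress at year end.
print(adjusted_300mm_capex(36.0, 0.5, 0.10, 4.0))   # -> 27.9
```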

Note that Fabs 12 and 20 are/will be in Hsinchu, Taiwan; Fabs 14 and 18 are in Tainan, Taiwan; Fab 15 is in Taichung, Taiwan; Fab 16 is in Nanjing, China; Fab 21 is in Arizona, United States; Fab 22 is planned for Kaohsiung, Taiwan; and Fab 23 is being built in Kumamoto, Japan.

Some interesting conclusions from this analysis:

TSMC has spent roughly $135 billion on 300mm wafer fabs through 2022. This number should break $200 billion in 2024.

Fab 18 (5nm and 3nm production) is TSMC's most expensive fab; we expect that site to exceed $100 billion in investment next year. Interestingly, Fab 18 sits right next to Fab 14, where more than $30 billion has been invested, and the combination will approach $140 billion next year!

The roughly $135 billion of capital investment in 300mm fabs by TSMC alone is an amazing number. Perhaps even more amazing, the investment is accelerating: it should break $200 billion in 2024 and could break $400 billion by 2030.

Customers that license our 300mm Watch channel not only get the 300mm Watch database along with regular updates, they also get access to this recent TSMC analysis and to a similar analysis we are doing of Samsung. For information on the 300mm Watch database or the Strategic Cost and Price Model, please contact sales@techinsights.com.

Also Read:

SPIE Advanced Lithography Conference 2023 – AMAT Sculpta® Announcement

IEDM 2023 – 2D Materials – Intel and TSMC

IEDM 2022 – TSMC 3nm

IEDM 2022 – Imec 4 Track Cell



Interconnect Under the Spotlight as Core Counts Accelerate
by Bernard Murphy on 04-06-2023 at 6:00 am


In the march to more capable, faster, smaller, and lower power systems, Moore's Law gave software a free ride for 30 years or so purely on semiconductor process evolution. Compute hardware delivered improved performance/area/power metrics every year, allowing software to expand in complexity and deliver more capability with no downsides. Then the easy wins became less easy. More advanced processes continued to deliver higher gate counts per unit area, but gains in performance and power started to flatten out. Since our expectations for innovation didn't stop, hardware architecture advances have become more important in picking up the slack.

Drivers for increasing core-count

An early step in this direction used multi-core CPUs to accelerate total throughput by threading or virtualizing a mix of concurrent tasks across cores, reducing power as needed by idling or powering down inactive cores. Multi-core is standard today and a trend in many-core (even more CPUs on a chip) is already evident in server instance options available in cloud platforms from AWS, Azure, Alibaba and others.

Multi-/many-core architectures are a step forward, but parallelism through CPU clusters is coarse-grained and has its own performance and power limits, thanks to Amdahl's law. Architectures became more heterogeneous, adding accelerators for image, audio, and other specialized needs. AI accelerators have also pushed fine-grained parallelism, moving to systolic arrays and other domain-specific techniques. This was working pretty well until ChatGPT appeared: GPT-3 weighs in at 175 billion parameters and GPT-4 is reported to be far larger still, orders of magnitude more complex than earlier AI systems, forcing yet more specialized acceleration features within AI accelerators.
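To see why coarse-grained CPU parallelism saturates, here is a small sketch of Amdahl's law for a hypothetical workload in which 90% of the work parallelizes perfectly; the numbers are purely illustrative.

```python
# Amdahl's law: speedup with n cores when a fraction p of the work parallelizes.
# With p = 0.9 the speedup can never exceed 1 / (1 - p) = 10x, no matter how
# many cores are added.

def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for n in (2, 8, 32, 128):
    print(n, round(amdahl_speedup(0.90, n), 2))
# 2 -> 1.82, 8 -> 4.71, 32 -> 7.8, 128 -> 9.34
```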

On a different front, multi-sensor systems in automotive applications are now being integrated into single SoCs for improved environmental awareness and improved PPA. Here, new levels of autonomy in automotive depend on fusing inputs from multiple sensor types within a single device, with subsystems replicated 2X, 4X or 8X.

According to Michał Siwinski (CMO at Arteris), sampling over a month of discussions with multiple design teams across a wide range of applications suggests those teams are actively turning to higher core counts to meet capability, performance, and power goals. He tells me they also see this trend accelerating. Process advances still help with SoC gate counts, but responsibility for meeting performance and power goals is now firmly in the hands of the architects.

More cores, more interconnect

More cores on a chip imply more data connections between those cores: within an accelerator between neighboring processing elements, to local cache, and to accelerators for sparse matrix and other specialized handling. Add hierarchical connectivity between accelerator tiles and system-level buses. Add connectivity for on-chip weight storage, decompression, broadcast, gather and re-compression. Add HBM connectivity for working cache. Add a fusion engine if needed.
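A back-of-envelope sketch (my own illustration, not Arteris data) of why this connectivity cannot simply be dedicated point-to-point wiring: the number of direct links grows quadratically with the number of communicating blocks, which is what pushes large designs toward a shared NoC.

```python
# Dedicated point-to-point wiring between n communicating blocks needs
# n * (n - 1) / 2 links, which grows quadratically with core count.

def p2p_links(n_blocks):
    return n_blocks * (n_blocks - 1) // 2

for n in (4, 16, 64, 256):
    print(f"{n:4d} blocks -> {p2p_links(n):6d} dedicated links")
# 4 -> 6, 16 -> 120, 64 -> 2016, 256 -> 32640
```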

The CPU-based control cluster must connect to each of those replicated subsystems and to all the usual functions – codecs, memory management, safety island and root of trust if appropriate, UCIe if a multi-chiplet implementation, PCIe for high bandwidth I/O, and Ethernet or fiber for networking.

That's a lot of interconnect, with direct consequences for product marketability. In processes below 16nm, NoC infrastructure now accounts for 10-12% of die area. Even more important, as the communication highway between cores it can have a significant impact on performance and power. There is a real danger that a sub-optimal implementation will squander the expected architectural performance and power gains, or worse, result in numerous re-design loops to converge. Yet finding a good implementation in a complex SoC floorplan still depends on slow trial-and-error optimization in already tight design schedules. We need to make the jump to physically aware NoC design, to guarantee full performance and power support from complex NoC hierarchies, and we need to make these optimizations faster.

Physically aware NoC design keeps Moore's Law on track

Moore’s law may not be dead but advances in performance and power today come from architecture and NoC interconnect rather than from process. Architecture is pushing more accelerator cores, more accelerators within accelerators, and more subsystem replication on-chip. All increase the complexity of on-chip interconnect. As designs increase core counts and move to process geometries at 16nm and below, the numerous NoC interconnects spanning the SoC and its sub-systems can only support the full potential of these complex designs if implemented optimally against physical and timing constraints – through physically aware network on chip design.

If you also worry about these trends, you might want to learn more about Arteris FlexNoC 5 IP technology HERE.

 



AI is Ushering in a New Wave of Innovation
by Greg Lebsack on 04-05-2023 at 10:00 am


Artificial intelligence (AI) is transforming many aspects of our lives, from the way we work and communicate to the way we shop and travel. Its impact is felt in nearly every industry, including the semiconductor industry, which plays a crucial role in enabling the development of AI technology.

One of the ways AI is affecting our daily lives is by making everyday tasks more efficient and convenient. For example, AI-powered virtual assistants such as Alexa and Siri can help us schedule appointments, set reminders, and answer our questions. AI algorithms are also being used in healthcare to analyze patient data and provide personalized treatment plans, as well as in finance to detect fraud and make investment decisions.

AI is also changing the way we work. Many jobs that used to require human labor are now being automated using AI technology. For example, warehouses are increasingly using robots to move and sort goods, and customer service departments are using chatbots to handle routine inquiries.

The semiconductor industry is a critical component of the AI revolution. AI relies on powerful computing processors, such as graphics processing units (GPUs) and deep learning processors (DLPs), to process massive amounts of data and perform complex calculations. The demand for these chips has skyrocketed in recent years, as more companies invest in AI technology.

AI is beginning to have an impact on the design and verification of ICs. AI can be used to improve the overall design process by providing designers with new tools and insights. For example, AI-powered design tools can help designers explore design alternatives and identify tradeoffs between performance, power consumption, and cost. AI can also be used to provide designers with insights into the behavior of complex systems, such as the interaction between software and hardware in an embedded system.

AI is enabling the development of new types of chips and systems. For example, AI is driving the development of specialized chips for specific AI applications, such as image recognition and natural language processing. These specialized chips can perform these tasks much faster and more efficiently than general-purpose processors and are driving new advances in AI technology.

Semiconductor fabrication is the industry's largest expenditure, and it is where AI has the greatest potential. AI can help optimize the manufacturing process from design to fabrication by analyzing process data, identifying defects, and suggesting optimizations. These insights and changes will allow fabs to detect problems earlier, reducing cost, increasing yield, and improving overall efficiency.
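As a toy illustration of what "analyzing process data and identifying defects" can look like, the sketch below flags an excursion in synthetic film-thickness measurements using a simple z-score test. Real fab analytics are far more sophisticated; every number here is made up.

```python
# Toy excursion detection on synthetic metrology data (illustration only).
import numpy as np

rng = np.random.default_rng(0)
film_thickness = rng.normal(loc=50.0, scale=0.5, size=500)  # nominal wafers, nm
film_thickness[123] = 54.0                                   # injected excursion

z = np.abs(film_thickness - film_thickness.mean()) / film_thickness.std()
suspect_wafers = np.flatnonzero(z > 4.0)
print("suspect wafers:", suspect_wafers)   # expected: [123], the injected excursion
```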

There are also many concerns with a technology that is this disruptive.  While this automation can potentially increase productivity and reduce costs, it also raises concerns about job loss and the need for workers to acquire new skills.  There are also a number of ethical concerns associated with AI.  AI systems can collect and analyze large amounts of personal data, raising concerns about privacy and surveillance. There are also concerns about the potential for corporations and governments to misuse this data for their own purposes.

AI is transforming many aspects of our lives, from the way we work and communicate to the way we shop and travel. The semiconductor industry is a critical component of the AI revolution, not only providing the computing power to enable AI, but also benefiting from AI for IC design and manufacturing improvements. As AI technology continues to advance, it is likely to play an increasingly important role in the semiconductor design process, enabling new levels of innovation and driving new advances in AI technology. It is essential to stay informed about AI's impact and ensure that its benefits are realized while minimizing the potential risks.

Also Read:

Narrow AI vs. General AI vs. Super AI

Scaling AI as a Service Demands New Server Hardware

MIPI D-PHY IP brings images on-chip for AI inference

Deep thinking on compute-in-memory in AI inference



LIVE WEBINAR: New Standards for Semiconductor Materials
by Daniel Nenni on 04-05-2023 at 6:00 am


This is the fifth webinar in our series exploring trending topics in materials and semiconductor development. Join us to discover how digital solutions are forming new ways of operating in a fast-paced, highly demanding semiconductor industry.

With data analytics and digital tools, we are setting new standards for the way we develop new materials, manufacture, control our processes, and supply our customers. During this webinar, you will learn how we pair engineering principles with data analytics capabilities to, first, drive digitalization with digital twin deployment and, second, establish comprehensive data analytics methods to deploy descriptive, predictive, and prescriptive solutions throughout the organization.

LEARN MORE 

Why attend? 

Our Semiconductor Materials Series attracts professionals, business and technology leaders, researchers, academics, and industry analysts from across the electronics supply chain around the world.

In this webinar, you will gain practical insights on:

  • Data-based operations of a semiconductor materials supplier
  • Digital twin deployment in everyday operations
  • Advanced data analytics methodologies to drive innovation, increase transparency, and act with speed

Who should attend? 

  • Quality experts, process engineers, Supply Chain, Process, and Technology Development teams in semiconductor companies
  • Data enthusiasts and digitalization experts

Register today to access exclusive content and engage in the interactive Q&A session.

You will be able to apply innovative techniques and best practices to solve your unique challenges.

Attendees are invited to submit questions ahead of time at info_semi_webinar@emdgroup.com.

AGENDA

4:00 pm – 4:05 pm: Welcome Remarks
Laith Altimime, President, SEMI Europe
Anand Nambiar, Executive Vice President and Global Head, Semiconductor Materials, The Electronics business of Merck KGaA, Darmstadt, Germany

4:05 pm – 4:45 pm: Presentations
Dr. Safa Kutup Kurt, Executive Director and Head of Operations of Digital Solutions, The Electronics business of Merck KGaA, Darmstadt, Germany
Biography: His organization is a key enabler for designing and optimizing products using data analytics methodology in R&D, quality, and supply chain, while ensuring data protection in sensitive environments. After earning a bachelor's degree in Chemical Engineering and Business Administration in Turkey, Kutup received a master's degree in Industry 4.0 Technologies at TU Dortmund University in Germany, where he also earned his Ph.D. in Chemical Engineering. He and his team have led several data-driven process optimization projects worldwide for the Electronics and Life Science business sectors. He has co-authored over 16 technical papers and patents focused on smart and continuous manufacturing technology, equipment design, and process intensification.

Anja Muesch, Head of Use Case Management of Digital Solutions, The Electronics business of Merck KGaA, Darmstadt, Germany
Biography: Anja's work is focused on portfolio development and the expansion of data sharing and analytics engagements. Her team manages use cases for customers and suppliers along the use case life cycle. She holds an MSc in Business Chemistry from Heinrich-Heine University in Düsseldorf, Germany, and the Universiteit van Amsterdam, the Netherlands. She has in-depth experience in project management and strategy development focused on data and digital.

4:45 pm – 5:00 pm: Live Q&A and Conclusions

REGISTRATION

Registration is FREE of charge. If you miss the live session, view the recording on-demand.

LEARN MORE

DIGITAL SOLUTIONS FOR THE SEMICONDUCTOR INDUSTRY

The semiconductor industry demands higher yield, zero-defect production, and shorter time to market to meet the ultimate goal: the “ideal ramp”.

Merck KGaA, Darmstadt, Germany, applies advanced analytics methods to design and optimize our products in R&D, Quality, and High Volume Manufacturing while ensuring data security in our sensitive environments.

Also Read:

Step into the Future with New Area-Selective Processing Solutions for FSAV

Integrating Materials Solutions with Alex Yoon of Intermolecular

Ferroelectric Hafnia-based Materials for Neuromorphic ICs



Autonomy Lost without Nvidia
by Roger C. Lanctot on 04-04-2023 at 10:00 am


Five years ago Uber nearly singlehandedly wiped out the prospect of a self-driving car industry with the inept management of its autonomous vehicle testing in Phoenix which led to a fatal crash. The massive misstep instantly vaporized tens of billions of dollars of Uber’s market cap and sent the company’s robotaxi development arm into a tailspin from which it was unable to recover.

It had only been a few months before – at the CES event in Las Vegas – that Nvidia had proudly trumpeted its newfound relationship with Uber – just one of several dozen autonomous vehicle collaborations announced by Nvidia. But the Uber crash cast a pall that caused Nvidia to pause its own autonomous vehicle testing and, ultimately, dial back on its autonomous vehicle grandstanding.

A measure of that impact was evident in Nvidia CEO Jensen Huang’s keynote at this week’s Nvidia GTC event. Huang’s keynotes have become a tech industry bellwether as the company’s GPU processing platforms have risen to prominence across the spectrum of emerging high-end applications.

While Nvidia remains actively engaged in the development of autonomous vehicle technology, the topic received scant mention in Huang's keynote this year. Instead, generative AI and large language model inference engines got the spotlight as Nvidia announced the launch of the Nvidia AI Foundation, a cloud-based platform developed in partnership with Microsoft, Google, and Oracle to deliver processing power for a wide range of AI-centric applications.

Huang announced the Omniverse Managed Cloud Service, which he described as AI's iPhone moment, delivering four configurations from a single architecture. The four configurations included L4 for AI video, L40 for Omniverse and graphics rendering, H100 PCIE for scaling out large language model inference engines, and Grace Hopper for recommender systems and vector databases.

Huang's more than hour-long presentation was a typical tour de force covering all of Nvidia's technological advances in GPU and server technology, along with a review of various strategic engagements and technology deployments. The fact that autonomous vehicle tech got short shrift, while automotive factory planning and automation did get a fair bit of attention, was yet another hint that autonomous vehicle tech has been consigned to the sidelines.

The dark cloud of Uber’s failure lingers over the industry. Even semi-autonomous vehicle operator Tesla struggles to explain suspicious Autopilot and full-self-driving misbehavior (crashes – fatal and otherwise) to regulators. Autonomous vehicle developers have been forced to extend their viability forecasts. Some have given up altogether.

Cruise CEO Kyle Vogt told Fortune Magazine this week that “within 10 years driving a car will be a hobby like riding horses is today.” He added that within five years the majority of people would get around cities in autonomous vehicles.

Sadly, Vogt’s sanguine view is shared by few.

While Vogt may foresee a very short time-line to the arrival of millions of autonomous vehicles on city streets – vehicles the deployment of which Vogt believes does not require National Highway Traffic Safety Administration exemptions – the dim reality is manifest in the stuttering performance of Cruise vehicles on the streets of San Francisco today.

What was once sexy and worthy of spotlighted emphasis at Nvidia's GTC event has now become an awkward and frightening embarrassment. The promise of autonomous vehicles transforming society is being lost in the focus on the downside: potential catastrophic failures and exorbitant expenditures with little short-term prospect of revenue.

Interestingly, the technology that has seized the spotlight – generative AI – is itself a pricey proposition with ill-defined commercialization opportunities. While autonomous tech transitions through its trough of despair, ChatGPT and its ilk are riding high on the ether of unlimited potential.

In some respects, the collateral damage from the fatal Uber crash five years ago was Nvidia’s diminished enthusiasm for autonomous vehicle tech. The sector is in dire need of leadership and vision – something that Nvidia is imparting to the AI sector in spades.

It might be time for Nvidia to get its robotaxi mojo rolling again. The fatal Uber crash was a devastating blow, but it ought not to be fatal to the entire sector. Autonomous vehicle tech remains a strategic focus for Nvidia and retains the promise of societal transformation. This is no time to throw in the towel.

Also Read:

Mercedes, VW Caught in TikTok Blok

AAA Hypes Self-Driving Car Fears

IoT in Distress at MWC 2023



AI in Verification – A Cadence Perspective
by Bernard Murphy on 04-04-2023 at 6:00 am


AI is everywhere, or so it seems, though it is often promoted with too little detail to understand the methods. I now look for substance: not trade secrets, but how exactly a vendor is using AI. Matt Graham (Product Engineering Group Director at Cadence) gave a good and substantive tutorial pitch at DVCon, with real examples of goal-centric optimization in verification. Some of these are learning-based; some are simply sensible automation. In the latter class he mentioned test weight optimization: ranking tests by value and perhaps ordering them by contribution to coverage, pushing the low contributors to the end of the list or out of it. This is human intelligence applied to automation, just normal algorithmic progress.
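As a flavor of that kind of automation, here is a minimal sketch (my own, not Cadence's implementation) that orders tests greedily by their incremental contribution to coverage, pushing tests that add nothing new to the end; the test names and coverage bins are invented.

```python
# Greedy test ordering by incremental coverage contribution (illustrative only).

def rank_tests(test_coverage):
    """test_coverage: dict mapping test name -> set of coverage bins it hits."""
    remaining = dict(test_coverage)
    covered, order = set(), []
    while remaining:
        # Pick the test that adds the most not-yet-covered bins.
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        covered |= remaining.pop(best)
        order.append(best)
    return order

tests = {
    "smoke":   {"b0", "b1"},
    "random1": {"b1", "b2", "b3", "b4"},
    "random2": {"b2", "b3"},      # adds nothing once random1 has run
    "corner":  {"b5"},
}
print(rank_tests(tests))          # ['random1', 'smoke', 'corner', 'random2']
```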

AI is a bigger change, yet our expectations must remain grounded to avoid disappointment and the AI winters of the past. I think of AI as a second industrial revolution. We stopped using an ox to drag a plough through a field and started building steam-driven tractors. The industrial revolution didn't replace farmers; it made them more productive. Today, AI points to a similar jump in verification productivity. The bulk of Matt's talk was on opportunities, some of which are already claimed for the Verisium product.

AI opportunities in simulation

AI can be used to compress regression by learning from coverage data in earlier runs. It can also be used to increase coverage in lightly covered areas and on lightly covered properties, both worthy of suspicion that unseen bugs may lurk under rocks. Such methods don't replace constrained random (CR) but rather enhance it, increasing the bug exposure rate over CR alone.

One useful way to approach rare states is to learn from front-end states which naturally, if infrequently, reach rare states or come close. New tests can be synthesized from such learning and, together with regular CR tests, can increase the overall bug rate both early and late in the bug maturation cycle.

AI opportunities in debug

I like to think of debug as the third wall in verification. We've made a lot of progress in test generation productivity through reuse (VIP) and test synthesis, though we're clearly not done yet. And we continue to make progress on verification engines, from virtual to formal and in hardware-assist platforms (also not done yet). But debug remains a stubbornly manual task, consuming a third or more of verification budgets. Debuggers are polished but don't attack the core manual problems: figuring out where to focus, then drilling down to find root causes. We're not going to make a big dent until we start knocking down this wall.

This starts with bug triage. Significant time can be consumed simply in separating a post-regression pile of failures into those that look critical and those that can wait for later analysis, then sub-bucketing them into groups with suspected common causes. Clustering is a natural fit for unsupervised learning, in this case looking at metadata from prior runs: What check-ins were made prior to the test failing? Who ran it and when? How long did the test run? What was the failure message? What part of the design was responsible for the failure message?

Matt makes the point that as engineers we can look at a small sample of these factors but are quickly overwhelmed when we have to look at hundreds or thousands of pieces of information. AI in this context is just automation to handle large amounts of relatively unstructured data and drive intelligent clustering. In a later run, when intelligent triage sees a problem matching an existing cluster with high probability, bucket assignment becomes obvious. An engineer then only needs to pursue the most obvious or easiest failing test in a bucket to a root cause, and can then re-run regression in the expectation that all or most of that class of problems will disappear.
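Here is a hedged sketch of the triage idea, not how Verisium actually works: cluster failing tests by their failure-message text so that an engineer only needs to chase one representative per bucket. The messages are invented and scikit-learn is assumed to be available.

```python
# Illustrative failure-triage clustering on invented log messages.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

failures = [
    "UVM_ERROR axi_scoreboard: read data mismatch at 0x4000",
    "UVM_ERROR axi_scoreboard: read data mismatch at 0x4008",
    "UVM_FATAL pcie_link: link training timeout after reset",
    "UVM_FATAL pcie_link: link training timeout during L1 exit",
    "UVM_ERROR axi_scoreboard: write data mismatch at 0x8000",
]

features = TfidfVectorizer().fit_transform(failures)
buckets = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

for bucket, message in zip(buckets, failures):
    print(bucket, message)   # AXI mismatches in one bucket, PCIe timeouts in the other
```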

On problems you choose to debug, deep waveform analysis can further narrow down a likely root cause: comparing legacy and current waveforms, legacy RTL versus current RTL, or a legacy testbench versus the current testbench. There is even research on AI-driven methods to localize a fault to a file, or possibly even to a module (see this for example).

AI Will Chip Away at Verification Complexity

AI-based verification is a new idea for all of us; no one is expecting a step-function jump into full-blown adoption. That said, there are already promising signs. Orchestrating runs against proof methods appeared early in formal methodologies. Regression optimization for simulation is on an encouraging ramp to wider adoption. AI-based debug is the new kid in this group, showing encouraging results in early adoption, which will no doubt drive further improvements and push debug further up the adoption curve. All inspiring progress toward a much more productive verification future.

You can learn more HERE.



Mapping SysML to Hardware Architecture
by Daniel Payne on 04-03-2023 at 10:00 am


The Systems Modeling Language (SysML) is used by systems engineers who want to specify, analyze, design, verify and validate a specific system. SysML started out as an open-source project and is a subset of the Unified Modeling Language (UML). Mirabilis Design has a tool called VisualSim Architect that imports your SysML descriptions so you can measure the actual performance of an electronic system before the software is developed, choosing an optimal hardware configuration and knowing that requirements and constraints are met very early in the system design process, before detailed implementation begins. I attended their most recent webinar and learned how this system-level design process can be used to create a more optimal system that meets metrics like power, latency and bandwidth.

Tom Jose was the webinar presenter from Mirabilis, where he has worked in R&D and is now the Lead Application Engineer. The first case study presented was a media application system with camera, CPU, GPU and memory components, where the power target was under 2.0W and the number of frames analyzed in 20 ms had to exceed 50,000.

Media Application

The first approach ran all of the tasks in software on the A53 core, without any acceleration; the power goal was met but the peak frame rate was not. Analysis revealed that the rotate-frame step was taking too long, so a second approach using hardware acceleration was modeled. With HW acceleration the frame rate reached 125.5K frames, but power was too high. For the third iteration, power management was applied to the HW accelerator block, and both the frame rate and power metrics were fully achieved.
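The iteration loop above boils down to checking each candidate architecture against the two requirements. Below is a small sketch of that check; only the targets (under 2.0W, more than 50,000 frames per 20 ms) and the 125.5K frame figure come from the webinar, while the remaining power and frame-rate numbers and the configuration names are hypothetical placeholders.

```python
# Requirement check for the media-app case study (placeholder numbers marked).

POWER_LIMIT_W = 2.0      # from the webinar: power target under 2.0W
FRAME_TARGET = 50_000    # from the webinar: frames analyzed per 20 ms window

configs = {
    "sw_only_a53":       {"power_w": 1.8, "frames": 30_000},   # hypothetical
    "hw_accel":          {"power_w": 2.4, "frames": 125_500},  # power is a placeholder
    "hw_accel_pwr_mgmt": {"power_w": 1.9, "frames": 110_000},  # hypothetical
}

for name, c in configs.items():
    ok = c["power_w"] < POWER_LIMIT_W and c["frames"] > FRAME_TARGET
    print(f"{name:18s} {c['power_w']:.1f}W {c['frames']:7d} frames -> {'PASS' if ok else 'FAIL'}")
```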

For a second case study Tom showed an automotive example where AUTOSAR was running on an ECU while faults were being injected.

AUTOSAR Test Case

When faults were injected into the simulation, memory became corrupted, causing spurious runnable execution. Failures also increased system latency while keeping ECU activity high. Normal execution could be quickly compared against execution with injected faults.

The benefit of using VisualSim Architect is that a systems engineer can find the right hardware components early in the exploration phase, eliminating surprises during implementation. This Mirabilis approach bridges the gap between concept and implementation, so once your requirements are defined you can explore different architectures, optimize for performance and power, or even inject faults to see the consequences. Engineers can model the software, network and hardware components rather quickly in this GUI-based simulation platform. There’s already an extensive system-level library of IP models, allowing you to drag and drop to model your system.

A final example showed a radar system that started out as SysML blocks and was then imported into VisualSim Architect for exploration, analysis and optimization.

RADAR Example

The RADAR simulation was run, but the activity results showed that requirements were not being met. By sweeping some of the system-level parameters and re-running a few dozen simulations, a table of passing and failing requirements was generated, from which the system architect could choose which of the passing cases to use.

Summary

Mirabilis started out in 2003 and over the past 20 years has grown to include development and support centers in the USA, India, Taiwan, Japan and the Czech Republic. The VisualSim Architect tool enables a systems engineer to visualize, optimize and validate a system specification prior to detailed implementation. This methodology produces a shift-left benefit by shortening the time required for model creation, communication, refinement, and even implementation.

View the 28 minute webinar recording on YouTube.

Related Blogs



Full-Stack, AI-driven EDA Suite for Chipmakers
by Kalar Rajendiran on 04-03-2023 at 6:00 am


Semiconductor technology is among the most complex of technologies, and the semiconductor industry is among the most demanding of industries. Yet the ecosystem has delivered incredible advances over the last six decades, from which the world has benefitted tremendously. And of course the markets want that break-neck pace of advances to continue. But the industry faces not only technological challenges but also a projected shortage of skilled engineers. In terms of technology complexity, we have entered the SysMoore era, and the power, performance and area (PPA) requirements of SysMoore-era applications are a lot more demanding than those of the Moore era. In terms of engineering talent, demand for US-based design workers alone is expected to reach 89,000 by 2030, with a projected shortfall of 23,000. [Source: BCG Analysis]

Artificial Intelligence (AI) to the rescue. With foresight, many companies in the ecosystem have started leveraging AI in their tool offerings, with tangible benefits accruing to their customers. AI techniques can automate several time-consuming and repetitive tasks in the chip design process, such as layout optimization, verification, and testing. They can help automate the design flow, reduce design time, and optimize power, performance, and area (PPA) tradeoffs.

Two years ago, Synopsys launched its AI-driven design space optimization (DSO.ai) capability. Since then, DSO.ai has boosted designer productivity and has been used in 160 production tape-outs to date. DSO.ai uses machine learning techniques to explore the design space and identify optimal solutions that meet the designer's PPA targets. But DSO.ai was just the tip of the iceberg in terms of AI-driven technology from Synopsys; the company has been investing heavily in AI to enhance and expand its tool offerings.

Synopsys.ai

This week at the Synopsys Users Group (SNUG) conference, the company unveiled Synopsys.ai as the industry's first full-stack, AI-driven EDA suite for chipmakers. The full suite spans everything from system architecture to silicon test and more.

Verification and Test

Going to market rapidly and cost-effectively with a high-yielding chip involves more than just design implementation. The entire process of moving from architecture to design implementation to a production-worthy manufactured chip can be broadly divided into design, verification, and test. Without successful verification, we may end up with a functionally wrong chip; without successful testing, we may end up with a functionally failing chip. And just as the design phase is time- and effort-intensive, so are verification and test.

AI can help verify and validate the chip design by analyzing and predicting potential design issues and errors. It can help reduce the need for manual verification and testing and improve the chip’s quality and reliability.

Verification Space Optimization (VSO.ai)

The Synopsys.ai EDA suite includes an AI-driven functional verification (VSO.ai) capability. VSO.ai helps in early design validation by analyzing and verifying the chip design against the design specification and requirements. Design flaws that crept in during the specification-to-design phase are identified early in the project, reducing the time and cost of design iterations further down the line.

The tool infers the needed coverage from the stimulus and RTL and defines the coverage requirement. Advanced root-cause analysis is performed to identify unreachable coverage. An ML-based solver targets hard-to-hit coverage for higher coverage closure. A built-in regression optimizer ensures that the highest-ROI tests are run first.

The end result: regressions that would take days to reach target coverage now take only hours to reach and exceed it.

Test Space Optimization (TSO.ai)

The number of states in which a modern chip can operate is almost infinite, and looking for potential failure modes in a complex chip is like searching for a needle in a haystack; the task is far too large for engineers to tackle manually. The Synopsys.ai suite includes an AI-driven silicon test (TSO.ai) capability. AI models analyze the chip design data and identify input conditions that could cause a chip to misbehave. The tool then uses these input conditions to generate test cases that exercise the chip's functionality efficiently and thoroughly. The models can also suggest solutions based on the root causes they identify for observed faults.

Manufacturing Lithography

Beyond arriving at a functional chip, there is also the requirement of a high-yielding chip for profitable volume production. The Synopsys.ai EDA suite includes AI-driven manufacturing solutions to accelerate the development of high-accuracy lithography models for optical proximity correction (OPC). At advanced process nodes, OPC is imperative for achieving high yield in silicon fabrication. At this time, Synopsys is working with IBM to offer AI-driven mask synthesis solutions, and it is reasonable to expect more capabilities in this area to be added to the Synopsys.ai suite over time.

Migrating Analog Designs to a Different Process

Compared to digital design, analog design has always been considered more challenging. At the same time, the pool of analog engineers has been shrinking over time. Analog design migration is a task that requires highly skilled analog engineers who are intimately knowledgeable in related process technologies. Synopsys’ AI-driven analog design migration flow enables efficient re-use of designs that need to be migrated from one TSMC process to another.

You can read the Synopsys.ai press announcement here. To learn more details, visit Synopsys.ai.

Also Read:

Power Delivery Network Analysis in DRAM Design

Intel Keynote on Formal a Mind-Stretcher

Multi-Die Systems Key to Next Wave of Systems Innovations



Podcast EP150: How Zentera Addresses Development Security with Mike Ichiriu
by Daniel Nenni on 03-31-2023 at 10:00 am

Dan is joined by Michael Ichiriu, Vice President of Marketing at Zentera Systems. Prior to Zentera, Mike was a senior executive at NetLogic Microsystems, where he played a critical role in shaping the company's corporate and product strategy. While there, he built the applications engineering team and helped lead the organization from pre-revenue to its successful IPO and eventual acquisition by Broadcom.

Dan explores the recent updates to the National Cybersecurity Strategy with Mike. The structure and implications of these new requirements are explored. Mike describes the impact the new rule will have on development infrastructure and discusses how Zentera has helped organizations achieve compliance in as little as three months.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.



Chiplets, is now their time?
by Daniel Nenni on 03-31-2023 at 6:00 am

Photo: Bapi, Dan and Sagar

Chiplets appeared on SemiWiki in 2020 and have been a top trending keyword ever since. The question is not IF chiplets will disrupt the semiconductor industry, the question is WHEN. I certainly have voiced my opinion on this (pro chiplet), but let's hear it from the experts. There was a live panel recently sponsored by Silicon Catalyst and held in Silicon Valley. These types of events were quite common before the pandemic, and it is great to see them coming back. Spending time with industry icons, with food and wine for all, is what networking is all about, absolutely.

Chiplets, is now their time?
Chiplets have gained popularity in the last few years, and recently VCs (Mayfield) have expressed interest in the technology as well. The first industry symposium on chiplets was held a few weeks ago in San Jose and was very well attended. Work on this technology has been going on for the past 20+ years. This informal panel discussed whether chiplets are for real or the next industry “fad”. It is intended to be the first of a series of events/webinars addressing this topic in 2023.

Participants:
Moderated by Dan Armbrust, Co-founder, Board Director and initial CEO of Silicon Catalyst. Dan has more than 40 years of semiconductor experience, starting with 26 years at IBM at the East Fishkill, NY and Burlington, VT fabs, followed by serving as president and CEO of Sematech, then Board Chairman of PVMC (PhotoVoltaic Manufacturing Consortium), and finally the founding of Silicon Catalyst.

Panelist Dr. Bapi Vinnakota holds a PhD in computer engineering from Princeton. Bapi is a technologist and architect (Intel/Netronome) and an academic (University of Minnesota/San Jose State University), and is currently with the Open Compute Project Foundation.

Panelist Sagar Pushpala has 40 years of experience, starting with AMD as a process engineer, followed by National Semiconductor, Maxim, Intersil, TSMC, Nuvia, and Qualcomm; he is now an active advisor, investor, and board member.

The panelists shared their personal experiences, which was quite interesting. The audience was Silicon Catalyst advisors, so the question really is: WHEN will the commercial chiplet ecosystem be ready for small to medium companies?

I attended the first Chiplet Summit referenced above and was very impressed with the content and attendance. The next one is in mid-June, so stay tuned. I have also spent many hours researching and discussing chiplets with the foundries and their top customers. Xilinx, Intel, AMD, NVIDIA, and Broadcom, among others, have implemented the chiplet concept in their internal designs. The point being, chiplets have already been proven in R&D and are in production, which answers the IF and WHEN questions for the top semiconductor companies.

As to when the commercial chiplet ecosystem will be ready, a laundry list of technical challenges was discussed, including: die-to-die communication, die interoperability, bumping, access to packaging and assembly houses, firmware, software, known good die, system test and test coverage, and EDA and simulation tools to cover multi-physics (electrical, thermal, mechanical). More importantly, these different groups, or different companies, will have to work together in a whole new chiplet way.

In my opinion this is not as hard as it sounds, and this was also covered. The foundry business is a great example: when we first started going fabless there was no commercial IP market; today we have a rich IP ecosystem anchored by foundries like TSMC. Chiplets will go through a similar process, but we really are at the beginning of that process, and that was talked about as well.

An interesting discussion point involved DARPA and the Electronics Resurgence Initiative. To me, chiplets are all about high-volume leading-edge designs and the ability to reduce design time and cost, but now I also see how the US Government can greatly benefit from chiplets and hopefully be a funding source for the ecosystem.

As much as I like Zoom and virtual conferences, there is nothing like a live gathering. The chiplet discussion will continue, and I highly recommend doing it live whenever possible. The next big event is the annual TSMC Technology Symposium ecosystem gathering; I hope to see you there.

Also Read:

CEO Interview: Dr. Chris Eliasmith and Peter Suma, of Applied Brain Research Inc.

2023: Welcome to the Danger Zone

Silicon Catalyst Angels Turns Three – The Remarkable Backstory of This Semiconductor Focused Investment Group