
TSMC 2025 Technical Symposium Briefing
by Daniel Nenni on 04-23-2025 at 11:40 am

TSMC Advanced Technology Roadmap 2025 SemiWiki

At the pre-conference briefing, Dr. Kevin Zhang gave quite a few of us media types an overview of what will be highlighted at the 2025 TSMC Technical Symposium here in Silicon Valley. Since most of the semiconductor media are not local this was a very nice thing to do. I will be at the conference and will write more tomorrow after the event. TSMC was also kind enough to share Kevin’s slides with us.

The important thing to note is that TSMC is VERY customer driven so this presentation is based on interactions with the largest semiconductor manufacturing customer base the industry has ever seen, absolutely.

As you can imagine, AI is driving the semiconductor industry now, not unlike what smartphones have done for the last two decades. The difference is that AI consumes leading-edge silicon at an alarming rate, which is a good thing for the semiconductor industry. While AI is very performance centric, it must also be power sensitive. This puts TSMC in a very strong position after all of those years of manufacturing mobile SoCs for smartphones and other battery-operated devices.

Kevin started with the AI revolution and how AI will be infused into almost every electronic device from the cloud to the edge, enabling many new applications. Personally, I think AI will transform the world in a similar fashion to smartphones, but on a much grander scale.

Not long ago the mention of the semiconductor industry hitting $1T seemed like a dream. It is one thing for industry observers like myself to say it but it is quite another when TSMC does. There is little doubt in my mind that it will happen based on my observations inside the semiconductor ecosystem.

There have been some minor changes to the TSMC roadmap. It has been extended out to 2028 adding N3C and A14. The C is a compressed version meaning the yield learning curve is at a point where the process can be further optimized for density.

A14 will certainly be a big topic of discussion at the event. A14 is TSMC’s second generation of nanosheet transistor and is considered a full node (PPA) improvement versus N2: a 10-15% speed improvement at the same power, a 25-30% power reduction at the same speed, and a 1.2X logic density improvement. The first iteration of A14 does not have backside power delivery. It was the same with N2, which was followed by A16 with Super Power Rail (SPR). SPR for A14 is expected in 2029.

The TSMC A16 specs were updated as well. A16 is the first node with SPR for reduced IR drop and improved logic density; power is delivered to the transistors through connections on the backside of the wafer. SPR is targeted at AI/HPC designs with improved signal routing and power delivery. A16 is on track for production in the second half of 2026. In comparison to N2P, A16 provides an 8-10% speed improvement at the same power and a 15-20% power reduction at the same speed.

From what I have heard TSMC N2 is yielding quite well and is on track for production later this year. The big question is who will be the first customer to ship an N2 product? Usually it is Apple, but word on the street is that the iPhones this year will again be using N3. I already have an N3 iPhone so I will skip this generation if that is the case. If Apple does an N2-based iPhone Pro Max this year then count me in!

TSMC N2P is also on track for production in the second half of 2026. As compared to N3E, N2P offers an 18% speed improvement at the same power, a 36% power reduction at the same speed, and a 1.2x density improvement.
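
To put those percentages in perspective, here is a quick back-of-the-envelope restatement (simple arithmetic on the quoted figures, not TSMC guidance): a 36% power cut at the same speed combined with a 1.2x logic density gain means a block ported from N3E to N2P at the same frequency draws roughly 0.64x the power in roughly 0.83x the area, so power density also drops slightly.

```python
# Back-of-envelope restatement of the quoted N2P-vs-N3E figures above.
# This is simple arithmetic on the published percentages, nothing more.
power_reduction = 0.36      # 36% power reduction at the same speed
density_gain = 1.2          # 1.2x logic density improvement

relative_power = 1 - power_reduction    # ~0.64x power at iso-frequency
relative_area = 1 / density_gain        # ~0.83x area for the same logic
relative_power_density = relative_power / relative_area

print(f"power: {relative_power:.2f}x, area: {relative_area:.2f}x, "
      f"power density: {relative_power_density:.2f}x relative to N3E")
```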

The most interesting thing about N2 is the rapid growth of tape-outs compared with N5 and N3. It really is astounding. Given that TSMC N3 was an absolute landslide for customer tape-outs, I had serious doubts that we would ever see a repeat of that success, but here we are. Again, in the past mobile was the driver for early tape-outs, but now we have AI/HPC as well.

Finally, as Kevin said, TSMC N3 is the last and best FinFET technology available on such a massive scale, with N3, N3E, N3P, N3X, N3A, and now N3C. Yet N2 tape-outs beat N3’s in the first year, and in the second year even more so. Simply amazing. I guess the question is who is NOT using TSMC N2?

The second part of the presentation was on packaging which will be covered in another blog. After the event I can provide even more details and get a feeling for the vibe at the event from the ecosystem. Exciting times!

UPDATE: TSMC is sharing recordings of the presentations HERE.

Also Read:

TSMC Brings Packaging Center Stage with Silicon

IEDM 2025 – TSMC 2nm Process Disclosure – How Does it Measure Up?

TSMC Unveils the World’s Most Advanced Logic Technology at IEDM

IEDM Opens with a Big Picture Keynote from TSMC’s Yuh-Jier Mii


Perspectives from Cadence on Data Center Challenges and Trends
by Bernard Murphy on 04-23-2025 at 6:00 am

Cadence Data Center Report Image

From my vantage point in the EDA foxhole it can be easy to forget that Cadence also has interests in much broader technology domains. One of these is data center modeling and optimization, through the Cadence Reality Digital Twin Platform. This is an area in which they already have a significant track record collaborating with companies like Switch, NV5, Nvidia and others, with a focus on design modeling for giant data centers. Cadence recently released a report based on a survey of hundreds of IT, facility, and business leaders on what priorities and concerns they see for future data center evolution. The report covers a lot of detail. I will highlight here just a few of the points I found intriguing, especially from my limited understanding of the data center world.

Modeling data centers

While the Cadence report jumps straight into business perspectives, I want to take a couple of paragraphs to elaborate on the purpose of modeling.  Electronic systems (servers, storage, and networking in a data center) dissipate power in the form of heat. Excess heat causes loss of performance and, in extreme cases, damage, which must be mitigated through cooling. Forced air cooling (like AC) has been the standard approach for traditional heat density found in most data centers. However, with the emergence of AI compute, where power densities are climbing towards 100X, there is growing demand for more energy-efficient liquid-based cooling. Heat and energy/sustainability problems have become even more pressing as these power-hungry AI servers consume more of the data center footprint.

Planning compute resource (racks, etc.) placements together with cooling resources (fans, vents, liquid cooling support, plumbing, and heat exchangers) is rather like a place-and-route problem, except that the objects to place are racks and the routing is convective/conductive/radiative heat flow away from hot devices in three dimensions, with flows through vents, up, around and above racks, and through cold plates and piping for liquid cooling. Hence the need for modeling.
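
To make the place-and-route analogy concrete, here is a deliberately tiny sketch of the placement half of the problem: assigning racks to cooling zones without exceeding each zone's heat-removal budget. The rack powers, zone names, and capacities are invented for illustration; a real digital twin replaces the simple "headroom" number with full CFD of airflow and liquid cooling loops.

```python
# Toy illustration of "place and route for heat": assign racks (each with a
# power draw) to cooling zones with limited heat-removal capacity.
# All numbers below are invented for illustration only.

rack_powers_kw = [12, 30, 8, 45, 20, 15, 60, 10]       # per-rack heat load
zone_capacity_kw = {"zone_a": 90, "zone_b": 90, "zone_c": 60}

def place_racks(racks, zones):
    load = {z: 0.0 for z in zones}
    placement = {}
    # Greedy: place the hottest racks first, each into the zone with the most
    # remaining cooling headroom. A real tool would evaluate actual heat flow.
    for i, p in sorted(enumerate(racks), key=lambda x: -x[1]):
        zone = max(zones, key=lambda z: zones[z] - load[z])
        if load[zone] + p > zones[zone]:
            raise RuntimeError(f"rack {i} ({p} kW) does not fit anywhere")
        placement[i] = zone
        load[zone] += p
    return placement, load

placement, load = place_racks(rack_powers_kw, zone_capacity_kw)
for zone, used in load.items():
    print(f"{zone}: {used:.0f} / {zone_capacity_kw[zone]} kW used")
```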

The need to evolve

Whether for on-premises data centers, cloud services, or colocation services (guaranteed capacity on my hardware in your data center, with you taking care of the service), almost all users and suppliers of data center services want to see continued innovation. (The report focuses on hardware, the stuff that consumes capital cost and power, not software layers enabling hybrid cloud, for example.)

Unsurprisingly, how much that wish translates into action depends on whether the service provider is a profit center (a cloud or colocation service) or a cost center (on-premises). Profit centers must keep moving ahead to stay competitive, whereas cost centers must justify investments against imperfectly quantified future benefits. Also no surprise, much of the demand to evolve comes from a need to increase energy efficiency and a need to add or increase AI support, two requirements fighting against each other.

Adding to the challenge, while some innovation can be introduced incrementally, significant improvements demand more significant capital investment, for example investments in local renewable energy options, high-density servers, and liquid cooling which may require major infrastructure rework. A big step forward when implemented but a big cost to get there.

Innovation at this level almost certainly demands adding expert staff, for AI certainly, and for digital twin planning – these kinds of improvements must be grounded in certainty that they will deliver in practice.

It’s easy to see how for smaller data centers an ardent desire to improve can run into a brick wall of budget and staffing constraints. Even hyperscaler centers must plan carefully to maximize continuing value of existing assets.  One interesting insight from the report is that data centers in South America are claiming high confidence in their ability to innovate, I would assume because many of these are quite new and have been able to design and hire from scratch to meet state-of-the-art objectives.

Meaningful improvements must start with digital twins

For small enterprises, AI needs may already be pushing some compute loads (at least training loads) to the cloud or colocation services.

For larger enterprises there are good reasons to maintain and expand on-premises (and perhaps colocation) options, but they must also step up to some of the kinds of investments already being made by profit-driven data centers, if not at the same scale. Meanwhile profit-based data center enterprises are already there and fully familiar with these needs.

Whatever the business motivation, any enterprise planning new or significantly upgraded capability is doing so using digital twin modeling. Automakers, aircraft makers, factory builders and others are all moving to digital twins to optimize and continue to refine the efficiency of their businesses. This is an unavoidable component of planning today.

Interesting white paper. You can access it HERE.

Also Read:

Designing and Simulating Next Generation Data Centers and AI Factories

How Cadence is Building the Physical Infrastructure of the AI Era

Big Picture PSS and Perspec Deployment


Semiconductor Tariff Impact
by Bill Jewell on 04-22-2025 at 3:00 pm

US Semiconductor Imports 2024 SemiWiki

President Donald Trump initially excluded semiconductors from his latest round of U.S. tariffs. However, he could put tariffs on semiconductors in the future. If tariffs are placed on semiconductors imported to the U.S., how would that affect U.S.-based semiconductor companies? The chart below shows U.S. semiconductor imports for the year 2024. 64% of imports are from just four countries: Malaysia, Taiwan, Thailand, and Vietnam. China accounts for only 3% of imports. Semiconductors made in China generally come into the U.S. as components in finished electronics equipment such as PCs and smartphones.

Why are these four countries such a significant portion of U.S. semiconductor imports? Except for Taiwan, they do not have significant wafer fabs. However, they account for a major portion of semiconductor assembly and test (A&T) facilities. These facilities take wafers from fabs, assemble them into packages, and test to see if they meet specifications. These A&T facilities may belong to the semiconductor manufacturer (IDM) or may be owned by outsourced assembly and test (OSAT) companies. The chart below from SEMI shows the distribution of these facilities. China, Taiwan and Southeast Asia account for 70% of A&T facilities.

The major U.S.-based IDMs all have most of their A&T facilities outside of the U.S., as shown below:

Fabless U.S.-based companies such as Nvidia, Qualcomm, Broadcom and AMD primarily use TSMC’s foundry services. TSMC mainly uses its own A&T facilities located in Taiwan.

Thus, if tariffs are placed on semiconductor imports to the U.S., it would drive up costs for U.S.-based semiconductor companies. The U.S. companies with their own fabs have most of their fab capacity in the U.S., but have the vast majority of their A&T capacity outside of the U.S. TSMC is building fabs in the U.S., but currently has no A&T facilities in the U.S.

A solution to avoid tariffs would be for companies to build more A&T facilities in the U.S. However, it will take significant time and money to build these facilities. Below are proposed new A&T facilities announced in the last few years.

Based on these projects, it takes two to three years to build a new A&T facility. The cost could be over $4 billion. Only two of these new facilities are in the U.S. Amkor, an OSAT company, is building an A&T facility to support TSMC’s fabs in Arizona. Integra Technologies, an OSAT company, announced plans for an A&T facility in Kansas, but the project has been delayed. Intel’s A&T facility in Poland has been delayed at least two years.

Significant cost differences exist in building facilities in the U.S. and Europe versus Asia. The three A&T facilities planned for the U.S. and Europe have an average estimated cost of $3 billion and average employment of 1,900 people. The three facilities planned for Asia have an average cost of $840 million and average employment of 3,500 people.

Any tariffs placed on semiconductor imports to the U.S. would raise costs for most U.S.-based semiconductor companies as well as foreign companies. If U.S. companies decide to build more A&T facilities in the U.S., it will take several years and drive up A&T costs.

Also Read:

Weak Semiconductor Start to 2025

Thanks for the Memories

Semiconductors Slowing in 2025


Designing and Simulating Next Generation Data Centers and AI Factories
by Kalar Rajendiran on 04-22-2025 at 10:00 am

Digital Twin and the AI Factory Lifecycle

At NVIDIA’s recent GTC conference, a Cadence-NVIDIA joint session provided insights into how AI-powered innovation is reshaping the future of data center infrastructure. Led by Kourosh Nemati, Senior Data Center Cooling and Infrastructure Engineer from NVIDIA and Sherman Ikemoto, Sales Development Group Director from Cadence, the session dove into the critical challenges of building AI Factories, which are next-generation data centers optimized for accelerated computing. The talks showcased how digital twins and industrial simulation are transforming the design, deployment, and operation of these complex systems.

The Rise of the AI Factory

Kourosh opened the session by spotlighting how the data centers of the future aren’t just about racks and power; they need to be treated as AI Factories. These next-generation data centers are highly dynamic, compute-dense environments built to run massive AI workloads, support real-time inference, and train foundational models that power everything from drug discovery to autonomous systems. But with this transformation comes a new set of challenges. Designing, building, and operating an AI Factory requires multidisciplinary coordination between power, cooling, networking, and compute — with significantly tighter tolerances and higher performance demands than traditional data centers.

This level of complexity requires a new approach to designing, building and operating. An AI Factory Digital Twin is needed to simulate and manage everything from physical infrastructure to real-time operations.

Simulation: A New Era in Data Center Design

The centerpiece of this vision is NVIDIA’s AI Factory Digital Twin, built on the Omniverse platform and powered by the OpenUSD (Universal Scene Description) framework. It’s more than just a virtual replica: it’s a continuously updating simulation engine that integrates mechanical, electrical, and thermal data into a single, unified environment. AI factories demand extreme levels of optimization to manage power density, thermal load, and operational efficiency. By using simulation in the design phase, engineers can evaluate “what-if” scenarios, test control logic, and spot failure points long before equipment is installed.

This approach helps accelerate deployment timelines and reduce operational risk.
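
As a flavor of the kind of "what-if" question a digital twin answers automatically, here is a minimal sketch that checks whether a cooling plant still covers the IT heat load if any single coolant distribution unit (CDU) fails. The figures and the N+1 layout are illustrative assumptions, not data from the session.

```python
# Minimal "what-if" check of the sort a digital twin automates: if one CDU
# fails, can the remaining units still absorb the IT heat load?
# All figures are illustrative assumptions.

it_load_kw = 1800                       # total heat from servers
cdu_capacity_kw = [650, 650, 650, 650]  # four CDUs, intended as N+1

def survives_single_failure(load, capacities):
    # Try failing each unit in turn and check the remaining capacity.
    for failed in range(len(capacities)):
        remaining = sum(c for i, c in enumerate(capacities) if i != failed)
        if remaining < load:
            return False, failed
    return True, None

ok, failed = survives_single_failure(it_load_kw, cdu_capacity_kw)
print("tolerates any single CDU failure" if ok else f"fails if CDU {failed} is lost")
```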

Cadence: Technology Behind the Digital Twin

Following Kourosh’s talk, Sherman detailed how multiphysics simulation powers the AI Factory Digital Twin. He described the need to move beyond siloed workflows and bring disciplines such as electrical, thermal, and structural analysis, which often operated independently in the past, into a coordinated, data-driven design process. With traditional design tools, each team optimizes its own domain, but the pieces may not work efficiently together as a system. This is why digital twins become mission critical.

Cadence’s advanced simulation technology is a core enabler of NVIDIA’s digital twin strategy. From detailed power integrity models to dynamic thermal simulations, these tools provide the physical accuracy needed to make decisions early, fast, and confidently.

Sherman also emphasized Cadence’s commitment to interoperability. As a founding member of the OpenUSD-based digital twin standards group, Cadence is helping define how simulation data integrates with 3D models, real-time telemetry, and operational software.

Ecosystem Collaboration

Another major theme of the joint session was the importance of collaboration across the broader data center ecosystem. AI factories are not just a design challenge but a supply chain and operational challenge as well.

To this end, partners like Foxconn and Vertiv are playing a critical role. Foxconn, with its global manufacturing capabilities, is helping accelerate the production of modular AI factory components. Vertiv, a leader in power and cooling infrastructure, is working closely with NVIDIA and Cadence to simulate real-world behavior of critical equipment within the twin, ensuring that systems behave predictably under peak AI loads.

By simulating components like Coolant Distribution Units (CDUs), Power Distribution Units (PDUs), and Heating, Ventilation and Air Conditioning (HVAC) systems as part of the broader twin, these partners enable end-to-end validation of system behavior. This is a huge step forward in building next generation data centers that are both resilient and responsive to changing demands.

Summary

AI factories are complex, multidisciplinary systems that require new tools, new thinking, and deep collaboration. By integrating simulation, open standards, and a robust partner ecosystem, Cadence is collaborating with NVIDIA and the broader ecosystem to lay the foundation for a new era of AI infrastructure. Their tools not only reduce costs and risk, but also accelerate the delivery of AI-powered innovation across industries from healthcare and manufacturing to energy, finance, and scientific research.

Also Read:

How Cadence is Building the Physical Infrastructure of the AI Era

Big Picture PSS and Perspec Deployment

Metamorphic Test in AMS. Innovation in Verification


CEO Interview with Dr. Michael Förtsch of Q.ANT
by Daniel Nenni on 04-22-2025 at 6:00 am


Dr. Michael Förtsch, CEO of Q.ANT, is a physicist and innovator driving advancements in photonic computing and sensing technologies. With a PhD from the Max Planck Institute for the Science of Light, he leads Q.ANT’s development of Thin-Film Lithium Niobate (TFLN) technology, delivering groundbreaking energy efficiency and computational power for AI and data center applications.

Tell us about your company.

Q.ANT is a deep-tech company pioneering photonic computing and quantum sensing solutions to address the challenges of the next computing era. Our current, primary area of focus is developing and industrializing photonic processing technologies for advanced applications in AI and high-performance computing (HPC).  

Photonic computing is the next paradigm shift in AI and HPC. By using light (photons) instead of electrons, we overcome the limitations of traditional semiconductor scaling and can deliver radically higher performance with lower energy consumption. 

Q.ANT’s first photonic AI processors are based on the industry-standard PCI Express interface and include the associated software control (a plug-and-play solution), so they integrate easily into HPC environments as they are currently set up. We know ease of integration is critical for any new technology to get adopted.

At the core of our technology is Thin-Film Lithium Niobate (TFLN), a breakthrough material that enables precise and efficient light-based computation. Our photonic Native Processing Units (NPUs) exploit the native parallelism of light, processing multiple calculations simultaneously and dramatically improving efficiency and performance for AI and data-intensive applications.
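
As a conceptual illustration only (not Q.ANT's actual programming model), the appeal of an analog photonic unit is that a matrix-vector product is applied in one optical pass rather than as a long sequence of digital multiply-accumulates. The sketch below emulates such a pass as a single matrix multiply with multiplicative noise and checks it against a digital reference; the matrix size and noise level are assumptions.

```python
# Conceptual sketch: emulate one analog matrix-vector pass and compare it to
# a digital reference. The noise model and dimensions are assumptions made
# purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(256, 256))   # weight matrix "programmed" into the device
activations = rng.normal(size=256)

def photonic_mvm(w, x, relative_noise=1e-2):
    """Emulate one analog matrix-vector pass with multiplicative noise."""
    ideal = w @ x
    return ideal * (1 + relative_noise * rng.normal(size=ideal.shape))

digital = weights @ activations
analog = photonic_mvm(weights, activations)
err = np.linalg.norm(analog - digital) / np.linalg.norm(digital)
print(f"relative error of the emulated analog pass: {err:.4f}")
```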

Founded in 2018, Q.ANT is headquartered in Stuttgart, Germany, and is at the forefront of scaling photonic computing for real-world adoption. 

What problems are you solving?

The explosive growth of AI and HPC is starting to exceed what is possible with conventional semiconductors, creating four urgent challenges: 

1. Unmanageable energy consumption – AI data centers require vast amounts of power, with GPUs consuming over 1.2 kW each. The industry is now considering mini nuclear plants just to meet demand. 

2. Processing limitations – Traditional chips rely on electronic bottlenecks that struggle to keep up with AI’s growing complexity. 

3. Space limitations – Physical space constraints pose a significant challenge as data centers struggle to accommodate the increasing number of server racks needed to meet rising performance demands.  

4. Escalating manufacturing costs – Shrinking traditional GPU technology to the next smaller nodes demands massive investments in cutting-edge manufacturing, often reaching billions.  

Q.ANT is tackling these challenges head-on, redefining computing efficiency by transitioning from electron-based to photon-based processing. Our photonic processors not only enhance performance but also dramatically reduce operational costs and energy consumption—delivering 30X energy savings and 50X greater computational density.  

Also in manufacturing, Q.ANT has recently showcased a breakthrough: its technology can be produced at significantly lower costs by repurposing existing CMOS foundries that operate with 1990s-era technologies. These breakthroughs pave the way for more sustainable, scalable AI and high-performance computing. 

What application areas are your strongest?

Q.ANT’s photonic processors dramatically enhance efficiency and processing power, making them ideal for compute-intensive applications such as AI model training, data center optimization, and advanced scientific computing. By leveraging the inherent advantages of light, Q.ANT enables faster, more energy-efficient processing of complex mathematical operations. Unlike traditional architectures, our approach replaces linear functions with non-linear equations, unlocking substantial gains in computational performance and redefining how businesses tackle their most demanding workloads. 

What keeps your customers up at night?

Q.ANT’s customers are primarily concerned with the increasing energy demands and limitations of current data processing and sensing technologies. Data center managers, AI infrastructure providers, and industrial innovators face mounting challenges in performance, cost, and scalability, while traditional semiconductor technology struggles to keep pace. This raises a number of urgent concerns, such as: 

  • Increasing power demands – The cost and energy required to run AI at scale are becoming unsustainable. 
  • Processing power – AI workloads require ever-increasing computational power, but scaling with conventional chips is costly and inefficient. 
  • Scalability – Businesses need AI architectures that can grow without exponential increases in power consumption and operational expenses. 
  • Space  – Expanding data centers to accommodate AI’s growing infrastructure is becoming increasingly difficult. 

Q.ANT’s photonic computing solutions directly address these challenges by introducing a more efficient, high-performance approach that provides the following benefits:  

  • Radical energy efficiency – Our photonic processors reduce AI inference energy consumption by a factor of 30. 
  • Faster, more efficient processing – Native non-linear computation accelerates complex workloads with superior efficiency in cost, power, and space utilization. 
  • Seamless integration – Our PCIe-based Native Server Solution is fully compatible with x86 architectures and integrates easily into existing data centers. 
  • Optimized for data center space – With significantly fewer servers required to achieve the same computational power, Q.ANT’s solution helps alleviate space constraints while delivering superior performance. 

By rethinking computing from the ground up, Q.ANT enables AI-driven businesses to scale sustainably, reduce operational costs, and prepare for the future of high-performance computing. 

What does the competitive landscape look like, and how do you differentiate?

The competitive landscape is populated by companies exploring photonic and quantum technologies. However, many competitors remain in the research phase or focus on long-term promises. Q.ANT differentiates itself by focusing on near-term, high-impact solutions. We use light for data generation and processing, which enhances energy efficiency and creates opportunities that traditional semiconductor-based solutions cannot achieve.  

Q.ANT is different. We are: 

  • One of the few companies delivering real photonic processors today 
  • Developing photonic chips using TFLN, a material that allows ultra-fast, precise computations without generating excess heat 
  • TFLN experts – With six years of expertise in TFLN-based photonic chips and our own pilot line, we have a significant first-mover advantage in commercial photonic computing. 

What new features/technology are you working on?

We are focused on revolutionizing AI processing in HPC datacenters by developing an entirely new technology: analog computing units packaged as server solutions called Native Processing Servers (NPS). These servers promise to outperform today’s chip technologies in both performance and energy efficiency. They also integrate seamlessly with existing infrastructure, using the standardized PCI Express interface for easy plug-and-play compatibility with HPC server racks. When it comes to data center deployment requirements, our NPS meets the standards you’d expect from leading vendors. The same ease applies to software: existing source code can run on our systems without modification. 

How do customers normally engage with your company?

Businesses, researchers, and AI leaders can contact Q.ANT via email at: native-computing@qant.gmbh and engage with the company in a number of ways: 

  • Direct purchase of photonic processors – Our processors are now available for purchase for experimentation and integration into HPC environments. 
  • Collaborative innovation – We work with industry and research partners to develop next-gen AI applications on our photonic processors. 
  • Community outreach – We participate in leading tech events to advance real-world photonic computing adoption. 

With AI and HPC demand growing exponentially, Q.ANT is a key player in shaping the next era of computing. 

Also Read:

Executive Interview with Leo Linehan, President, Electronic Materials, Materion Corporation

CEO Interview with Ronald Glibbery of Peraso

CEO Interview with Pierre Laboisse of Aledia


Verifying Leakage Across Power Domains
by Daniel Payne on 04-21-2025 at 10:00 am

leakage contention

IC designs need to operate reliably under varying conditions and avoid inefficiencies like leakage across power domains. But how do you verify that the connections between IP blocks have been made properly? This is where reliability verification, Electrical Rule Checking (ERC) tools, and dynamic simulation all come into play, particularly when verifying leakage between power domains and other circuit reliability issues.

Let’s look at three types of leakage that need to be verified and avoided:

Parasitic leakage – parasitic body diodes that can conduct when a PMOS bulk node is not biased high enough or an NMOS bulk node is not biased low enough.

Analog gate leakage – MOSFET stacks from power to ground, floating gate inputs, high impedance states.

Digital gate leakage – power domain mismatches, missing level shifters, floating states, high impedance states.

In the following two circuits, (a) has contention when the input is high, while (b) is safe and has no contention for any input state.

Circuit simulation would only catch the contention if the stimulus drove the input to a high state. Static electrical rule check (ERC) tools do not understand states, so they would not find the contention either. Fortunately, there’s a reliability verification solution from Siemens called Insight Analyzer that does use state-based analysis and finds this contention without resorting to simulation. Circuit designers run Insight Analyzer early and often throughout their design process to quickly find and fix reliability issues, saving valuable engineering time.
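
To illustrate what "state-based analysis" means in practice, here is a toy sketch (not Siemens' algorithm, and not the circuit in the figure above) that enumerates the input states of a two-transistor network and flags states with a supply-to-ground contention path or a floating output. Real tools do this symbolically on full netlists with proper path tracing.

```python
# Toy state-based check: enumerate input states of a tiny transistor network
# and flag states where the output is driven from both supply and ground at
# once (contention) or from neither (floating). Illustrative only.
from itertools import product

# Each "device" is (type, gate_signal, node1, node2).
# A PMOS conducts when its gate is 0; an NMOS conducts when its gate is 1.
devices = [
    ("pmos", "in", "VDD", "out"),   # pull-up controlled by 'in'
    ("nmos", "en", "out", "GND"),   # pull-down controlled by 'en'
]

def conducting(dev, state):
    kind, gate, _, _ = dev
    return state[gate] == (0 if kind == "pmos" else 1)

for values in product([0, 1], repeat=2):
    state = dict(zip(["in", "en"], values))
    to_vdd = any(conducting(d, state) and "VDD" in (d[2], d[3]) for d in devices)
    to_gnd = any(conducting(d, state) and "GND" in (d[2], d[3]) for d in devices)
    if to_vdd and to_gnd:
        print(f"{state}: contention (supply-to-ground path)")
    elif not (to_vdd or to_gnd):
        print(f"{state}: 'out' is floating in this state")
```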

Using multiple power domains is a technique to significantly reduce power in complex SoCs, but the interfaces between these domains can have errors, like a missing level shifter, that need to be identified and fixed. Inter-domain leakage between different power domains is challenging to verify manually, so an automated approach is far more robust.

Chips with multiple power domains and low-power states need to operate reliably, and the Insight Analyzer tool complements the verification approaches of static checking and dynamic simulation.

Transistor-level circuit simulation with SPICE is a well-known way to verify reliability and logic operation, but how do you know whether your stimulus has exercised all the states and modes needed to detect a failure or concern? Insight Analyzer catches violations like inter-domain leakage where a backup power supply powers into a main supply that is turned off.

STMicroelectronics

Engineers from STMicroelectronics in Europe presented at the 2024 User2User conference on how the Insight Analyzer tool was used in their flow to detect high-impedance (HiZ) nets and perform ERC. They wanted to know every instance where HiZ occurred in both digital and analog circuits, whether it was persistent or transient, and when it caused issues.

Using Insight Analyzer, the ST team thoroughly verified their high-voltage, high-power, high-density designs for reliability issues. Users found that the GUI-based tool had a quick learning curve, allowing them to be operational after just a few hours of use.

Summary

Reliability and leakage issues are critical to verify and fix for multiple power domain designs. Adding reliability verification tools like Insight Analyzer provides immediate benefits to engineering teams. Finding and fixing issues earlier helps to accelerate the schedule for a project.

Read the complete White Paper on Insight Analyzer online.

Related Blogs


How Cadence is Building the Physical Infrastructure of the AI Era
by Kalar Rajendiran on 04-21-2025 at 6:00 am

Phases of AI Adoption

At the 2025 NVIDIA GTC Conference, CEO Jensen Huang delivered a sweeping keynote that painted the future of computing in bold strokes: a world powered by AI factories, built on accelerated computing, and driven by agentic, embodied AI capable of interacting with the physical world. He introduced the concept of Physical AI—intelligence grounded in real-world physics—and emphasized the shift from traditional programming to AI-developed systems trained on vast data, simulation, and models. From drug discovery to robotics to large-scale digital twins, Huang asserted that the future belongs to those who can design, simulate, and optimize both the virtual and the physical with extraordinary efficiency.

Cadence Enabling the Vision: Accelerating the Infrastructure of AI

It’s within Huang’s visionary context that Rob Knoth, Group Director of Strategy and New Ventures at Cadence, offered a look at how the company is actively enabling the future that Huang painted during GTC 2025. Knoth’s talk provided insight into how Cadence and NVIDIA are building the tools and infrastructure needed to make Huang’s future real. A collaborative approach lies at the heart of Cadence’s work across domains, whether it’s semiconductor design, robotics, data centers, or molecular biology. With the power of NVIDIA’s Grace Blackwell architecture, Cadence is pushing the boundaries of what’s possible in both speed and scale.

Computational Fluid Dynamics (CFD) Meets Accelerated Compute: The Boeing 777 Case Study

One of the most compelling examples was the simulation of a Boeing 777 during takeoff and landing, performed using Cadence’s Fidelity CFD platform on the Grace Blackwell system. This simulation tackled one of the most complex aerodynamic challenges in aviation and matched experimental results—all executed in a fraction of the time and energy traditional methods would require. It was a proof point of sustainability and efficiency through accelerated compute, echoing Huang’s keynote themes.

Designing AI Factories: The Role of Digital Twins

Knoth also emphasized Cadence’s leadership in building AI factory infrastructure, highlighting its collaboration with NVIDIA, Schneider Electric, and Vertiv to design digital twins of data centers. These high-fidelity models are powered by Cadence’s Reality Digital Twin Platform and are integral to NVIDIA’s reference blueprint for next-generation AI infrastructure. This work reflects Huang’s emphasis on AI factories as the backbone of the AI-driven economy, where every element is simulated, optimized, and iterated in digital space before deployment in the real world.

Embodied Intelligence: From Robots to Edge Systems

To realize the vision of agentic AI systems, Cadence is bringing its multi-physics solvers, which handle everything from thermal dynamics to structural stress, onto accelerated GPU architectures. These tools are essential for designing physical AI systems, such as robots, edge devices, and complex electromechanical machines that need to interact with the real world. Knoth positioned this as a foundational step in enabling the next generation of physical AI, where simulation meets embodiment.

Transforming Drug Discovery: AI Meets Molecular Science

Building on Huang’s vision of AI in life sciences, Knoth shared Cadence’s progress in molecular design and drug discovery. Cadence’s Orion molecular design platform, integrated with NVIDIA’s BioNeMo NIM microservices, brings AI into the molecular modeling process, offering tools for 3D protein structure prediction, small molecule generation, and AI-driven molecular property prediction. These advancements significantly reduce the cost, time, and risk in drug discovery—one of the most complex and resource-intensive areas of science.

The JedAI Platform: Scalable AI for Science and Engineering

Cadence’s Joint Enterprise Data & AI (JedAI) platform provides the glue between these verticals. It’s built to integrate with NVIDIA’s LLM and generative AI technologies, while offering deployment flexibility for on-prem, cloud, or hybrid environments. JedAI powers Cadence’s AI design tools across domains like digital and analog chip design, verification and debug, PCB layout, multi-physics optimization, and data center design and operation. It’s a scalable, customizable foundation to bring AI into every part of the engineering workflow.

Building an Open Ecosystem: The Alliance for OpenUSD

To support this massive transformation, Cadence and NVIDIA are helping lead efforts in the Alliance for OpenUSD (Universal Scene Description), aiming to create standardized, interoperable component models for everything from chips to entire data center environments. This initiative ties directly to Huang’s vision of an open, composable ecosystem where companies across the stack can contribute to and benefit from shared digital infrastructure standards.

Summary

From digital twins and chip design to molecular simulation and AI co-pilots, Cadence is helping architect the platforms that will power, embody, and apply AI across industries. The company is helping build the physical and digital infrastructure of the AI era.

Also Read:

Big Picture PSS and Perspec Deployment

Metamorphic Test in AMS. Innovation in Verification

Compute and Communications Perspectives on Automotive Trends


TSMC’s Innovations in Physical Design for Semiconductor Scaling
by Daniel Nenni on 04-20-2025 at 8:08 am

LC LU TSMC ISPD 2017

In a 2017 ISPD presentation, TSMC Fellow LC Lu outlined critical challenges and innovations in physical design to sustain power, speed, and area scaling trends in semiconductors. As Moore’s Law faces economic hurdles, process-design co-optimization emerges as key to extending it. Lu emphasized application-optimized platforms for mobile, high-performance computing (HPC), automotive, and IoT, balancing area, performance, and power (PPA) with functional safety and ultra-low power needs.

Semiconductor trends highlight slowing primary dimension scaling (metal, gate, fin pitches), making area reduction harder. Innovations like fin depopulation boost cell density by reducing fins from 3-4 in 16nm to 2 in 7nm, easing scaling pressure. This not only increases logic density by up to 3x but also enhances speed-power efficiency: higher-fin cells offer peak speed, while fewer fins excel at same-power speed or same-speed low power. Cell utilization rises from 70% to 80%, aided by power plan optimizations.

Power grid (PG) enhancements are pivotal for logic density. To counter IR drop, PG via counts increase across generations, but shrinking pitches harms routing. Evolving from uniform to dual M1 architectures via top-down or bottom-up co-design allows better cell placement freedom. Power stubs over straps maximize cells under PG, and staggered pins add access points (from 5 to 6), minimizing unused space.

Extreme Ultraviolet (EUV) lithography further densifies routing. Compared to inverse lithography or multiple patterning, EUV single patterning and directed self-assembly (DSA) enable finer pitches (12-16nm half-pitch). Shifting metal:poly pitch from 1:1 to 2:3 provides more metal resources, reducing coupling capacitance and boosting routing tracks, though requiring dual library sets for offsets.

Performance scaling grapples with exponential metal/via resistance growth—up to 3x from 40nm to 5nm—dominating delay (50% BEOL impact at 5nm). Via pillars mitigate this: large drivers, thick upper metals, and pillar structures slash transistor, wire, and via resistance. Automated EDA flows insert electromigration (EM) and performance via pillars across placement, CTS, and routing, reducing BEOL delay impact significantly.
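
A rough way to see why via pillars help is a lumped Elmore delay estimate: stacking several vias in parallel divides the via resistance seen by the load, trimming the BEOL contribution to path delay. The resistance and capacitance values below are invented for illustration and are not TSMC data.

```python
# Back-of-envelope illustration of the via pillar benefit using a lumped
# Elmore delay model. All element values are illustrative assumptions.

def elmore_delay(r_driver, r_wire, c_wire, r_via, c_load):
    # Driver -> distributed wire -> via -> load, first-order Elmore estimate.
    return (r_driver * (c_wire + c_load)
            + r_wire * (c_wire / 2 + c_load)
            + r_via * c_load)

r_single_via = 40.0           # ohms for one via (illustrative)
r_pillar = r_single_via / 4   # four vias in parallel acting as a "pillar"

base = elmore_delay(r_driver=100, r_wire=200, c_wire=5e-15,
                    r_via=r_single_via, c_load=10e-15)
pillar = elmore_delay(r_driver=100, r_wire=200, c_wire=5e-15,
                      r_via=r_pillar, c_load=10e-15)
print(f"delay without pillar: {base*1e12:.2f} ps, with pillar: {pillar*1e12:.2f} ps")
```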

Power scaling leverages ultra-low voltage (ULV) for IoT efficiency, but challenges functionality and variation. Solutions include skew/fine-grained cells, high-stack designs, transmission gates, and multi-bit flops to curb delay degradation. Flop robustness demands high-sigma checks for write paths. Delay variation explodes at low VDD, turning non-Gaussian; new models split distributions into early/late for accurate STA, aligning with Monte Carlo simulations via advanced statistical OCV.
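
The "split distribution" point can be illustrated with a small Monte Carlo sketch: sampling a skewed delay distribution (a log-normal is used here purely as an assumed stand-in for low-VDD behavior) shows that the early-side and late-side 3-sigma offsets from the mean are very different, which is exactly why a single Gaussian sigma misrepresents timing at ultra-low voltage.

```python
# Monte Carlo sketch of asymmetric delay variation at low VDD. The log-normal
# model and the numbers are illustrative assumptions, not foundry data.
import numpy as np

rng = np.random.default_rng(1)
nominal_ps = 100.0
delays = nominal_ps * rng.lognormal(mean=0.0, sigma=0.35, size=100_000)

mean = delays.mean()
p_early = np.percentile(delays, 0.135)    # ~ -3 sigma point of the distribution
p_late = np.percentile(delays, 99.865)    # ~ +3 sigma point
print(f"mean {mean:.1f} ps, early offset {mean - p_early:.1f} ps, "
      f"late offset {p_late - mean:.1f} ps (asymmetric => non-Gaussian)")
```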

Heterogeneous integration via 3D packaging achieves low-cost, high-performance systems. InFO (Integrated Fan-Out) and CoWoS (Chip-on-Wafer-on-Substrate) outperform traditional SIP/MCM, enabling vertical stacking for better form factors and bandwidth. InFO variants (PoP, Multi-chip) suit small dies (<400mm², <1000 I/Os), while CoWoS handles large HPC integrations (>1000mm², >3000 I/Os). Co-design flows incorporate inter-die DRC/LVS, SI/PI simulations, thermal-aware EM/IR, yielding 12% better thermal dissipation and 5-10% voltage droop reduction in InFO-PoP with IPD.

Machine learning (ML) tackles rising physical design complexity. TSMC’s platform extracts features from APR databases, trains models to predict routing congestion and detours, eliminating biases in traditional EDA heuristics. This enables pre-route optimizations, like accurate ARM A72 clock gating, boosting post-route speed by 40-150MHz with 95% detour prediction accuracy.
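
The flow described above can be sketched as a standard supervised-learning loop: extract per-region features from a placed design, train on designs where post-route congestion is known, then predict congestion for new placements before routing. The feature names, synthetic data, and choice of a random-forest model below are illustrative assumptions; TSMC's actual platform and feature set are not described here.

```python
# Sketch of congestion prediction as supervised learning on placement features.
# Features and data are synthetic stand-ins for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
n_regions = 5000
# Hypothetical per-region features: pin density, cell utilization, net crossings.
X = np.column_stack([
    rng.uniform(0, 1, n_regions),        # pin density
    rng.uniform(0.5, 0.95, n_regions),   # cell utilization
    rng.poisson(20, n_regions),          # nets crossing the region
])
# Synthetic "ground truth" congestion with some nonlinearity and noise.
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] ** 2 + 0.01 * X[:, 2] + rng.normal(0, 0.05, n_regions)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X[:4000], y[:4000])            # train on "past designs"
pred = model.predict(X[4000:])           # predict for a "new placement"
print("mean absolute error on held-out regions:", np.abs(pred - y[4000:]).mean())
```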

In conclusion, these innovations—fin depopulation, EUV, via pillars, ULV modeling, 3D integration, and ML—extend Moore’s Law through EDA-physical design synergy. As nodes shrink, such co-optimizations ensure complex 3D SoCs meet PPA demands, driving future mobile, HPC, and IoT advancements.



Podcast EP284: Current Capabilities and Future Focus at Intel Foundry Services with Kevin O’Buckley
by Daniel Nenni on 04-18-2025 at 10:00 am

Dan is joined by Kevin O’Buckley, senior vice president and general manager of Foundry Services at Intel Corporation. In this role, he is responsible for driving continued growth for Intel Foundry and its differentiated systems foundry offerings, which go beyond traditional wafer fabrication to include packaging, chiplet standards and software, as well as U.S.- and Europe-based capacity.

Dan explores the changes at Intel Foundry Services with Kevin as the company moves into a broad-based foundry model. Kevin describes the new culture at Intel that focuses on ecosystem partnerships to bring differentiated, cutting edge fabrication and packaging capabilities to a broad customer base. Kevin spends some time on the attributes of the Intel 18A process node and the new capabilities it brings to market in areas such as gate-all-around and backside metal technologies.

Kevin also discusses what lies ahead at Intel in the areas of advanced technology and advanced packaging. The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Andes RISC-V CON in Silicon Valley Overview
by Daniel Nenni on 04-18-2025 at 6:00 am


RISC-V conferences have been at full capacity and I expect this one will be well attended as well. Andes is the biggest name in RISC-V. The most notable thing about RISC-V conferences is the content. Not only is the content deep, it is international, coming from the top companies in the industry. It is hard to find a design win these days without RISC-V content, absolutely.

We did a podcast with Andes covering the conference: Podcast EP282: An Overview of Andes Focus on RISC-V and the Upcoming RISC-V CON

Registration: visit the event website

An additional keynote has been added:

Nvidia’s Frans Sijstermans (VP of Multimedia Architecture/ASIC) will present “Use of RISC-V in NVIDIA’s Deep-Learning Accelerator.”

This one should be a big draw in addition to the keynote from Andes co-founder, Chairman and CEO Frankwell Lin, “Celebrating 20 years of driving SIP innovation and 15 years of pioneering RISC-V”.

Here are the other keynotes:

– Charlie Su, President and CTO, Andes Technology – Provides insights on advancing modern computing with Andes RISC-V processor solutions.

– Paul Master, Co-founder and CTO, Cornami – Presenting Fully Homomorphic Encryption, the Holy Grail.

Fireside chat: What’s coming in AI? – Facilitated by Charlie Cheng, Board Advisor at Andes, with:

– Jeff Bier, Founder, Edge AI and Embedded Vision Alliance

– Pete Warden, Founder & CEO, Useful Sensors

There are two full tracks this year, the main conference track and a developer track. The developer track includes hands-on technical training:

Developer Track – Hands-on Technical Training (Limited Seats!)

Four in-depth, 1-hour sessions designed for engineers who want to dive deep into RISC-V technology. Bring your laptop fully charged!

  • Optimization with RISC-V Vector ISA – Curious about RISC-V Vectors (RVV) but unsure how it works? This session covers the basics and how it boosts vector performance. You will get hands-on experience writing and running vector software using AndeSight tools on RVV-capable processors.
  • IAR Professional Tools for RISC-V – Learn how professional tools can help you debug your application more quickly and efficiently, accelerating your time to market. Also, discover how prequalified Functional Safety tools can enhance your products.
  • Create Your Own RISC-V Custom Instructions – Want to create your own RISC-V Instructions to accelerate your application? This session explores Andes’ Automated Custom Extensions (ACE) for enhancing the RISC-V ISA, with a hands-on demo using Andes Copilot, which automates much of the process.
  • Unleashing the Power of Heterogeneous Computing: Building a complex SoC with high compute and memory demands? Avoid performance surprises by understanding how CPU and GPU subsystems interact under real workloads and memory hierarchies. This session demonstrates how trace-based simulation can uncover bottlenecks, validate compute and cache architecture decisions, and improve heterogeneous system performance.

I know quite a few of the presenters on the main conference track and I can tell you it is an all-star cast. The main conference welcomes all attendees, featuring over ten sessions and speeches that cover the RISC-V market and the development of SoCs using RISC-V across AI, automotive, application processing, communications, and more. Participants will also have opportunities to network with speakers and exhibitors from over twenty sponsoring companies offering IP, software, tools, services, and products.

This really is an excellent networking event at one of my favorite Silicon Valley locations, and it includes breakfast, lunch, and a networking reception.

Event Details:

– Location: DoubleTree by Hilton, San Jose, CA
– Time: 8:30 – 6:00 PDT
– Admission: RISC-V CON is free to attend
– Registration: visit the event website

About Andes Technology

As a Founding Premier member of RISC-V International and a leader in commercial CPU IP, Andes Technology is driving the global adoption of RISC-V. Andes’ extensive RISC-V Processor IP portfolio spans from ultra-efficient 32-bit CPUs to high-performance 64-bit Out-of-Order multiprocessor coherent clusters.

With advanced vector processing, DSP capabilities, the powerful Andes Automated Custom Extension (ACE) framework, an end-to-end AI hardware/software stack, ISO 26262 certification with full compliance, and a robust software ecosystem, Andes unlocks the full potential of RISC-V, empowering customers to accelerate innovation across AI, automotive, communications, consumer electronics, data centers, and mobile devices. Over 16 billion Andes-powered SoCs are driving innovations globally. Discover more at www.andestech.com and connect with Andes on LinkedIn, X, and YouTube.

Also Read:

Webinar: Unlocking Next-Generation Performance for CNNs on RISC-V CPUs

Relationships with IP Vendors

Changing RISC-V Verification Requirements, Standardization, Infrastructure