

Designing and Simulating Next Generation Data Centers and AI Factories
by Kalar Rajendiran on 04-22-2025 at 10:00 am

Digital Twin and the AI Factory Lifecycle

At NVIDIA’s recent GTC conference, a Cadence-NVIDIA joint session provided insights into how AI-powered innovation is reshaping the future of data center infrastructure. Led by Kourosh Nemati, Senior Data Center Cooling and Infrastructure Engineer from NVIDIA and Sherman Ikemoto, Sales Development Group Director from Cadence, the session dove into the critical challenges of building AI Factories, which are next-generation data centers optimized for accelerated computing. The talks showcased how digital twins and industrial simulation are transforming the design, deployment, and operation of these complex systems.

The Rise of the AI Factory

Kourosh opened the session by spotlighting how the data centers of the future aren't just about racks and power; they need to be treated as AI Factories. These next-generation data centers are highly dynamic, compute-dense environments built to run massive AI workloads, support real-time inference, and train foundational models that power everything from drug discovery to autonomous systems. But with this transformation comes a new set of challenges. Designing, building, and operating an AI Factory requires multidisciplinary coordination between power, cooling, networking, and compute, with significantly tighter tolerances and higher performance demands than traditional data centers.

This level of complexity requires a new approach to designing, building and operating. An AI Factory Digital Twin is needed to simulate and manage everything from physical infrastructure to real-time operations.

Simulation: A New Era in Data Center Design

The centerpiece of this vision is NVIDIA's AI Factory Digital Twin, built on the Omniverse platform and powered by the OpenUSD (Universal Scene Description) framework. It is more than just a virtual replica: it is a continuously updating simulation engine that integrates mechanical, electrical, and thermal data into a single, unified environment. AI factories demand extreme levels of optimization to manage power density, thermal load, and operational efficiency. By using simulation in the design phase, engineers can evaluate "what-if" scenarios, test control logic, and spot failure points long before equipment is installed.

This approach helps accelerate deployment timelines and reduce operational risk.

Cadence: Technology Behind the Digital Twin

Following Kourosh's talk, Sherman detailed how multiphysics simulation powers the AI Factory Digital Twin. He described the need to move beyond siloed workflows and bring disciplines such as electrical, thermal, and structural engineering, which often operated independently in the past, into a coordinated, data-driven design process. With traditional design tools, each team optimizes its own domain, but when those domains are put together they may not work efficiently as a system. This is why digital twins become mission critical.

Cadence’s advanced simulation technology is a core enabler of NVIDIA’s digital twin strategy. From detailed power integrity models to dynamic thermal simulations, these tools provide the physical accuracy needed to make decisions early, fast, and confidently.

Sherman also emphasized Cadence’s commitment to interoperability. As a founding member of the OpenUSD-based digital twin standards group, Cadence is helping define how simulation data integrates with 3D models, real-time telemetry, and operational software.

Ecosystem Collaboration

Another major theme of the joint session was the importance of collaboration across the broader data center ecosystem. AI factories are not just a design challenge but a supply chain and operational challenge as well.

To this end, partners like Foxconn and Vertiv are playing a critical role. Foxconn, with its global manufacturing capabilities, is helping accelerate the production of modular AI factory components. Vertiv, a leader in power and cooling infrastructure, is working closely with NVIDIA and Cadence to simulate real-world behavior of critical equipment within the twin, ensuring that systems behave predictably under peak AI loads.

By simulating components like Coolant Distribution Units (CDUs), Power Distribution Units (PDUs), and Heating, Ventilation and Air Conditioning (HVAC) systems as part of the broader twin, these partners enable end-to-end validation of system behavior. This is a huge step forward in building next generation data centers that are both resilient and responsive to changing demands.

Summary

AI factories are complex, multidisciplinary systems that require new tools, new thinking, and deep collaboration. By integrating simulation, open standards, and a robust partner ecosystem, Cadence is collaborating with NVIDIA and the broader ecosystem to lay the foundation for a new era of AI infrastructure. Their tools not only reduce costs and risk, but also accelerate the delivery of AI-powered innovation across industries from healthcare and manufacturing to energy, finance, and scientific research.

Also Read:

How Cadence is Building the Physical Infrastructure of the AI Era

Big Picture PSS and Perspec Deployment

Metamorphic Test in AMS. Innovation in Verification



CEO Interview with Dr. Michael Förtsch of Q.ANT
by Daniel Nenni on 04-22-2025 at 6:00 am


Dr. Michael Förtsch, CEO of Q.ANT, is a physicist and innovator driving advancements in photonic computing and sensing technologies. With a PhD from the Max Planck Institute for the Science of Light, he leads Q.ANT’s development of Thin-Film Lithium Niobate (TFLN) technology, delivering groundbreaking energy efficiency and computational power for AI and data center applications.

Tell us about your company.

Q.ANT is a deep-tech company pioneering photonic computing and quantum sensing solutions to address the challenges of the next computing era. Our current primary focus is developing and industrializing photonic processing technologies for advanced applications in AI and high-performance computing (HPC).

Photonic computing is the next paradigm shift in AI and HPC. By using light (photons) instead of electrons, we overcome the limitations of traditional semiconductor scaling and can deliver radically higher performance with lower energy consumption. 

Q.ANT's first photonic AI processors are based on the industry-standard PCI Express interface and include the associated software control (a plug-and-play solution), so they integrate easily into HPC environments as they are currently set up. We know ease of adoption is critical for any new technology.

At the core of our technology is Thin-Film Lithium Niobate (TFLN), a breakthrough material that enables precise and efficient light-based computation. Our photonic Native Processing Units (NPUs) operate using the native parallelism of light, processing multiple calculations simultaneously and dramatically improving efficiency and performance for AI and data-intensive applications.

Founded in 2018, Q.ANT is headquartered in Stuttgart, Germany, and is at the forefront of scaling photonic computing for real-world adoption. 

What problems are you solving?

The explosive growth of AI and HPC is starting to exceed what is possible with conventional semiconductors, creating four urgent challenges: 

1) Unmanageable energy consumption – AI data centers require vast amounts of power, with GPUs consuming over 1.2 kW each. The industry is now considering mini nuclear plants just to meet demand.

2) Processing limitations – Traditional chips rely on electronic bottlenecks that struggle to keep up with AI's growing complexity.

3) Space limitations – Physical space constraints pose a significant challenge as data centers struggle to accommodate the increasing number of server racks needed to meet rising performance demands.

4) Escalating manufacturing costs – Shrinking traditional GPU technology to the next smaller nodes demands massive investments in cutting-edge manufacturing, often reaching billions.

Q.ANT is tackling these challenges head-on, redefining computing efficiency by transitioning from electron-based to photon-based processing. Our photonic processors not only enhance performance but also dramatically reduce operational costs and energy consumption—delivering 30X energy savings and 50X greater computational density.  

On the manufacturing side, Q.ANT has recently showcased a breakthrough: its technology can be produced at significantly lower cost by repurposing existing CMOS foundries that operate with 1990s-era process technologies. These breakthroughs pave the way for more sustainable, scalable AI and high-performance computing.

What application areas are your strongest?

Q.ANT's photonic processors dramatically enhance efficiency and processing power, so they are ideal for compute-intensive applications such as AI model training, data center optimization, and advanced scientific computing. By leveraging the inherent advantages of light, Q.ANT enables faster, more energy-efficient processing of complex mathematical operations. Unlike traditional architectures, our approach replaces linear functions with non-linear equations, unlocking substantial gains in computational performance and redefining how businesses tackle their most demanding workloads.

What keeps your customers up at night?

Q.ANT's customers are primarily concerned with the increasing energy demands and limitations of current data processing and sensing technologies. Data center managers, AI infrastructure providers, and industrial innovators face mounting challenges in performance, cost, and scalability, and traditional semiconductor technology is struggling to keep pace. This is raising a number of urgent concerns, such as:

  • Increasing power demands – The cost and energy required to run AI at scale are becoming unsustainable. 
  • Processing power – AI workloads require ever-increasing computational power, but scaling with conventional chips is costly and inefficient. 
  • Scalability – Businesses need AI architectures that can grow without exponential increases in power consumption and operational expenses. 
  • Space  – Expanding data centers to accommodate AI’s growing infrastructure is becoming increasingly difficult. 

Q.ANT’s photonic computing solutions directly address these challenges by introducing a more efficient, high-performance approach that provides the following benefits:  

  • Radical energy efficiency – Our photonic processors reduce AI inference energy consumption by a factor of 30. 
  • Faster, more efficient processing – Native non-linear computation accelerates complex workloads with superior efficiency in cost, power, and space utilization. 
  • Seamless integration – Our PCIe-based Native Server Solution is fully compatible with x86 architectures and integrates easily into existing data centers. 
  • Optimized for data center space – With significantly fewer servers required to achieve the same computational power, Q.ANT’s solution helps alleviate space constraints while delivering superior performance. 

By rethinking computing from the ground up, Q.ANT enables AI-driven businesses to scale sustainably, reduce operational costs, and prepare for the future of high-performance computing. 

What does the competitive landscape look like, and how do you differentiate?

The competitive landscape is populated by companies exploring photonic and quantum technologies. However, many competitors remain in the research phase or focus on long-term promises. Q.ANT differentiates itself by focusing on near-term, high-impact solutions. We use light for data generation and processing, which enhances energy efficiency and creates opportunities that traditional semiconductor-based solutions cannot achieve.  

Q.ANT is different. We are: 

  • One of the few companies delivering real photonic processors today 
  • Developing photonic chips using TFLN, a material that allows ultra-fast, precise computations without generating excess heat 
  • TFLN experts: with six years of expertise in TFLN-based photonic chips and our own pilot line, we have a significant first-mover advantage in commercial photonic computing.

What new features/technology are you working on?

We are focused on revolutionizing AI processing in HPC data centers by developing an entirely new technology: analog computing units packaged as server solutions called Native Processing Servers (NPS). These servers promise to outperform today's chip technologies in both performance and energy efficiency. They also integrate seamlessly with existing infrastructure, using the standardized PCI Express interface for easy plug-and-play compatibility with HPC server racks. When it comes to data center deployment requirements, our NPS meets the standards you'd expect from leading vendors. The same ease applies to software: existing source code can run on our systems without modification.

How do customers normally engage with your company?

Businesses, researchers, and AI leaders can contact Q.ANT via email at: native-computing@qant.gmbh and engage with the company in a number of ways: 

  • Direct purchase of photonic processors – Our processors are now available for purchase, for experimentation and integration into HPC environments.
  • Collaborative innovation – We work with industry and research partners to develop next-gen AI applications on our photonic processors.
  • Community outreach – We participate in leading tech events to advance real-world photonic computing adoption. 

With AI and HPC demand growing exponentially, Q.ANT is a key player in shaping the next era of computing. 

Also Read:

Executive Interview with Leo Linehan, President, Electronic Materials, Materion Corporation

CEO Interview with Ronald Glibbery of Peraso

CEO Interview with Pierre Laboisse of Aledia



Verifying Leakage Across Power Domains
by Daniel Payne on 04-21-2025 at 10:00 am

leakage contention

IC designs need to operate reliably under varying conditions and avoid inefficiencies like leakage across power domains. But how do you verify that connections between IP blocks have been made properly? This is where reliability verification, Electrical Rule Checking (ERC) tools, and dynamic simulation all come into play, particularly when verifying leakage between power domains and other circuit reliability issues.

Let's look at three types of leakage that need to be verified and avoided:

Parasitic leakage – parasitic body diodes, such as PMOS bulk nodes that are not biased high enough or NMOS bulk nodes that are not biased low enough.

Analog gate leakage – MOSFET stacks from power to ground, floating gate inputs, high impedance states.

Digital gate leakage – power domain mismatches, missing level shifters, floating states, high impedance states.

In the following two circuits, (a) has contention when the input is high, while (b) is safe and has no contention for any input state.

Circuit simulation would only catch the contention if the input reached a high state. Static electrical rule check (ERC) tools do not understand states, so they would not find the contention either. Fortunately, there's a reliability verification solution from Siemens called Insight Analyzer that does use state-based analysis and finds this contention without resorting to simulation. Circuit designers run Insight Analyzer early and often throughout their design process to quickly find and fix reliability issues, saving valuable engineering time.
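
To make the idea of state-based checking concrete, here is a deliberately tiny Python sketch. It is not Insight Analyzer's algorithm: the two-transistor netlist, the rail-adjacent contention rule, and the simplified device model are invented for illustration, and a real tool traces full conduction paths through arbitrary transistor stacks and power domains. It only shows how enumerating logic states can expose a contention that stimulus-dependent simulation might miss.

```python
from itertools import product

# Hypothetical two-driver net "out": each device is (type, gate_net, node1, node2).
NETLIST = [
    ("PMOS", "a", "VDD", "out"),  # pull-up driver, conducts when a = 0
    ("NMOS", "b", "out", "GND"),  # pull-down driver, conducts when b = 1
]
INPUTS = ["a", "b"]

def conducting(dev_type, gate):
    # Simplified static view: PMOS conducts with gate low, NMOS with gate high.
    return gate == 0 if dev_type == "PMOS" else gate == 1

def find_contention(netlist, inputs):
    """Enumerate every input state and report nodes that have a conducting
    connection to both VDD and GND in the same state (static contention)."""
    violations = []
    for values in product([0, 1], repeat=len(inputs)):
        state = dict(zip(inputs, values))
        pulled_up, pulled_down = set(), set()
        for dev, gate, n1, n2 in netlist:
            if not conducting(dev, state[gate]):
                continue
            if "VDD" in (n1, n2):
                pulled_up.add(n2 if n1 == "VDD" else n1)
            if "GND" in (n1, n2):
                pulled_down.add(n2 if n1 == "GND" else n1)
        for node in pulled_up & pulled_down:
            violations.append((dict(state), node))
    return violations

if __name__ == "__main__":
    for state, node in find_contention(NETLIST, INPUTS):
        print(f"Contention on '{node}' in state {state}")
    # Reports contention on 'out' only in the state a=0, b=1 -- exactly the kind
    # of state a simulation misses unless the stimulus happens to reach it.
```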

Using multiple power domains is a technique to significantly reduce power in complex SoCs, but the interfaces between these domains can have errors, like a missing level shifter, that need to be identified and fixed. Inter-domain leakage between different power domains is challenging to verify manually, so an automated approach is far more robust.

Chips with multiple power domains and low-power states need to operate reliably, and the Insight Analyzer tool complements the verification approaches of static checking and dynamic simulation.

Transistor-level circuit simulation with SPICE is a well-known way to verify reliability and logic operation, but how do you know whether your stimulus has exercised all the states and modes needed to expose a failure or concern? Insight Analyzer catches violations like inter-domain leakage, where a backup power supply powers into a main supply that is turned off.

STMicroelectronics

European engineers from STMicroelectronics presented at the 2024 User2User conference on how the Insight Analyzer tool was used in their flow to detect high-impedance (HiZ) nets and perform ERC. They wanted to identify every instance where HiZ occurred, in both digital and analog circuits, to see whether it was persistent or transient and when it caused issues.

Using Insight Analyzer enabled the ST team to thoroughly check their high-voltage, high-power, high-density designs for reliability issues. Users found that the GUI-based tool had a quick learning curve, allowing them to be operational after just a few hours of use.

Summary

Reliability and leakage issues are critical to verify and fix for multiple power domain designs. Adding reliability verification tools like Insight Analyzer provides immediate benefits to engineering teams. Finding and fixing issues earlier helps to accelerate the schedule for a project.

Read the complete White Paper on Insight Analyzer online.

Related Blogs



How Cadence is Building the Physical Infrastructure of the AI Era
by Kalar Rajendiran on 04-21-2025 at 6:00 am

Phases of AI Adoption

At the 2025 NVIDIA GTC Conference, CEO Jensen Huang delivered a sweeping keynote that painted the future of computing in bold strokes: a world powered by AI factories, built on accelerated computing, and driven by agentic, embodied AI capable of interacting with the physical world. He introduced the concept of Physical AI—intelligence grounded in real-world physics—and emphasized the shift from traditional programming to AI-developed systems trained on vast data, simulation, and models. From drug discovery to robotics to large-scale digital twins, Huang asserted that the future belongs to those who can design, simulate, and optimize both the virtual and the physical with extraordinary efficiency.

Cadence Enabling the Vision: Accelerating the Infrastructure of AI

It’s within Huang’s visionary context that Rob Knoth, Group Director of Strategy and New Ventures at Cadence, offered a look at how the company is actively enabling the future that Huang painted during GTC 2025. Knoth’s talk provided insight into how Cadence and NVIDIA are building the tools and infrastructure needed to make Huang’s future real. A collaborative approach lies at the heart of Cadence’s work across domains, whether it’s semiconductor design, robotics, data centers, or molecular biology. With the power of NVIDIA’s Grace Blackwell architecture, Cadence is pushing the boundaries of what’s possible in both speed and scale.

Computational Fluid Dynamics (CFD) Meets Accelerated Compute: The Boeing 777 Case Study

One of the most compelling examples was the simulation of a Boeing 777 during takeoff and landing, performed using Cadence’s Fidelity CFD platform on the Grace Blackwell system. This simulation tackled one of the most complex aerodynamic challenges in aviation and matched experimental results—all executed in a fraction of the time and energy traditional methods would require. It was a proof point of sustainability and efficiency through accelerated compute, echoing Huang’s keynote themes.

Designing AI Factories: The Role of Digital Twins

Knoth also emphasized Cadence’s leadership in building AI factory infrastructure, highlighting its collaboration with NVIDIA, Schneider Electric, and Vertiv to design digital twins of data centers. These high-fidelity models are powered by Cadence’s Reality Digital Twin Platform and are integral to NVIDIA’s reference blueprint for next-generation AI infrastructure. This work reflects Huang’s emphasis on AI factories as the backbone of the AI-driven economy, where every element is simulated, optimized, and iterated in digital space before deployment in the real world.

Embodied Intelligence: From Robots to Edge Systems

To realize the vision of agentic AI systems, Cadence is bringing its multi-physics solvers, which handle everything from thermal dynamics to structural stress, onto accelerated GPU architectures. These tools are essential for designing physical AI systems such as robots, edge devices, and complex electromechanical machines that need to interact with the real world. Knoth positioned this as a foundational step in enabling the next generation of physical AI, where simulation meets embodiment.

Transforming Drug Discovery: AI Meets Molecular Science

Building on Huang’s vision of AI in life sciences, Knoth shared Cadence’s progress in molecular design and drug discovery. Cadence’s Orion molecular design platform, integrated with NVIDIA’s BioNeMo NIM microservices, brings AI into the molecular modeling process, offering tools for 3D protein structure prediction, small molecule generation, and AI-driven molecular property prediction. These advancements significantly reduce the cost, time, and risk in drug discovery—one of the most complex and resource-intensive areas of science.

The JedAI Platform: Scalable AI for Science and Engineering

Cadence’s Joint Enterprise Data & AI (JedAI) platform provides the glue between these verticals. It’s built to integrate with NVIDIA’s LLM and generative AI technologies, while offering deployment flexibility for on-prem, cloud, or hybrid environments. JedAI powers Cadence’s AI design tools across domains like digital and analog chip design, verification and debug, PCB layout, multi-physics optimization, and data center design and operation. It’s a scalable, customizable foundation to bring AI into every part of the engineering workflow.

Building an Open Ecosystem: The Alliance for OpenUSD

To support this massive transformation, Cadence and NVIDIA are helping lead efforts in the Alliance for OpenUSD (Universal Scene Description), aiming to create standardized, interoperable component models for everything from chips to entire data center environments. This initiative ties directly to Huang's vision of an open, composable ecosystem where companies across the stack can contribute to and benefit from shared digital infrastructure standards.

Summary

From digital twins and chip design to molecular simulation and AI co-pilots, Cadence is helping architect the platforms that will power, embody, and apply AI across industries. The company is helping build the physical and digital infrastructure of the AI era.

Also Read:

Big Picture PSS and Perspec Deployment

Metamorphic Test in AMS. Innovation in Verification

Compute and Communications Perspectives on Automotive Trends



Podcast EP284: Current Capabilities and Future Focus at Intel Foundry Services with Kevin O’Buckley
by Daniel Nenni on 04-18-2025 at 10:00 am

Dan is joined by Kevin O’Buckley, senior vice president and general manager of Foundry Services at Intel Corporation. In this role, he is responsible for driving continued growth for Intel Foundry and its differentiated systems foundry offerings, which go beyond traditional wafer fabrication to include packaging, chiplet standards and software, as well as U.S.- and Europe-based capacity.

Dan explores the changes at Intel Foundry Services with Kevin as the company moves into a broad-based foundry model. Kevin describes the new culture at Intel that focuses on ecosystem partnerships to bring differentiated, cutting edge fabrication and packaging capabilities to a broad customer base. Kevin spends some time on the attributes of the Intel 18A process node and the new capabilities it brings to market in areas such as gate-all-around and backside metal technologies.

Kevin also discusses what lies ahead at Intel in the areas of advanced technology and advanced packaging. The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.



Andes RISC-V CON in Silicon Valley Overview
by Daniel Nenni on 04-18-2025 at 6:00 am


RISC-V conferences have been at full capacity and I expect this one will be well attended as well. Andes is the biggest name in RISC-V. The most notable thing about RISC-V conferences is the content. Not only is the content deep, it is international, coming from the top companies in the industry. It is hard to find a design win these days without RISC-V content.

We did a podcast with Andes covering the conference: Podcast EP282: An Overview of Andes Focus on RISC-V and the Upcoming RISC-V CON

Registration: visit the event website

An additional keynote has been added:

Nvidia's Frans Sijstermans (VP of Multimedia Architecture/ASIC) will present "Use of RISC-V in NVIDIA's Deep-Learning Accelerator."

This one should be a big draw in addition to the keynote from Andes co-founder, Chairman and CEO Frankwell Lin, "Celebrating 20 years of driving SIP innovation and 15 years of pioneering RISC-V."

Here are the other keynotes:

– Charlie Su, President and CTO, Andes Technology – Provides insights on advancing modern computing with Andes RISC-V processor solutions.

– Paul Master, Co-founder and CTO, Cornami – Presenting Fully Homomorphic Encryption, the Holy Grail.

Fireside chat: What's coming in AI? – Facilitated by Charlie Cheng, Andes Board Advisor, featuring:

– Jeff Bier, Founder, Edge AI and Embedded Vision Alliance

– Pete Warden, Founder & CEO, Useful Sensors

There are two full tracks this year, the main conference track and a developer track. The developer track includes hands-on technical training:

Developer Track – Hands-on Technical Training (Limited Seats!)

Four in-depth, 1-hour sessions designed for engineers who want to dive deep into RISC-V technology. Bring your laptop fully charged!

  • Optimization with RISC-V Vector ISA – Curious about RISC-V Vectors (RVV) but unsure how it works? This session covers the basics and how it boosts vector performance. You will get hands-on experience writing and running vector software using AndeSight tools on RVV-capable processors.
  • IAR Professional Tools for RISC-V – Learn how professional tools can help you debug your application more quickly and efficiently, accelerating your time to market. Also, discover how prequalified Functional Safety tools can enhance your products.
  • Create Your Own RISC-V Custom Instructions – Want to create your own RISC-V Instructions to accelerate your application? This session explores Andes’ Automated Custom Extensions (ACE) for enhancing the RISC-V ISA, with a hands-on demo using Andes Copilot, which automates much of the process.
  • Unleashing the Power of Heterogeneous Computing: Building a complex SoC with high compute and memory demands? Avoid performance surprises by understanding how CPU and GPU subsystems interact under real workloads and memory hierarchies. This session demonstrates how trace-based simulation can uncover bottlenecks, validate compute and cache architecture decisions, and improve heterogeneous system performance.

I know quite a few of the presenters on the main conference track and I can tell you it is an all star cast. The main conference welcomes all attendees, featuring over ten sessions and speeches that cover the RISC-V market and the development of SoCs using RISC-V across AI, automotive, application processing, communications, and more. Participants will also have opportunities to network with speakers and exhibitors from over twenty sponsoring companies offering IP, software, tools, services, and products.

This really is an excellent networking event at one of my favorite Silicon Valley locations, and it includes breakfast, lunch, and a networking reception.

Event Details:

– Location: DoubleTree by Hilton, San Jose, CA
– Time: 8:30 – 6:00 PDT
– Admission: RISC-V CON is free to attend
– Registration: visit the event website

About Andes Technology

As a Founding Premier member of RISC-V International and a leader in commercial CPU IP, Andes Technology is driving the global adoption of RISC-V. Andes’ extensive RISC-V Processor IP portfolio spans from ultra-efficient 32-bit CPUs to high-performance 64-bit Out-of-Order multiprocessor coherent clusters.

With advanced vector processing, DSP capabilities, the powerful Andes Automated Custom Extension (ACE) framework, end-to-end AI hardware/software stack, ISO 26262 certification with full compliance, and a robust software ecosystem, Andes unlocks the full potential of RISC-V, empowering customers to accelerate innovation across AI, automotive, communications, consumer electronics, data centers, and mobile devices. Over 16 billion Andes-powered SoCs are driving innovations globally. Discover more at www.andestech.com and connect with Andes on LinkedIn, X, and YouTube.

Also Read:

Webinar: Unlocking Next-Generation Performance for CNNs on RISC-V CPUs

Relationships with IP Vendors

Changing RISC-V Verification Requirements, Standardization, Infrastructure



Achieving Seamless 1.6 Tbps Interoperability for High BW HPC AI/ML SoCs: A Technical Webinar with Samtec and Synopsys
by Daniel Nenni on 04-17-2025 at 10:00 am


HPC Bandwidth Explosion and 1.6T Ecosystem Interop Need

The exponential growth in data bandwidth requirements driven by HPC systems, AI, and ML applications has set the stage for an ever-increasing need for 1.6Tbps Ethernet. As data centers strive to manage vast data transfers with maximum efficiency, the urgency for interoperability testing intensifies. Developers are compelled to act swiftly as standards development races to catch up with the industry's need for speed. Conducting interoperability testing with real-world channels at plug fests and standards-body interop events, along with early 224G TX and RX compliance characterization across variable channels, has become a key indicator of readiness for real-life deployment. This proactive approach reduces risk and helps ensure that components from multiple vendors integrate seamlessly, maintaining robust and efficient operations.

Watch the Replay Now

Samtec and Synopsys Collaborate in Interoperability Testing to Enable 1.6Tbps Systems

Interoperability is the cornerstone of constructing robust systems for high-speed applications. It enables diverse components—such as switches, routers, and interconnects—to communicate and function cohesively without issues. In bandwidth-intensive environments, any downtime or performance degradation can lead to significant operational and financial setbacks. Standardized interoperability facilitates smooth upgrades and integrations, minimizing disruptions and ensuring continuous operation. This level of robustness is achieved through meticulous adherence to interoperability guidelines and rigorous testing.

It is well known that 224Gbps and upcoming 400G will be key for achieving 1.6T and eventually 3.2Tbps data transfers. Samtec and Synopsys continue to push the limits of 224Gbps Ethernet performance, as seen in recent demonstrations where Synopsys 224G PHY IP and Samtec's Si-Fly® HD co-packaged and near-chip systems showcased scale-out and scale-up capabilities.

 

224G Ethernet is the Key for 1.6Tbps Interoperability

The pursuit of 1.6T interoperability is driven by advancements in high-performance 224G SerDes architecture. These SerDes technologies are designed to handle the high data rates necessary for 1.6T Ethernet, enabling long-distance data transmission with minimal power and latency. The robustness and reliability of these architectures are critical for maintaining data integrity and performance in HPC, AI, and ML applications. By leveraging cutting-edge 224G SerDes technology, engineers can ensure that their systems meet the stringent demands of modern data centers.

It’s a really big ecosystem

The 1.6T Ethernet ecosystem is characterized by its diversity, incorporating various connectors, cables, and other components. Each approach offers specific advantages and use cases—LR Direct-attached cable or cabled backplane for longer distances, and NPO connectors for shorter, high-density interconnects. This variety ensures that engineers have access to suitable solutions for a broad range of applications and infrastructure configurations, enabling them to optimize data transmission based on specific operational requirements.

Watch the Replay Now

Conclusion with Synopsys and Samtec

Synopsys and Samtec are at the forefront of driving 1.6T interoperability. Samtec's next-generation connector solutions are designed to meet the challenges of 224 Gbps PAM4 architectures. Synopsys delivers a best-in-class, widely interoperable 1.6T Ethernet IP solution with silicon-proven, low-power 224G Ethernet. These solutions have undergone extensive third-party interoperability testing, helping ensure that designers can start their 1.6Tbps designs with confidence today.

To delve deeper into these topics and learn how to achieve seamless 1.6 Tbps interoperability for your HPC, AI, and ML applications, we invite you to join us for an exclusive technical webinar with Synopsys and Samtec.

Register now for the Synopsys and Samtec webinar to ensure your systems are low risk and leverage pre-tested, pre-verified, interoperable 1.6T Ethernet IP solutions and advanced interconnect technology.

Also Read:

Samtec Advances Multi-Channel SerDes Technology with Broadcom at DesignCon

2025 Outlook with Matt Burns of Samtec

Samtec Paves the Way to Scalable Architectures at the AI Hardware & Edge AI Summit



Predictive Load Handling: Solving a Quiet Bottleneck in Modern DSPs
by Jonah McLeod on 04-17-2025 at 6:00 am


When people talk about bottlenecks in digital signal processors (DSPs), they usually focus on compute throughput: how many MACs per second, how wide the vector unit is, how fast the clock runs. But ask any embedded AI engineer working on always-on voice, radar, or low-power vision—and they’ll tell you the truth: memory stalls are the silent killer. In today’s edge AI and signal processing workloads, DSPs are expected to handle inference, filtering, and data transformation under increasingly tight power and timing budgets. The compute cores have evolved, but edge computing’s goal is to move the compute engine closer to the memory.

The toolchains have evolved. But memory? Still often too slow. And here’s the twist: it’s not because the memory is bad. It’s because the data doesn’t arrive on time.

Why DSPs Struggle with Latency

Unlike general-purpose CPUs, most DSPs used in embedded AI rely on non-cacheable memory regions—local buffers, scratchpads, or deterministic tightly coupled memory (TCM). That design choice makes sense: real-time systems can't afford cache misses or non-deterministic latencies. But it also means every memory access must complete with an exact, known load latency, or else the pipeline stalls. You can be in the middle of processing a spectrogram, a convolution window, or a beamforming sequence—and suddenly everything halts while the processor waits on data to arrive. Multiply-accumulate units sit idle. Latency compounds. Power is wasted.

Enter Predictive Load Handling

Now imagine if the DSP could recognize the pattern. If it could see that your loop is accessing memory in fixed strides—say, reading every 4th address—and preload that data ahead of time (commonly referred to as "deep prefetch"), then when the actual load instruction is issued, the data is already there. No stall. No pipeline bubble. Just smooth execution.

That’s the traditional model of prefetching or stride-based streaming—and while it’s useful and widely used, it’s not what we’re describing here.

A new Predictive Load Handling innovation takes a fundamentally different approach. This is not just a smarter prefetch: instead of predicting what address will be accessed next, Predictive Load Handling focuses on how long a memory access is likely to take.

By tracking the latency of past loads—whether from SRAM, bypassed caches, or DRAM—it learns how long memory requests from each region typically take. Instead of issuing loads early, the CPU proceeds normally. The latency prediction is applied on the vector side to schedule execution at the predicted time, allowing the processor to adapt to memory timing without changing the instruction flow. This isn't speculative or risky. It's conservative, reliable, and fits perfectly into deterministic DSP pipelines. It's especially effective when the processor is working with large AI models or temporary buffers stored in DRAM—where latency is relatively consistent but still long. That distinction is critical. We're not just doing a smarter prefetch—we're enabling the processor to be latency-aware and timing-adaptive, with or without a traditional cache or stride pattern.
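
As a rough mental model of why latency prediction helps, the following Python sketch compares stall cycles for a scheduler that assumes a fixed optimistic latency against one that times each consuming operation using a learned per-region estimate. The region names, latency numbers, and EWMA predictor are illustrative assumptions only, not a description of any specific DSP implementation.

```python
import random

# Hypothetical per-region typical load latencies in cycles (assumed for illustration).
REGION_LATENCY = {"TCM": 4, "SRAM": 12, "DRAM": 180}

def observed_latency(region):
    """Actual latency of one load: the typical value plus a little jitter."""
    return REGION_LATENCY[region] + random.randint(-2, 2)

class LatencyPredictor:
    """Exponentially weighted moving average of observed load latency per region."""
    def __init__(self, alpha=0.25, initial=8):
        self.alpha = alpha
        self.initial = initial
        self.estimate = {}

    def predict(self, region):
        return self.estimate.get(region, self.initial)

    def update(self, region, latency):
        prev = self.predict(region)
        self.estimate[region] = (1 - self.alpha) * prev + self.alpha * latency

def run(loads, issue_delay):
    """Total stall cycles for a stream of loads. The consumer of each load is
    scheduled issue_delay(region, predictor) cycles after the load issues; if
    the data is not back yet, the pipeline stalls for the difference."""
    random.seed(7)                      # same latency stream for both policies
    predictor = LatencyPredictor()
    stalls = 0
    for region in loads:
        actual = observed_latency(region)
        scheduled = issue_delay(region, predictor)
        stalls += max(0, actual - scheduled)
        predictor.update(region, actual)
    return stalls

loads = ["DRAM"] * 200 + ["SRAM"] * 200 + ["DRAM", "SRAM"] * 100

baseline = run(loads, lambda region, p: 8)                     # fixed optimistic delay
predictive = run(loads, lambda region, p: p.predict(region))   # learned per-region delay

print("baseline stall cycles:  ", baseline)
print("predictive stall cycles:", predictive)
```

In this toy model the learned estimate converges on the long DRAM latency after a handful of loads and the stall count collapses, which is the qualitative effect the table below quantifies for a real pipeline.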

When integrated into a generic DSP pipeline, Predictive Load Handling delivers immediate, measurable performance and power gains. The table shows how it looks in typical AI/DSP scenarios. These numbers reflect expectations in workloads like:

  • Convolution over image tiles
  • Sliding FFT windows
  • AI model inference over quantized inputs
  • Filtering or decoding over streaming sensor data
Metric                   Baseline DSP   With Predictive Load   Result
Memory Access Latency    200 ns         120 ns                 40% faster
Data Stall Cycles        800 cycles     500 cycles             38% reduction
Power per Memory Load    0.35 mW        0.25 mW                29% reduction

Minimal Overhead, Maximum Impact

One of the advantages of Predictive Load Handling is how non-intrusive it is. There’s no need for deep reordering logic, cache controllers, or heavyweight speculation. It can be dropped into the dispatch or load decode stages of many DSPs, either as dedicated logic or compiler-assisted prefetch tags.  And because it operates deterministically, it’s compatible with functional safety requirements—including ISO 26262—making it ideal for automotive radar, medical diagnostics, and industrial control systems.

Rethinking the AI Data Pipeline

What Predictive Load Handling teaches us is that acceleration isn’t just about the math—it’s about data readiness. As processor speeds continue to outpace memory latency—a gap known as the memory wall—the most efficient architectures won’t just rely on faster cores. They’ll depend on smarter data pathways to deliver information precisely when needed, breaking the bottlenecks that leave powerful CPUs idle. As DSPs increasingly carry the weight of edge AI, we believe Predictive Load Handling will become a defining feature of next-generation signal processing cores.

Because sometimes, it’s not the clock speed—it’s the wait.

Also Read:

Even HBM Isn’t Fast Enough All the Time

RISC-V’s Privileged Spec and Architectural Advances Achieve Security Parity with Proprietary ISAs

Harnessing Modular Vector Processing for Scalable, Power-Efficient AI Acceleration

An Open-Source Approach to Developing a RISC-V Chip with XiangShan and Mulan PSL v2



Executive Interview with Leo Linehan, President, Electronic Materials, Materion Corporation
by Daniel Nenni on 04-16-2025 at 10:00 am


Leo Linehan leads Materion’s Electronic Materials business segment, an important supplier to the global semiconductor market and an industry leader in the production of advanced chemicals, microelectronic packaging, precious and non-precious metals, and deposition, reclamation and refining services.

Prior to joining Materion in 2019, he was Vice President and General Manager of Semiconductor Solutions with Element Solutions Inc. and served as Vice President and General Manager of Electronic Chemicals at OM Group. He joined Dow Chemical Company in 2009 as part of its acquisition of Rohm and Haas, where he retained his senior leadership role in Dow’s Electronics Materials business. Earlier in his career, he held several positions of increasing responsibility at IBM.

Tell us about your company.

Materion is a global leader in advanced materials serving a range of industries, including semiconductor, industrial, aerospace & defense, energy and automotive. We’re headquartered just outside of Cleveland, Ohio, and have 3,000 employees around the world. The Electronic Materials (EM) business I lead provides specialty materials for thin film deposition, microelectronic packaging products, and inorganic chemicals. Our business started as a precious metals provider and, through a series of acquisitions, has become a world leader in deposition materials specifically and electronic materials generally.

What problems are you solving?

Our customers are continually seeking a robust, reliable, high-quality supply chain, especially when it comes to new thin film materials for advanced semiconductor production. That’s particularly true given the current geopolitical situation. We hear all the time from customers worried about tariff impacts, sourcing concerns, and volatility in raw materials markets. We’re well positioned to partner with them in mitigating those issues thanks to our international footprint, our broad material portfolio, and the significant investments we’ve made in manufacturing. We have a significant presence in both the U.S. and European markets, with primary manufacturing sites in both regions, which allows us to give our customers valuable insights into navigating these geopolitical realities.

What application areas are your strongest?

When it comes to thin film deposition materials, we serve all segments of the larger semiconductor industry. That includes advanced logic and memory, power semiconductors, and RF semiconductors.

What keeps your customers up at night?

I go back to the idea of having a reliable supply chain. It’s having the right technology that our customers’ customers require when they require it. Our business is fairly evenly split between leading-edge node and mature node semiconductors. There’s still a significant market for semiconductors in mature process nodes, and those processes are still being continuously improved. At the same time, when we acquired HC Starck’s electronic materials business in 2021, that instantly made us a significant player in leading-edge semiconductors as well. There are universal demands that cut across all segments of the semiconductor market related to continuous improvement and supply security.  There are also important differences between the advanced process nodes used for logic and memory and the more mature process nodes that are used for power and RF semiconductors that require segment-specific expertise. Ultimately, we work with all of our customers to provide the latest materials that we continuously evolve in order to meet their needs.

What does the competitive landscape look like and how do you differentiate?

The competitive landscape is actually quite complicated. On the leading-edge side of semiconductors, it’s highly concentrated among a relatively few suppliers including Materion. On the mature node side, it’s a diverse landscape with several physical vapor deposition (PVD) sputtering target manufacturers. Many of those are either regional or specialized into very narrow segments. We’re one of the few suppliers that covers the whole landscape, both in terms of semiconductor type and deposition technologies, including materials used for atomic layer deposition. We have a very broad portfolio that encompasses both precious and non-precious metals. That makes us an appealing supply chain partner to a wide range of semiconductor manufacturers.

What new features/technology are you working on?

I already mentioned atomic layer deposition, which is a fast-growing market for us. There is also a constant need for new PVD alloys. For many years, we’ve been known as the company that manufactures PVD targets no one else wants to, and we’re proud of that!

How do customers normally engage with your company?

We mostly sell directly to customers, and we try to engage them in an R&D context when we can. We prefer to have our R&D team engage with their R&D teams, with the sales force facilitating the conversation. We find that’s the best way to solve the critical process issues our customers face all the time.

Also Read:

CEO Interview with Ronald Glibbery of Peraso

CEO Interview with Pierre Laboisse of Aledia

CEO Interview with Cyril Sagonero of Keysom



A Perspective on AI Opportunities in Software Engineering
by Bernard Murphy on 04-16-2025 at 6:00 am


Whatever software engineering teams are considering around leveraging AI in their development cycles should be of interest to us in hardware engineering. Not in every respect perhaps, but there should be significant commonalities. I found a recent paper on the Future of AI-Driven Software Engineering from the University of Auckland, NZ, with some intriguing ideas I thought worth sharing. The intent of these authors is to summarize high-level ideas rather than algorithms, though there are abundant references to papers which on a sample review do get into more detail. As this is a fairly long paper, here I just cherry-pick a few concepts that stood out for me.

Upsides

In using LLMs for code generation the authors see increased emphasis on RAG (retrieval augmented generation) for finding code snippets versus direct code synthesis from scratch. They also share an important finding from a StackOverflow blog post reporting that hits on the StackOverflow site are declining. This is significant since StackOverflow has been a very popular source for exchanging ideas and code snippets. StackOverflow attributes the decline to LLMs like GPT4 both summarizing a response to a user prompt and directly providing code snippets. Such RAG-based systems commonly offer links to retrieved sources, but clearly these are not compelling enough to keep up website hits. I find the same experience with Google search, where now search results often start with an AI-generated overview. I often (not always) find this useful, and I often don't follow the links.

Meanwhile Microsoft reports that the GitHub CoPilot (a Microsoft product) paid customer base is growing 30% quarter on quarter, now at 1.3M developers in 50K organizations. Clearly for software development the ease of generating code through CoPilot has enough appeal to extract money from subscribers.

Backing up a step, before you can write code you need a clear requirements specification. Building such a specification can be a source of many problems, mapping from a client’s mental image of needs to an implementer’s image in natural language, with ambiguities, holes and the common reality of an evolving definition. AI-agents could play a big role here by interactively eliciting requirements, proposing examples and scenarios to help resolve ambiguities and plug holes. Agents can also provide some level of requirements validation, by identifying vague or conflicting requirements.
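
To ground this in something executable, here is a trivially simple, rule-based sketch of one such check: flagging vague wording and missing acceptance criteria in requirements. A real agent would do this interactively with an LLM; the word lists, rules, and sample requirements below are invented placeholders rather than anything proposed in the paper.

```python
import re

# Invented heuristics: terms that usually signal an unquantified requirement.
VAGUE_TERMS = {"fast", "quickly", "user-friendly", "efficient", "appropriate",
               "robust", "flexible", "simple", "etc"}
WEAK_MODALS = {"should", "may", "might", "could"}

def check_requirement(req_id, text):
    """Return a list of clarification prompts for one requirement."""
    findings = []
    words = set(re.findall(r"[a-z-]+", text.lower()))
    for term in sorted(VAGUE_TERMS & words):
        findings.append(f"{req_id}: vague term '{term}' - ask the client to quantify it")
    for modal in sorted(WEAK_MODALS & words):
        findings.append(f"{req_id}: weak modal '{modal}' - is this mandatory or optional?")
    if not re.search(r"\d", text):
        findings.append(f"{req_id}: no measurable figure - propose an acceptance criterion")
    return findings

requirements = {
    "REQ-1": "The system shall respond quickly to user queries.",
    "REQ-2": "The service shall sustain 500 requests per second at p99 latency under 200 ms.",
    "REQ-3": "Error messages should be user-friendly.",
}

for rid, text in requirements.items():
    for finding in check_requirement(rid, text):
        print(finding)
# REQ-2 passes; REQ-1 and REQ-3 are flagged for follow-up questions to the client.
```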

Maintaining detailed product documentation as development progresses can be a huge burden on developers, and that documentation can easily drift out of sync with the implemented reality, especially through incremental changes and bug fixes. The authors suggest this tedious task could be better handled through agent-based generation and updates, able to stay in sync with every large or small change. Along similar lines, not everyone in the product hierarchy will want detailed implementation documentation. Product managers, AEs, application developers, and clients all need abstracted views best suited to their individual interests. Here also there is an opportunity for LLMs to generate such abstractions.

Downsides

The obvious concern with AI generated code or tests is the hallucination problem. While accuracy will no doubt improve with further training, it is unrealistic to expect high certainty responses to every possible prompt. Hallucinations are more a feature than a bug, no matter how extensive the training.

Another problem is over-reliance on AI. As developers depend more on AI-assisted answers to their needs, there is a real concern that their problem-solving and critical thinking skills will decline over time. Without expert human cross checks, how do we ensure that AI-induced errors do not leak through to production? A common response is that the rise of calculators didn't lead to innumeracy; they simply made us more effective. By implication AI will reach that same level of trust in time. Unfortunately, this is a false equivalence. Modern calculators produce correct answers every time; there is no indication that AI can rise to this level of certainty. If engineers lose the ability to spot errors in AI claims for such cases, quality will decline noticeably, even disastrously. (I should stress that I am very much a proponent of AI for many applications. I am drawing a line here for unsupervised AI used for applications requiring engineering precision.)

A third problem will arise as more of the code used in training and RAG is itself generated by AI. The “genotype” of this codebase will fail to weed out weak/incorrect suggestions unless some kind of Darwinian stimulus is added to the mix. Reinforcement based learning could be a part of the answer to improve training, but this won’t fix stagnation in RAG evolution. Worse yet, experts won’t be motivated to add new ideas (and where would they add them?) if recognition for their contribution will be hidden behind an LLM response. I didn’t see an answer to this challenge in the paper.

Mitigating Downsides

The paper underlines the need for unit testing unconnected to AI. This is basic testing hygiene – don’t have the same person (or AI) both develop and test. I was surprised that there was no mention of connecting requirements capture to testing since those requirements should provide independent oracles for correct behavior. Perhaps that is because AI involvement in requirements capture is still mostly aspirational.

One encouraging idea is to lean more heavily on metamorphic testing, something I have discussed elsewhere. Metamorphic testing checks relationships in behavior which should be invariant through low-level changes in implementation or in use-case tests. If you detect differences in such a relation during testing, you know you have an error in the design. However, finding metamorphic relations is not easy. The authors suggest that AI could uncover new relations, as long as each such suggestion is carefully reviewed by an expert. Here the expert must ask if an apparent invariant is just an accident of the testing or something that really is an invariant, at least in the scope of usage intended for the product.
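
As a concrete, if simple, illustration of what a metamorphic relation looks like in practice, here is a short Python sketch (my own toy example, not one from the paper). Each relation asserts a property that should survive an input transformation without the test knowing the exact expected output, and, as the authors note, a human expert still has to confirm that each proposed relation really is an invariant for the intended use.

```python
# Toy metamorphic-testing sketch. The unit under test is a top_k ranking
# function; the relations check properties that should hold for any input,
# with no oracle for the exact output required.

import random

def top_k(scores, k):
    """Unit under test: indices of the k highest scores."""
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]

def permutation_relation(scores, k):
    """Shuffling the items must not change WHICH score values are selected
    (assumes no ties, which is effectively true for random floats)."""
    perm = random.sample(range(len(scores)), len(scores))
    shuffled = [scores[p] for p in perm]
    picked_original = sorted(scores[i] for i in top_k(scores, k))
    picked_shuffled = sorted(shuffled[i] for i in top_k(shuffled, k))
    return picked_original == picked_shuffled

def shift_relation(scores, k):
    """Adding a constant to every score must not change the selected indices.
    (An expert must confirm this really is an invariant; it would not be if
    top_k applied an absolute-score threshold.)"""
    shifted = [s + 100.0 for s in scores]
    return top_k(scores, k) == top_k(shifted, k)

def run_metamorphic_tests(trials=1000):
    random.seed(0)
    for _ in range(trials):
        n = random.randint(2, 20)
        scores = [random.uniform(-50.0, 50.0) for _ in range(n)]
        k = random.randint(1, n)
        assert permutation_relation(scores, k)
        assert shift_relation(scores, k)
    print(f"{trials} metamorphic trials passed")

run_metamorphic_tests()
```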

Thought-provoking ideas, all with relevance to hardware design.

Also Read:

The Journey of Interface Protocols: Adoption and Validation of Interface Protocols – Part 2 of 2

EDA AI agents will come in three waves and usher us into the next era of electronic design

Beyond the Memory Wall: Unleashing Bandwidth and Crushing Latency