
Legacy IP Providers Struggle to Solve the NPU Dilemma

Legacy IP Providers Struggle to Solve the NPU Dilemma
by Admin on 06-11-2025 at 10:00 am


At Quadric we do a lot of first-time introductory visits with prospective new customers.  For a rapidly expanding processor IP licensing company that is starting to get noticed (even winning IP Product of the Year!), such meetings are part of the territory.  That means we hear a lot of similar-sounding questions from appropriately skeptical listeners hearing our story for the very first time.  The question most asked in those meetings sounds something like this:

“Chimera GPNPU sounds like the kind of breakthrough I’ve been looking for. But tell me, why is Quadric the only company building a completely new processor architecture for AI inference? It seems such an obvious benefit to tightly integrate the matrix compute with general purpose compute, instead of welding together two different engines across a bus and then partitioning the algorithms. Why don’t some of the bigger, more established IP vendors do something similar?”

The answer I always give: “They can’t, because they are trapped by their own legacies of success!”

A Dozen Different Solutions That All Look Remarkably Alike

The long list of competitors in the “AI Accelerator” or “NPU IP” licensing market arrived at their NPU solutions from a wide variety of starting points. Five or six years ago, CPU IP providers jumped into the NPU accelerator game to keep their CPUs relevant, with a message of “use our trusted CPU and offload those pesky, compute-hungry matrix operations to an accelerator engine.”  DSP IP providers did the same.  As did configurable processor IP vendors.  Even GPU IP licensing companies followed suit.  The playbook for those companies was remarkably similar: (1) tweak the legacy offering’s instruction set a wee bit to boost AI performance slightly, and (2) offer a matrix accelerator to handle the one or two dozen most common graph operators found in the ML benchmarks of the day: ResNet, MobileNet, VGG.

The result was a partitioned AI “subsystem” that looked remarkably similar across all the 10 or 12 leading IP company offerings: legacy core plus hardwired accelerator.

The fatal flaw in these architectures: always needing to partition the algorithm to run on two engines.  As long as the number of “cuts” of the algorithm remained very small, these architectures worked well for a few years.  For a ResNet benchmark, for instance, usually only one partition is required at the very end of the inference, so ResNet can run very efficiently on this legacy architecture.  But along came transformers with a very different and wider set of graph operators, and suddenly the “accelerator” accelerated little, if any, of the new models, and overall performance became unusable.  NPU accelerator offerings needed to change. Customers with silicon had to eat the cost – a very expensive cost – of a silicon respin.
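To see why transformers force so many cuts, consider scaled dot-product attention. The sketch below (illustrative plain Python, not any vendor’s code) shows how the matrix multiplies, which map naturally onto a hardwired MAC array, are interleaved with a softmax, which does not, so every such boundary becomes a potential partition point between the two engines.

```python
import math

def matmul(a, b):
    # Dense matrix multiply: exactly what a hardwired MAC array accelerates.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def softmax(row):
    # Non-linear, element-wise math: typically falls back to the CPU/DSP side.
    m = max(row)                       # subtract max for numerical stability
    exps = [math.exp(v - m) for v in row]
    s = sum(exps)
    return [e / s for e in exps]

def attention(q, k, v):
    d = len(q[0])
    k_t = [list(col) for col in zip(*k)]
    scores = matmul(q, k_t)                               # MatMul 1: accelerator
    probs = [softmax([s / math.sqrt(d) for s in row])     # softmax: partition point
             for row in scores]
    return matmul(probs, v)                               # MatMul 2: accelerator

out = attention([[1.0, 0.0]],
                [[1.0, 0.0], [0.0, 1.0]],
                [[1.0, 2.0], [3.0, 4.0]])
```

One attention layer alone already implies two engine hand-offs per head; a full transformer stacks dozens of such layers, which is why a fixed-function matrix accelerator ends up spending its time waiting on the host processor.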

An Easy First Step Becomes a Long-Term Prison

Today these IP licensing companies find themselves trapped.  Trapped by their decisions five years ago to take an “easy” path toward short-term solutions. Why all of the legacy IP companies took this same path has as much to do with human nature and corporate politics as it does with technical requirements.

When what was then generally referred to as “machine learning” workloads first burst onto the scene in vision processing tasks less than a decade ago, the legacy processor vendors were confronted with customers asking for flexible solutions (processors) that could run these new, fast-changing algorithms.  Caught flat-footed with processors (CPU, DSP, GPU) ill-suited to these new tasks, the quickest short-term technical fix was the external matrix accelerator.  The option of building a longer-term technical solution – a purpose-built programmable NPU capable of handling all 2000+ graph operators found in the popular training frameworks – would take far longer to deliver and incur much more investment and technical risk.

The Not So Hidden Political Risk

But let us not ignore the human-nature side of the equation faced by these legacy processor IP companies.  A legacy processor company choosing a strategy of building a completely new architecture – including new toolchains and compilers – would have to declare, both internally and externally, that the legacy (CPU, DSP, GPU) core was simply not as relevant to the modern world of AI as it once was.  The breadwinner of the family that currently paid all the bills would need to fund the salaries of a new team of compiler engineers working on an architecture that effectively competed against the legacy star IP.  (It is a variation on the Innovator’s Dilemma.) And customers would have to adjust to new, mixed messages declaring that “the previously universally brilliant IP core is actually only good for a subset of things – but you’re not getting a royalty discount.”

All of the legacy companies chose the same path: bolt a matrix accelerator onto the cash-cow processor and declare that the legacy core still reigns supreme.  Three years later, staring at the reality of transformers, they declared the first-generation accelerator obsolete and invented a second one that repeated the same shortcomings of the first. Now, with the second-iteration hardwired accelerator also obsolete in the face of continuing operator evolution (self-attention, multi-headed self-attention, masked self-attention, and more new ones daily), they either have to double down again and convince internal and external stakeholders that this third fixed-function accelerator will solve all problems forever, or admit that they need to break out of the confining walls they’ve built for themselves and instead build a truly programmable, purpose-built AI processor.

But the SoC Architect Doesn’t Have to Wait for the Legacy IP Company

The legacy companies might struggle to decide to try something new.  But the SoC architect building a new SoC doesn’t have to wait for the legacy supplier to pivot.  A truly programmable, high-performance AI solution already exists today.

The Chimera GPNPU from Quadric runs all AI/ML graph structures.  The revolutionary Chimera GPNPU processor integrates fully programmable 32-bit ALUs with systolic-array-style matrix engines in a fine-grained architecture: up to 1024 ALUs in a single core, with only one instruction fetch and one AXI data port.  That’s over 32,000 bits of parallel, fully programmable compute – the flexibility of a processor with the efficiency of a matrix accelerator.  Scalable up to 864 TOPS for bleeding-edge applications, Chimera GPNPUs have matched and balanced compute throughput for both MAC and ALU operations, so no matter what type of network you choose to run, it runs fast, at low power, and in parallel.  When a new AI breakthrough comes along in five years, the Chimera processor of today will run it – no hardware changes, just application software.

Contact Quadric

Also Read:

Recent AI Advances Underline Need to Futureproof Automotive AI

2025 Outlook with Veerbhan Kheterpal of Quadric

Tier1 Eye on Expanding Role in Automotive AI


A Novel Approach to Future Proofing AI Hardware

A Novel Approach to Future Proofing AI Hardware
by Bernard Murphy on 06-11-2025 at 6:00 am


There is a built-in challenge for edge AI intended for long time-in-service markets. Automotive applications are the obvious example, while aerospace and perhaps medical usage may impose similar demands. Support for the advanced AI methods we now expect – transformers, physical and agentic AI – is not feasible without dedicated hardware acceleration. According to one report, over 50% of worldwide dollars spent on edge AI by 2033 will go to edge hardware rather than software, cloud or services. The challenge is that AI technologies are still evolving rapidly, yet hardware capability once built is baked in; it cannot be upgraded on the fly like software. However, AI must be upgradable through the 15-year life of a typical car to continue to align with regulatory changes and not drift too far from user expectations for safety/security, economy, and resale value. Reconciling these conflicting needs is placing a major emphasis on future-proofing AI hardware – anticipating and supporting (to the greatest extent possible) the AI advances that we can imagine. Cadence has a novel answer to that need: a co-processor to support an edge NPU and handle offload for (non-NPU) AI tasks, both those that are known and those that are not yet known.

How can you future-proof hardware?

CPUs can compute anything that is computable, making them perfect for general-purpose computing but inefficient for matrix or vector arithmetic as measured by performance and power consumption. NPUs excel at solving the large linear arithmetic problems common in transformers in attention and multi-layer perceptron operations, but not the non-linear operations required in AI for functions like activation and normalization, or custom vector or matrix operations not built into the NPU instruction set architecture (ISA). These operations are generally handled through specialized hardware.
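To make the distinction concrete, here is a minimal sketch (illustrative plain Python, not Cadence code) of two such non-linear operations – an activation and a normalization – showing why they are scalar/vector work rather than matrix-multiply work:

```python
import math

def gelu(x):
    # GELU activation: transcendental math applied one value at a time.
    # No matrix structure for a MAC array to exploit.
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

def layernorm(xs, eps=1e-5):
    # Layer normalization: needs a mean, a variance, and a reciprocal
    # square root -- reductions and element-wise math, not matmuls.
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    return [(x - mean) / math.sqrt(var + eps) for x in xs]

# Activation followed by normalization, as in a transformer block
normed = layernorm([gelu(x) for x in [-1.0, 0.0, 1.0, 2.0]])
```

In a real network these operations sit between every pair of matrix layers, which is why a programmable engine next to the NPU matters more than its raw TOPS figure would suggest.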

Unfortunately, AI model evolution isn’t limited to the core functions represented in, say, the ONNX operator set. New operations continue to appear, sometimes for fusion, sometimes for complex algorithms, perhaps for agentic flows – operations which can’t be mapped easily onto the NPU. A workaround is to offload such operations to a master CPU as needed, but there can be a big performance (and power) overhead where repeated offloads are necessary, potentially very damaging to ultimate product performance.

Much better is a programmable embedded co-processor which can be tightly coupled to the NPU. Programmable so it provides full flexibility to add new operations, and tightly coupled to minimize overhead in offloading between the NPU and the co-processor. That is the objective of the Cadence NeuroEdge AI Co-Processor – to sit right next to one or more NPUs, delivering low latency non-NPU computation as a complement to NPU computation.

The Tensilica NeuroEdge 130 AI Co-Processor

This co-processor approach is already proven in leading chips from companies like NVIDIA, Qualcomm, Google and Intel. What makes the Cadence solution stand out is that in embedded applications you can pair it with your favorite NPU (later with an array of NPUs). You can build as tight an NPU subsystem as you’d like between the NeuroEdge Co-Processor and your NPU of choice, perhaps with shared memory, to deliver the differentiation you need.

Also of note, this IP builds on the mature Cadence Tensilica Vision DSP architecture and software stack, so it is available today. What Cadence has done here is interesting. They trimmed back the Vision DSP architecture to focus just on AI support, reminiscent of how NVIDIA transitioned their general-purpose GPU architecture to AI platforms. This reduction yields a co-processor with 30% smaller area for a similar configuration and 20% lower power consumption for equivalent workloads, while maintaining the same performance. Designers can also use the proven NeuroWeave software stack for rapid development. Connection to your NPU of choice is through the NeuroConnect API across AXI (or a high-bandwidth direct interface). Cadence adds that this core is ISO 26262 FuSa-ready for automotive applications.

One more interesting point: A common configuration would have NeuroEdge acting as co-processor supporting an NPU as the primary interface. Alternatively, NeuroEdge can act as the primary interface controlling access to the NPU. I imagine this second approach might be common in agentic architectures.

My Takeaways

This is a clever idea: a ready-made, fully programmable IP to accelerate non-NPU functions you haven’t yet anticipated, while working in close partnership with an NPU. You could try building the same thing yourself, but then you have to figure out how to tie a vector accelerator (DSP) and a CPU (with floating point support) tightly around your NPU, then run exhaustive testing on your new architecture, and also rework your software stack, etc. Or you could build around a proven and complete complement to your NPU.

Can this anticipate every possible evolution of AI models over the next 15+ years? No one can foresee every twist and turn in edge AI evolution, but a tightly coupled co-processor with fully programmable support for custom vector and scalar operations seems like a pretty good start.

You can learn more about the Cadence Tensilica NeuroEdge 130 AI Co-Processor HERE.

Also Read:

Cadence at the 2025 Design Automation Conference

Anirudh Fireside Chats with Jensen and Lip-Bu at CadenceLIVE 2025

Anirudh Keynote at CadenceLIVE 2025 Reveals Millennium M2000


Mixel at the 2025 Design Automation Conference

Mixel at the 2025 Design Automation Conference
by Daniel Nenni on 06-10-2025 at 10:00 am


Mixel, Inc., a leading provider of mixed-signal interface IP, will exhibit at booth #2616 at the Design Automation Conference (DAC) 2025 on June 23-25. The company will demonstrate its latest customer demos featuring Mixel’s MIPI PHY IP and LVDS IP. Mixel’s customers include many of the world’s largest semiconductor and system companies targeting mobile and mobile-adjacent applications such as automotive, consumer/industrial IoT, VR/AR/MR, wearables, healthcare, and others.

At DAC, Mixel will be showcasing many demos from customers who have integrated Mixel’s IP into their end products or SoCs. One example is NXP, which leverages Mixel’s IP in multiple different applications. At DAC, Mixel will be showcasing an IoT application in the i.MX 7ULP applications processor featuring Mixel’s MIPI D-PHY DSI TX IP. This SoC is integrated into the Garmin Edge 830 cycling GPS. Another MIPI D-PHY customer is Lattice, whose CrossLink-NX low-power FPGA features Mixel’s MIPI D-PHY Universal IP. Lattice FPGAs were announced to enable advanced driver experiences on Mazda Motor Corporation’s CX-60 and CX-90 models.

Mixel’s MIPI IP can be configured as a transmitter (TX), receiver (RX), or universal configuration, the superset supporting both TX and RX in the same hard macro. Mixel also offers unique MIPI implementations called TX+ and RX+. These proprietary MIPI IP configurations (patented in the case of RX+) allow for full-speed production testing without requiring a full MIPI Universal configuration, resulting in a substantial reduction in area and standby power.

Mixel recently announced the availability of its latest MIPI C-PHY/D-PHY combo IP. This supports MIPI C-PHY v2.1 at 8.0Gsps per trio and MIPI D-PHY v2.5 at 6.5Gbps per lane. With 3 trios and 4 lanes, the IP supports over 54 Gbps aggregate bandwidth in C-PHY mode and 26 Gbps in D-PHY mode. At DAC, Mixel will have multiple customer demos featuring previous generations of its MIPI C-PHY/D-PHY combo IP. Hercules Microelectronics HME-H3 low-power FPGA leverages Mixel’s C-PHY/D-PHY combo RX IP and is in mass production. Hercules Microelectronics’s customers include Blackview which is in production with the HME-H3 FPGA in the Blackview Hero 10 foldable smartphone. Synaptics has also integrated Mixel’s MIPI C-PHY/D-PHY combo TX IP in Synaptics’s DisplayPort to dual MIPI VR bridge IC, the VXR7200.
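As a rough sanity check on those aggregate figures (our back-of-the-envelope arithmetic, not Mixel’s, assuming the standard C-PHY encoding of 16 bits per 7 symbols):

```python
# C-PHY carries ~2.28 bits per symbol (16 bits mapped onto 7 symbols),
# so symbol rate per trio must be converted to a bit rate before summing.
CPHY_BITS_PER_SYMBOL = 16 / 7

cphy_gbps = 8.0 * CPHY_BITS_PER_SYMBOL * 3   # 8.0 Gsps/trio x 3 trios
dphy_gbps = 6.5 * 4                          # 6.5 Gbps/lane x 4 lanes

print(f"C-PHY aggregate: {cphy_gbps:.1f} Gbps")   # just under 55 Gbps
print(f"D-PHY aggregate: {dphy_gbps:.1f} Gbps")   # 26 Gbps
```

This matches the quoted “over 54 Gbps” for C-PHY mode and 26 Gbps for D-PHY mode.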

Mixel has been focused on providing auto grade IP for many years. To further support its automotive customers, Mixel achieved ISO 9001 Quality Management System Certification and ISO 26262 certification for automotive functional safety. The Mixel MIPI product development process was certified up to ASIL-D with multiple products certified up to ASIL-B. Mixel will be showcasing one automotive customer demo from GEO Semiconductor (acquired by indie Semiconductor in 2023), the GEO GW5 geometric processor, featuring Mixel’s MIPI D-PHY CSI-2 TX and MIPI D-PHY CSI-2 RX IPs.

Mixel invites you to visit booth #2616 at DAC 2025, to meet their experts and explore how these technologies can benefit your SoC designs.

If you can’t make it to DAC, you can learn more at https://mixel.com/

Also Read:

Cadence at the 2025 Design Automation Conference

proteanTecs at the 2025 Design Automation Conference

Breker Verification Systems at the 2025 Design Automation Conference


Analog Bits at the 2025 Design Automation Conference

Analog Bits at the 2025 Design Automation Conference
by Daniel Nenni on 06-10-2025 at 6:00 am


Analog Bits attends a lot of events. I know because I see them a lot in my travels. Lately, the company has been stealing the show with cutting-edge analog IP on a broad range of popular nodes and a strategy that will change the way design is done. Analog Bits is quietly rolling out a new approach to system design, one that delivers a holistic approach to power management during the architectural phase of design. The company believes this is the only way to achieve the required power and performance for demanding next-generation AI systems.

In their words, “Analog Bits is the leading energy management IP company, making power safe, reliable, observable and efficient.” There is a lot to unpack in that statement, and a lot to see at the Analog Bits booth #1320 at DAC. Let’s look at the IPs Analog Bits has available across several foundries. There are many more in the works.

TSMC 2nm

Analog Bits recently completed a successful second test chip tapeout at 2nm, but the real news is the company will be at DAC with multiple working analog IPs at 2nm. A wide-range PLL, PVT sensor, droop detector, an 18-40MHz crystal oscillator, and differential transmit (TX) and receive (RX) IP blocks will all be on display.

TSMC 3nm

Four power management IPs on TSMC’s CLN3P process will also be demonstrated. These include a scalable low-dropout (LDO) regulator, a spread-spectrum clock generation PLL supporting PCIe Gen4 and Gen5, a high-accuracy thermometer IP using Analog Bits’ patented pinless technology, and a droop detector for 3nm.

Other TSMC Nodes, 4nm to 0.18u

Analog Bits has been an OIP partner with TSMC since 2004 and has a large portfolio of clocking, sensor, SERDES and IO IPs. You can check availability at the company’s product selector website at https://www.analogbits.com/product-selector/.

GlobalFoundries 12LP, 12LP+, 22FDX

An integer PLL, FracN/SSCG PLL, PCIe Gen3 ref clock PLL, PVT sensor, and power-on reset are all available in both GF 12LP and 12LP+. A PCIe Gen4/5 ref clock PLL is available on GF 12LP+. A broad array of automotive IP is also available in GF 22FDX, including voltage regulators, power-on reset, PVT sensors and IOs.

Samsung 4LPP, 8LPU and 14LPP

An integer PLL, PVT sensor, power-on reset, and droop detector are available on Samsung 4LPP. An automotive-grade PLL, PVT sensor, and PCIe Gen4/5 SERDES are available on Samsung 8LPP/8LPU. An integer PLL, PVT sensor, and PCIe Gen4 SERDES are available on Samsung 14LPP.

About the Intelligent Power Architecture

Optimizing performance and power in an on-chip environment that is constantly changing with on-chip variation and power-induced glitches can be a real headache. Multi-die design compounds the problem across many chiplets.

The Analog Bits view is that this problem cannot be solved as an afterthought. Plugging in optimized IP or modifying software late in the design process won’t work. The company believes that developing a holistic approach to power management during the architectural phase of the project is the answer.

So, Analog Bits is rolling out its Intelligent Power Architecture initiative. There is a lot of IP and know-how that work together to make this a reality. If power optimization is a challenge, you should stop by booth #1320 at DAC and see what solutions are available from Analog Bits.

To Learn More

You can find extensive coverage of Analog Bits on SemiWiki here.  You can also visit the company’s website to dig deeper. See you at DAC.

Also Read:

Analog Bits Steals the Show with Working IP on TSMC 3nm and 2nm and a New Design Strategy

2025 Outlook with Mahesh Tirupattur of Analog Bits

Analog Bits Builds a Road to the Future at TSMC OIP


SoC Front-end Build and Assembly

SoC Front-end Build and Assembly
by Daniel Payne on 06-09-2025 at 10:00 am


Modern SoCs can be complex with hundreds to thousands of IP blocks, so there’s an increasing need to have a front-end build and assembly methodology in place, eliminating manual steps and error-prone approaches. I’ve been writing about an EDA company that focuses on this area for design automation, Defacto Technologies, and we met by video to get an update on their latest release of SoC Compiler, v11.

With SoC Compiler, an architect or RTL designer can integrate all of their IP, auto-connect some blocks, define which blocks should be connected, and create a database for simulation and logic synthesis tools. Both the top level and subsystems can be built, or you can easily restructure your design before sending it to synthesis. Using SoC Compiler ensures that design collateral such as UPF, SDC and IP-XACT stays coherent with RTL. Here’s what the design flow looks like with SoC Compiler.

Another use of the Defacto tool is when physical implementation needs to be linked to RTL pre-synthesis. More precisely, when place and route of all the IP blocks doesn’t fit within the area goal, you capture the back-end requirements and create physically-aware RTL to improve PPA during synthesis; the tool also has power and clock domain awareness. When building an SoC it’s important to keep all of the formats coherent: IP-XACT, SDC, UPF, RTL. Using a tool to maintain coherence saves time by avoiding manual mistakes and miscommunications.

In the new v11 release there has been a huge improvement in runtime performance: customers report an 80X speed-up to generate new configurations. This dramatic speed improvement means that you can try out several configurations per day, resulting in faster time to reach PPA goals. What used to take 3-4 hours to run now takes just minutes.
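The arithmetic behind that claim is straightforward (our back-of-the-envelope numbers, not Defacto’s):

```python
# An 80x speed-up turns a 3-4 hour configuration build into a few minutes,
# short enough to iterate several times in a working day.
SPEEDUP = 80

for hours in (3, 4):
    minutes = hours * 60 / SPEEDUP
    print(f"{hours}h run at {SPEEDUP}x -> {minutes:.2f} min")
```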

One Defacto customer had an SoC design with 925 IP blocks, consisting of 4,900 instances, 5k bus interface connections, and 65k ad hoc connections, where a complete integration ran in under one hour.

v11 includes IP-XACT support and management of TGI, vendor extensions, and multi-view. The latest UPF 3.1 is supported. IP-XACT improvements include support for parameterized add_connection and Insert IP-XACT Bus Interface (IIBI).

There are even some new AI-based features that improve tool usability and code generation tasks. You can use your own LLMs or engines, and there’s no requirement to train the AI features.

Users of SoC Compiler can run the tool from the command line, the GUI, or via an API in Tcl, Python or C++ code. Defacto has seen customers use the tool in diverse application areas: HPC, security, automotive, IoT, AI. The more IP blocks in your SoC project, the larger the benefit of using SoC Compiler. Take any existing EDA tool flow and add in the Defacto tool to get more productive.

Summary

During the past 17 years the engineering team at Defacto has released 11 versions of the SoC Compiler tool to help system architects, RTL designers and DV teams become more efficient during the chip assembly process. I plan to visit Defacto at DAC in booth 1527 on Monday, June 23 to hear more from a customer presentation about using v11.



Siemens EDA Outlines Strategic Direction for an AI-Powered, Software-Defined, Silicon-Enabled Future

Siemens EDA Outlines Strategic Direction for an AI-Powered, Software-Defined, Silicon-Enabled Future
by Kalar Rajendiran on 06-09-2025 at 6:00 am


In a keynote delivered at this year’s Siemens EDA User2User event, CEO Mike Ellow presented a focused vision for the evolving role of electronic design automation (EDA) within the broader context of global technology shifts. The session covered Siemens EDA’s current trajectory, market strategy, and the changing landscape of semiconductor and systems design. Since Mentor Graphics became part of Siemens AG, the User2User event has become the annual opportunity to gain holistic insights into the company’s performance and strategic direction.

Sustained Growth and Strategic Investment

Siemens EDA has demonstrated strong growth over the past two years, both in revenue and market share. The company has responded by increasing R&D investment and expanding its portfolio. Notably, over 80% of new hires in fiscal year 2024 were placed in R&D roles, underscoring a strategic emphasis on product and technology development.

This growth comes during a period of industry consolidation and transformation. Without its own silicon IP offerings, Siemens is reinforcing its position around full-flow EDA, advanced simulation, and systems engineering. These areas are seen as key differentiators in a market where integration across domains is increasingly essential.

Extending Beyond Traditional EDA

Mike outlined Siemens’ expanding footprint into areas traditionally considered outside the core EDA domain. The $10.5 billion acquisition of Altair, a multiphysics simulation company, along with strategic moves into mathematical modeling, reflects a long-term strategy aimed at enabling cross-domain digital engineering. These capabilities are becoming increasingly important as products evolve into complex cyber-physical systems.

The company’s parent, Siemens AG, continues to invest heavily in digitalization, simulation, and lifecycle solutions. EDA now plays a central role in this technology stack, bridging the gap between silicon and the broader systems in which it operates.

Software-Defined Systems and AI as Central Drivers

At the heart of Siemens’ vision is the recognition that software is now the primary driver of differentiation. This shift means traditional hardware-led design processes must be restructured. The industry is moving toward a software-defined model, where silicon must be architected to support flexible, updatable, software-driven functionality.

This transition includes integrating AI directly into the design process—both as a capability within the tools and as a requirement for the end products. AI is accelerating demand for compute and increasing design complexity, but it also enables new methods of automation in verification, synthesis, and optimization. Siemens EDA is investing on both fronts: helping customers build silicon for AI, while embedding AI into its own design tool flows.

Multi-Domain Digital Twins

In today’s cyber-physical products—such as electric vehicles or industrial control systems—software and hardware must co-evolve in lockstep. The traditional handoff model, where completed hardware designs are passed to software teams, often results in inefficiencies and functional mismatches.

Instead, Siemens is promoting the use of multi-domain digital twins—integrated system models that span electrical, mechanical, manufacturing, and lifecycle domains. These models enable real-time collaboration and help prevent costly late-stage trade-offs. For example, a software update could inadvertently impact braking, weight distribution, and overall performance, resulting in a significant drop in range. A tightly coupled digital twin helps identify and mitigate such cascading effects before deployment.

Silicon Lifecycle Management and Embedded Monitoring

Beyond early design, Siemens is advancing silicon lifecycle management (SLM) by embedding monitors directly into chips to collect real-world operational data throughout their lifespan. This telemetry, feeding continuously into the digital twin, enables predictive maintenance, lifecycle optimization, and performance tuning as systems age.

This approach transforms silicon from a static component into a dynamic asset. Over-the-air updates, anomaly detection, and usage-aware software adaptation become feasible, improving product reliability and long-term value.

AI Infrastructure and Secure Data Lakes

To manage the escalating complexity of software-defined, AI-powered electronics, Siemens is building a robust AI infrastructure anchored in secure data lakes. These repositories aggregate verified design, simulation, and test data while maintaining strict access control—crucial for IP protection.

Domain-specific large language models (LLMs) and AI agents are being trained on this data to automate tasks such as script generation, testbench development, and design space exploration. Siemens is developing a unified AI platform to further support automation, decision-making, and cross-domain intelligence throughout the design lifecycle. The platform will be formally announced in the months ahead.

3D IC, Advanced Packaging, and Enterprise-Scale EDA

A key focus is the rise of 3D ICs and heterogeneous integration, from chiplets to PCB-level packaging. Siemens is enhancing its toolsets to support the convergence of digital and analog design, using AI-driven workflows to increase scalability and accuracy in these complex architectures.

These initiatives support Siemens’ broader push toward enterprise-scale EDA—democratizing access to advanced design tools through cloud platforms. These environments empower distributed teams, including less-experienced engineers, to collaborate on sophisticated designs. AI-powered automation bridges skills gaps, accelerates time-to-market, and enhances design quality.

Navigating Geopolitics and Sustainability

Mike also addressed external forces reshaping the semiconductor industry, including geopolitical pressures and the growing need for sustainability. Regionalization is accelerating, as countries invest in domestic design and manufacturing to mitigate supply chain risks and safeguard IP.

Meanwhile, AI and ubiquitous connectivity are driving compute demands beyond traditional energy efficiency gains. Siemens EDA is responding with low-power design methodologies, energy-efficient architectures, and system-wide optimization strategies that combine AI with simulation to reduce power consumption.

Summary

The central message of the keynote was that the future of electronics is AI-powered, software-defined, and silicon-enabled. For EDA providers, this means going beyond traditional design boundaries toward a full-stack, lifecycle-aware development model that integrates software, systems, and silicon from the outset.

Siemens EDA is positioning itself as a leader in this transformation—through comprehensive digital twins, embedded silicon lifecycle management, secure AI infrastructure, and cloud-enabled, democratized design platforms.

Also Read:

EDA AI agents will come in three waves and usher us into the next era of electronic design

Safeguard power domain compatibility by finding missing level shifters

Metal fill extraction: Breaking the speed-accuracy tradeoff


Cadence at the 2025 Design Automation Conference

Cadence at the 2025 Design Automation Conference
by Daniel Nenni on 06-08-2025 at 10:00 am

Cadence, a DAC 2025 industry sponsor, will exhibit in booth 1609 at the 62nd Design Automation Conference at San Francisco’s Moscone West Convention Center.

Highlights:

Paul Cunningham, SVP and GM of the System Verification Group, Cadence, will speak at Cooley’s DAC Troublemaker Panel. This discussion will be an open Q&A covering interesting and even controversial EDA topics. Monday, June 23, 3:00pm – 4:00pm, DAC Pavilion, Exhibit Hall, Level 2

Cadence will be at the DAC Chiplet Pavilion hosted by EE Times on Level 2, Exhibit Hall Booth 2308:

David Glasco, VP of the Compute Solutions Group, Cadence, will participate in a panel discussion, “Developing the Chiplet Economy.” The commercial chiplet ecosystem is rapidly evolving, driven by the need for greater scalability, performance, and cost efficiency. However, its growth is challenged by the lack of standardized interfaces, industry-wide collaboration, and the complexity of integrating chiplets from multiple vendors. This session will explore the readiness of advanced packaging technologies, the role of design tool vendors, silicon makers, and IP providers, and the collaborative efforts required to establish a thriving chiplet economy. Tuesday, June 24, 2:00pm – 2:55pm.

Brian Karguth, distinguished engineer, Cadence, will present “Cadence SoC Cockpit: Full Spectrum Automation for Chiplet Development.” The semiconductor industry is undergoing a transformation from traditional monolithic system-on-chip (SoC) architectures to modular, chiplet-based designs. This strategic shift is essential to mitigate the complexity of scaling designs, optimize yields, and address rising fabrication costs. To meet these challenges, Cadence offers a full set of chiplet development solutions, including the new Cadence SoC Cockpit, which aims to streamline and optimize the development of next-generation chiplet and system-in-package (SiP) designs. Learn about Cadence SoC Cockpit and its use in accelerating SoC designs. Tuesday, June 24, 3:50pm – 4:10pm.

Powering the Future: Mastering IEEE 2416 System-Level Power Modeling Standard for Low-Power AI and Beyond: Daniel Cross, senior principal solutions engineer, Cadence, will present a tutorial providing attendees with a comprehensive understanding of the IEEE 2416 standard, which is used for system-level power modeling in the design and analysis of integrated circuits and systems. Participants will gain the practical knowledge needed to implement and use the standard effectively. The tutorial will highlight the pressing need for low-power design methodologies, particularly in cutting-edge fields like AI, where computational demands are high. Sunday, June 22, 9:00am – 12:30pm.

Vinod Kariat, CVP and GM of the Custom Products Group, Cadence, will participate in a panel discussion, “The Renaissance of EDA Startups,” on Tuesday, June 24, 2:30pm – 3:15pm.

Cadence will present a series of posters with GlobalFoundries, Intel, IBM, NXP, Samsung, and STMicroelectronics on Tuesday, June 24, 5:00pm – 6:00pm.

A complete list of Cadence activities at DAC can be found at Cadence @ Conference – Design Automation Conference 2025.

Cadence recruiters will be at the DAC Career Development Day on Tuesday, June 24, 10:00am – 3:30pm, inside the entrance of the Exhibit Hall on Level 1. Members of the DAC Community who are considering a job change or a new career opportunity are encouraged to complete an application and upload a résumé/CV, which will be shared in advance with participating employers. Attendees may stop by at any time on Tuesday between 10:00am and 3:30pm to speak with employers.

To arrange a meeting with Cadence at DAC 2025: REQUEST MEETING

Also Read:

Verific Design Automation at the 2025 Design Automation Conference

ChipAgent AI at the 2025 Design Automation Conference

proteanTecs at the 2025 Design Automation Conference

Breker Verification Systems at the 2025 Design Automation Conference


Verific Design Automation at the 2025 Design Automation Conference

Verific Design Automation at the 2025 Design Automation Conference
by Lauro Rizzatti on 06-08-2025 at 8:00 am


Rick Carlson, Verific Design Automation’s Vice President of Sales, is an EDA trends spotter. I was reminded of his prescience when he recently called to catch up and talk about Verific’s role as provider of front-end platforms powering an emerging EDA market.

Verific, he said, is joining forces with a group of well-funded startups using AI technology to eliminate error-prone repetitive tasks for more efficient and productive chip design. “We’re in a new space where no one is sure of the outcome or the impact that AI is going to have on chip design. We know there are going to be some significant improvements in productivity. It’s going to be an amazing foundation.”

I was intrigued and wanted to learn more. Rick set up a call for us to talk with Ann Wu, CEO of startup Silimate, an engaging and articulate spokesperson for this new market. Silimate, one of the first companies to market, is developing a co-pilot (chat-based GenAI) for chip and IP designers to help them find and fix functional and PPA issues. Impressively, it is the first EDA startup to get funding from Y Combinator, a tech startup accelerator. Silimate is also a Verific customer.

Ann was formerly a hardware designer at Apple, a departure from the traditional EDA developer profile. Like Ann, other founders of many of the new breed of EDA startups were formerly designers from Arm, NVIDIA, SpaceX, Stanford and Synopsys.

While doing a startup was always part of her game plan, Ann’s motivation for becoming an entrepreneur came from frustrations with the chip design flow and the availability of new technology to solve some of its most pressing issues.

AI, Ann acknowledged, may provide a solution to some of the problems she encountered and the reason behind the excitement and appetite about AI for EDA applications. “Traditional EDA solutions solve isolated problems through heuristic algorithms. There’s a high volume of gray area in between these well-defined boxes of inputs and outputs that had previously been unsolvable. Now with AI, there is finally a way to sift through and glean patterns, insights and actions from these gray areas.”

We turn to the benefits of EDA using AI technology. “Having been in the industry as long as I have,” says Rick, “I know the challenges are daunting, especially when you consider that our customers want to avoid as much risk as possible. They want to improve the speed to get chips out, but they are all about de-risking everything.”

I ask Ann whether adding AI is only a productivity gain. “Productivity as a keyword is not compelling,” she says. It’s an indirect measure of the true ROI, she notes, adding that what engineering directors and managers ultimately look for is reduced time to tapeout while achieving the target feature set.

“What we are doing has been time-tested,” answered Rick when asked why these startups are going to Verific. “We recently had a random phone call from a researcher at IBM. He already knew that IBM was using Verific in chip design. He said, ‘I know that we need to deal with language and Verific is the gold standard.’

“We’re lucky we’ve just been around long enough. Nobody else in their right mind would want to do what we’ve done because it’s painstaking. I wouldn’t say boring, but it’s not as much fun as what Ann is doing, that’s for sure.”

As we move on to talk about funding and opportunities, Rick jumps in. “When people look at an industry, they want to know the leaders and immediately jump to the discussion of revenue and maturity. EDA is a mature industry and a three- or four-horse race. I think there are more horses at the starting line today that have the potential to make a dramatic impact.

“We’ve got an incredible amount of funds we can throw at this, assuming that we can achieve what we want to achieve. This is not something that just came along. This is a seismic shift in the commitment to use all the talent, tools, technology and money to make this happen.

“To me, it’s not a three-horse race—maybe it’s a 10-horse race. We really won’t know until we look back in another six months or a year from now at what that translates to. I am betting on it because the people doing this for the most part are not professional CAD developers. They looked at the problem and think they can make a dent.”

DAC Registration is Open

Notes:

Verific will exhibit at the 62nd Design Automation Conference (DAC) in Booth #1316 at the Moscone Center in San Francisco from June 23–25.

Silimate’s Akash Levy, Founder and CTO, will participate in a panel titled “AI-Enabled EDA for Chip Design” at 10:30am PT Tuesday, June 24, during DAC.

Also Read:

Breker Verification Systems at the 2025 Design Automation Conference

The SemiWiki 62nd DAC Preview


Video EP8: How Defacto Technologies Helps Customers Build Complex SoC Designs

Video EP8: How Defacto Technologies Helps Customers Build Complex SoC Designs
by Daniel Nenni on 06-06-2025 at 10:00 am

In this episode of the Semiconductor Insiders video series, Dan is joined by Chouki Aktouf, CEO and Founder of Defacto Technologies. Dan explores the challenges of building complex SoCs with Chouki, who describes the difficulty of managing complexity at the front end of the process while staying within PPA requirements and still delivering a quality design as quickly and cost-effectively as possible.

Chouki describes how Defacto’s SoC Compiler addresses the challenges discussed along with other important items such as design reuse. He provides details about how Defacto is helping customers of all sizes to optimize the front end of the design process quickly and efficiently so the resulting chip meets all requirements.

Contact Defacto

The views, thoughts, and opinions expressed in these videos belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Podcast EP290: Navigating the Shift to Quantum Safe Security with PQShield’s Graeme Hickey

Podcast EP290: Navigating the Shift to Quantum Safe Security with PQShield’s Graeme Hickey
by Daniel Nenni on 06-06-2025 at 6:00 am

Dan is joined by Graeme Hickey, vice president of engineering at PQShield. Graeme has over 25 years of experience in the semiconductor industry creating cryptographic IP and security subsystems for secure products. Formerly at NXP Semiconductors, he was senior manager of the company’s Secure Hardware Subsystems group, responsible for developing security and cryptographic solutions for an expansive range of business lines.

Dan explores the changes ahead in post-quantum security with Graeme, who explains what they mean for chip designers over the next five to ten years. Graeme stresses that time is of the essence: chip designers should start implementing current standards now to be ready for the requirements arriving in 2030.

Graeme describes the ways PQShield is helping chip designers prepare for the post-quantum era now. One example he cites is the PQPlatform-TrustSys, a complete PQC-focused security system that provides architects with the tools needed for the quantum age and beyond. Graeme also discusses the impact of the PQShield NIST-ready test chip. Graeme describes what chip designers should expect across the supply chain as we enter the post-quantum era.

Contact PQShield

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.