
SoC Front-end Build and Assembly

by Daniel Payne on 06-09-2025 at 10:00 am


Modern SoCs are complex, with hundreds to thousands of IP blocks, so there’s an increasing need for a front-end build and assembly methodology that eliminates manual steps and error-prone approaches. I’ve been writing about Defacto Technologies, an EDA company that focuses on design automation in this area, and we met by video to get an update on the latest release of SoC Compiler, v11.

With SoC Compiler, an architect or RTL designer can integrate all of their IP, auto-connect some blocks, define which blocks should be connected, and create a database for simulation and logic synthesis tools. Both the top level and subsystems can be built, or you can easily restructure your design before sending it to synthesis. Using SoC Compiler ensures that design collateral such as UPF, SDC and IP-XACT stays coherent with the RTL. Here’s what the design flow looks like with SoC Compiler.

Another use of the Defacto tool is when physical implementation needs to be linked to RTL pre-synthesis. More precisely, when place and route of all the IP blocks doesn’t fit within the area goal, you capture the back-end requirements and create physically aware RTL to improve PPA during synthesis; the tool also has power and clock domain awareness. When building an SoC it’s important to keep all of the formats coherent: IP-XACT, SDC, UPF and RTL. Using a tool to maintain coherence saves time by avoiding manual mistakes and miscommunications.
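The kind of coherence check described above can be sketched in a few lines of Python. The function and port names below are purely illustrative assumptions, not Defacto’s actual data model; the point is simply that each collateral view (RTL, IP-XACT, UPF, SDC) exposes a port list that can be diffed against the others.

```python
# Illustrative-only coherence check between an RTL port list and an
# IP-XACT view; names are hypothetical, not any real tool's data model.

def coherence_report(rtl_ports, ipxact_ports):
    """Return ports present in one view but missing from the other."""
    rtl, ipx = set(rtl_ports), set(ipxact_ports)
    return {
        "missing_in_ipxact": sorted(rtl - ipx),
        "missing_in_rtl": sorted(ipx - rtl),
    }

report = coherence_report(
    rtl_ports=["clk", "rst_n", "irq", "axi_s"],
    ipxact_ports=["clk", "rst_n", "axi_s"],
)
print(report)  # {'missing_in_ipxact': ['irq'], 'missing_in_rtl': []}
```

A real flow would run the same kind of diff across all four formats on every regeneration, which is exactly the manual cross-checking that an assembly tool automates.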

In the new v11 release there has been a huge improvement in runtime performance, with customers reporting an 80X speed-up in generating new configurations. This dramatic speed improvement means you can try out several configurations per day, resulting in a faster path to PPA goals. What used to take 3-4 hours to run now takes just minutes.

One customer of Defacto had an SoC design with 925 IP blocks, consisting of 4,900 instances, 5k bus interface connections, and 65k ad hoc connections; the complete integration ran in under one hour.

v11 includes IP-XACT support and management of TGI, vendor extensions, and multi-view. The latest UPF 3.1 is supported. Improvements to IP-XACT include support for parameterized add_connection and Insert IP-XACT Bus Interface (IIBI).

There are even some new AI-based features that improve tool usability and code-generation tasks. You can use your own LLMs or engines, and there’s no requirement to train the AI features.

Users of SoC Compiler can run the tool from the command line, from the GUI, or through an API in Tcl, Python or C++. Defacto has seen customers use the tool in diverse application areas: HPC, security, automotive, IoT and AI. The more IP blocks in your SoC project, the greater the benefits of using SoC Compiler. Take any existing EDA tool flow and add in the Defacto tool to become more productive.
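As a rough illustration of what scripted assembly looks like, here is a minimal, self-contained Python sketch of name-based auto-connection between IP blocks. All class and function names here are hypothetical; this is not the SoC Compiler API, just the general idea of matching ports by name across blocks during integration.

```python
# Hypothetical sketch of name-based auto-connection during SoC assembly.
# IPBlock and auto_connect are invented for illustration, not Defacto's API.

class IPBlock:
    def __init__(self, name, ports):
        self.name = name
        self.ports = set(ports)

def auto_connect(blocks):
    """Connect ports that share the same name across blocks."""
    connections = []
    seen = {}  # port name -> first block that exposed it
    for block in blocks:
        for port in block.ports:
            if port in seen:
                connections.append((seen[port], block.name, port))
            else:
                seen[port] = block.name
    return sorted(connections)

cpu = IPBlock("cpu0", ["clk", "rst_n", "axi_m"])
mem = IPBlock("sram0", ["clk", "rst_n", "axi_m"])
print(auto_connect([cpu, mem]))  # three name-matched connections
```

In a real flow the same scripted loop would scale to thousands of instances, with explicit `add_connection`-style overrides for the blocks that should not be wired up automatically.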

Summary

During the past 17 years the engineering team at Defacto has released 11 versions of the SoC Compiler tool to help system architects, RTL designers and DV teams become more efficient during the chip assembly process. I plan to visit Defacto at DAC in booth 1527 on Monday, June 23 to hear more from a customer presentation about using v11.

Related Blogs


Siemens EDA Outlines Strategic Direction for an AI-Powered, Software-Defined, Silicon-Enabled Future

by Kalar Rajendiran on 06-09-2025 at 6:00 am


In a keynote delivered at this year’s Siemens EDA User2User event, CEO Mike Ellow presented a focused vision for the evolving role of electronic design automation (EDA) within the broader context of global technology shifts. The session covered Siemens EDA’s current trajectory, market strategy, and the changing landscape of semiconductor and systems design. Since Mentor Graphics became part of Siemens AG, the User2User event has become the annual opportunity to gain holistic insights into the company’s performance and strategic direction.

Sustained Growth and Strategic Investment

Siemens EDA has demonstrated strong growth over the past two years, both in revenue and market share. The company has responded by increasing R&D investment and expanding its portfolio. Notably, over 80% of new hires in fiscal year 2024 were placed in R&D roles, underscoring a strategic emphasis on product and technology development.

This growth comes during a period of industry consolidation and transformation. Without its own silicon IP offerings, Siemens is reinforcing its position around full-flow EDA, advanced simulation, and systems engineering. These areas are seen as key differentiators in a market where integration across domains is increasingly essential.

Extending Beyond Traditional EDA

Mike outlined Siemens’ expanding footprint into areas traditionally considered outside the core EDA domain. The $10.5 billion acquisition of Altair, a multiphysics simulation company, along with strategic moves into mathematical modeling, reflects a long-term strategy aimed at enabling cross-domain digital engineering. These capabilities are becoming increasingly important as products evolve into complex cyber-physical systems.

The company’s parent, Siemens AG, continues to invest heavily in digitalization, simulation, and lifecycle solutions. EDA now plays a central role in this technology stack, bridging the gap between silicon and the broader systems in which it operates.

Software-Defined Systems and AI as Central Drivers

At the heart of Siemens’ vision is the recognition that software is now the primary driver of differentiation. This shift means traditional hardware-led design processes must be restructured. The industry is moving toward a software-defined model, where silicon must be architected to support flexible, updatable, software-driven functionality.

This transition includes integrating AI directly into the design process—both as a capability within the tools and as a requirement for the end products. AI is accelerating demand for compute and increasing design complexity, but it also enables new methods of automation in verification, synthesis, and optimization. Siemens EDA is investing on both fronts: helping customers build silicon for AI, while embedding AI into its own design tool flows.

Multi-Domain Digital Twins

In today’s cyber-physical products—such as electric vehicles or industrial control systems—software and hardware must co-evolve in lockstep. The traditional handoff model, where completed hardware designs are passed to software teams, often results in inefficiencies and functional mismatches.

Instead, Siemens is promoting the use of multi-domain digital twins—integrated system models that span electrical, mechanical, manufacturing, and lifecycle domains. These models enable real-time collaboration and help prevent costly late-stage trade-offs. For example, a software update could inadvertently impact braking, weight distribution, and overall performance, resulting in a significant drop in range. A tightly coupled digital twin helps identify and mitigate such cascading effects before deployment.

Silicon Lifecycle Management and Embedded Monitoring

Beyond early design, Siemens is advancing silicon lifecycle management (SLM) by embedding monitors directly into chips to collect real-world operational data throughout their lifespan. This telemetry, feeding continuously into the digital twin, enables predictive maintenance, lifecycle optimization, and performance tuning as systems age.

This approach transforms silicon from a static component into a dynamic asset. Over-the-air updates, anomaly detection, and usage-aware software adaptation become feasible, improving product reliability and long-term value.

AI Infrastructure and Secure Data Lakes

To manage the escalating complexity of software-defined, AI-powered electronics, Siemens is building a robust AI infrastructure anchored in secure data lakes. These repositories aggregate verified design, simulation, and test data while maintaining strict access control—crucial for IP protection.

Domain-specific large language models (LLMs) and AI agents are being trained on this data to automate tasks such as script generation, testbench development, and design space exploration. Siemens is developing a unified AI platform to further support automation, decision-making, and cross-domain intelligence throughout the design lifecycle. The platform will be formally announced in the months ahead.

3D IC, Advanced Packaging, and Enterprise-Scale EDA

A key focus is the rise of 3D ICs and heterogeneous integration, from chiplets to PCB-level packaging. Siemens is enhancing its toolsets to support the convergence of digital and analog design, using AI-driven workflows to increase scalability and accuracy in these complex architectures.

These initiatives support Siemens’ broader push toward enterprise-scale EDA—democratizing access to advanced design tools through cloud platforms. These environments empower distributed teams, including less-experienced engineers, to collaborate on sophisticated designs. AI-powered automation bridges skills gaps, accelerates time-to-market, and enhances design quality.

Navigating Geopolitics and Sustainability

Mike also addressed external forces reshaping the semiconductor industry, including geopolitical pressures and the growing need for sustainability. Regionalization is accelerating, as countries invest in domestic design and manufacturing to mitigate supply chain risks and safeguard IP.

Meanwhile, AI and ubiquitous connectivity are driving compute demands beyond traditional energy efficiency gains. Siemens EDA is responding with low-power design methodologies, energy-efficient architectures, and system-wide optimization strategies that combine AI with simulation to reduce power consumption.

Summary

The central message of the keynote was that the future of electronics is AI-powered, software-defined, and silicon-enabled. For EDA providers, this means going beyond traditional design boundaries toward a full-stack, lifecycle-aware development model that integrates software, systems, and silicon from the outset.

Siemens EDA is positioning itself as a leader in this transformation—through comprehensive digital twins, embedded silicon lifecycle management, secure AI infrastructure, and cloud-enabled, democratized design platforms.

Also Read:

EDA AI agents will come in three waves and usher us into the next era of electronic design

Safeguard power domain compatibility by finding missing level shifters

Metal fill extraction: Breaking the speed-accuracy tradeoff


Cadence at the 2025 Design Automation Conference

by Daniel Nenni on 06-08-2025 at 10:00 am

Cadence, a DAC 2025 industry sponsor, will exhibit in booth 1609 at the 62nd Design Automation Conference at San Francisco’s Moscone West Convention Center.

Highlights:

Paul Cunningham, SVP and GM of the System Verification Group, Cadence, will speak at Cooley’s DAC Troublemaker Panel. This discussion will be an open Q&A covering interesting and even controversial EDA topics. Monday, June 23, 3:00pm – 4:00pm, DAC Pavilion, Exhibit Hall, Level 2

Cadence will be at the DAC Chiplet Pavilion hosted by EE Times on Level 2, Exhibit Hall Booth 2308:

David Glasco, VP of the Compute Solutions Group, Cadence, will participate in a panel discussion, “Developing the Chiplet Economy.” The commercial chiplet ecosystem is rapidly evolving, driven by the need for greater scalability, performance, and cost efficiency. However, its growth is challenged by the lack of standardized interfaces, industry-wide collaboration, and the complexity of integrating chiplets from multiple vendors. This session will explore the readiness of advanced packaging technologies, the role of design tool vendors, silicon makers, and IP providers, and the collaborative efforts required to establish a thriving chiplet economy. Tuesday, June 24, 2:00pm – 2:55pm.

Brian Karguth, distinguished engineer, Cadence, will present “Cadence SoC Cockpit: Full Spectrum Automation for Chiplet Development.” The semiconductor industry is undergoing a transformation from traditional monolithic system-on-chip (SoC) architectures to modular, chiplet-based designs. This strategic shift is essential to mitigate complexities associated with scaling designs, optimize yields, and address rising fabrication costs driven by increasing transistor costs. To address these challenges, Cadence is offering a full set of chiplet development solutions, including our new Cadence SoC Cockpit, which aims to streamline and optimize the development of next-generation chiplet and system in package (SiP) designs. Learn about Cadence SoC Cockpit and its use for accelerating SoC designs. Tuesday, June 24, 3:50pm – 4:10pm.

Powering the Future: Mastering IEEE 2416 System-Level Power Modeling Standard for Low-Power AI and Beyond: Daniel Cross, senior principal solutions engineer, Cadence, will present a tutorial that will provide attendees with a comprehensive understanding of the IEEE 2416 standard, which is used for system-level power modeling in the design and analysis of integrated circuits and systems. Participants will gain the practical knowledge necessary to implement and utilize the standard effectively. The tutorial will highlight the pressing need for low-power design methodologies, particularly in cutting-edge fields like AI, where computational demands are high. Sunday, June 22, 9:00am – 12:30pm.

Vinod Kariat, CVP and GM of the Custom Products Group, Cadence, will participate in a panel discussion, “The Renaissance of EDA Startups,” on Tuesday, June 24, 2:30pm – 3:15pm.

Cadence will present a series of posters with GlobalFoundries, Intel, IBM, NXP, Samsung, and STMicroelectronics on Tuesday, June 24, 5:00pm – 6:00pm.

A complete list of Cadence activities at DAC can be found at Cadence @ Conference – Design Automation Conference 2025.

Cadence recruiters will be at the DAC Career Development Day on Tuesday, June 24, 10:00am – 3:30pm, inside the entrance of the Exhibit Hall on Level 1. Members of the DAC Community who are considering a job change or a new career opportunity are encouraged to complete an application and upload a résumé/CV, which will be shared in advance with participating employers. Attendees may stop by at any time on Tuesday between 10:00am and 3:30pm to speak with employers.

To arrange a meeting with Cadence at DAC 2025: REQUEST MEETING

Also Read:

Verific Design Automation at the 2025 Design Automation Conference

ChipAgent AI at the 2025 Design Automation Conference

proteanTecs at the 2025 Design Automation Conference

Breker Verification Systems at the 2025 Design Automation Conference


Verific Design Automation at the 2025 Design Automation Conference

by Lauro Rizzatti on 06-08-2025 at 8:00 am


Rick Carlson, Verific Design Automation’s Vice President of Sales, is an EDA trends spotter. I was reminded of his prescience when he recently called to catch up and talk about Verific’s role as provider of front-end platforms powering an emerging EDA market.

Verific, he said, is joining forces with a group of well-funded startups using AI technology to eliminate error-prone repetitive tasks for more efficient and productive chip design. “We’re in a new space where no one is sure of the outcome or the impact that AI is going to have on chip design. We know there are going to be some significant improvements in productivity. It’s going to be an amazing foundation.”

I was intrigued and wanted to learn more. Rick set up a call for us to talk with Ann Wu, CEO of startup Silimate, an engaging and articulate spokesperson for this new market. Silimate, one of the first companies to market, is developing a co-pilot (chat-based GenAI) for chip and IP designers to help them find and fix functional and PPA issues. Impressively, it is the first EDA startup to get funding from Y Combinator, a tech startup accelerator. Silimate is also a Verific customer.

Ann was formerly a hardware designer at Apple, a departure from the traditional EDA developer profile. Like Ann, many founders of the new breed of EDA startups were formerly designers at Arm, NVIDIA, SpaceX, Stanford and Synopsys.

While doing a startup was always part of her game plan, Ann’s motivation for becoming an entrepreneur came from frustrations with the chip design flow and the availability of new technology to solve some of its pressing issues.

AI, Ann acknowledged, may provide a solution to some of the problems she encountered, and that is the reason behind the excitement and appetite for AI in EDA applications. “Traditional EDA solutions solve isolated problems through heuristic algorithms. There’s a high volume of gray area in between these well-defined boxes of inputs and outputs that had previously been unsolvable. Now with AI, there is finally a way to sift through and glean patterns, insights and actions from these gray areas.”

We turned to the benefits of applying AI technology in EDA. “Having been in the industry as long as I have,” says Rick, “I know the challenges are daunting, especially when you consider that our customers want to avoid as much risk as possible. They want to improve the speed to get chips out, but they are all about de-risking everything.”

I ask Ann if adding AI is only a productivity gain. “Productivity as a keyword is not compelling.” It’s an indirect measure of the true ROI, she notes, adding that what engineering directors and managers ultimately look for is reducing the time to tape-out while achieving the target feature set.

“What we are doing has been time-tested,” answered Rick when asked why these startups are going to Verific. “We recently had a random phone call from a researcher at IBM. He already knew that IBM was using Verific in chip design. He said, ‘I know that we need to deal with language, and Verific is the gold standard.’

“We’re lucky we’ve just been around long enough. Nobody else in their right mind would want to do what we’ve done because it’s painstaking. I wouldn’t say boring, but it’s not as much fun as what Ann is doing, that’s for sure.”

As we move on to talk about funding and opportunities, Rick jumps in. “When people look at an industry, they want to know the leaders and immediately jump to the discussion of revenue and maturity. EDA is a mature industry and a three- or four-horse race. I think there are more horses at the starting line today that have the potential to make a dramatic impact.

“We’ve got an incredible amount of funds we can throw at this, assuming that we can achieve what we want to achieve. This is not something that just came along. This is a seismic shift in the commitment to use all the talent, tools, technology and money to make this happen.

“To me, it’s not a three-horse race—maybe it’s a 10-horse race. We really won’t know until we look back in another six months or a year from now at what that translates to. I am betting on it because the people doing this for the most part are not professional CAD developers. They looked at the problem and think they can make a dent.”

DAC Registration is Open

Notes:

Verific will exhibit at the 62nd Design Automation Conference (DAC) in Booth #1316 at the Moscone Center in San Francisco from June 23–25.

Silimate’s Akash Levy, Founder and CTO, will participate in a panel titled “AI-Enabled EDA for Chip Design” at 10:30am PT Tuesday, June 24, during DAC.

Also Read:

Breker Verification Systems at the 2025 Design Automation Conference

The SemiWiki 62nd DAC Preview


Video EP8: How Defacto Technologies Helps Customers Build Complex SoC Designs

by Daniel Nenni on 06-06-2025 at 10:00 am

In this episode of the Semiconductor Insiders video series, Dan is joined by Chouki Aktouf, CEO and founder of Defacto Technologies. Dan explores the challenges of building complex SoCs with Chouki, who describes the difficulty of managing complexity at the front end of the process while staying within PPA requirements and still delivering a quality design as quickly and cost-effectively as possible.

Chouki describes how Defacto’s SoC Compiler addresses the challenges discussed along with other important items such as design reuse. He provides details about how Defacto is helping customers of all sizes to optimize the front end of the design process quickly and efficiently so the resulting chip meets all requirements.

Contact Defacto

The views, thoughts, and opinions expressed in these videos belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Podcast EP290: Navigating the Shift to Quantum Safe Security with PQShield’s Graeme Hickey

by Daniel Nenni on 06-06-2025 at 6:00 am

Dan is joined by Graeme Hickey, vice president of engineering at PQShield. Graeme has over 25 years of experience in the semiconductor industry creating cryptographic IP and security subsystems for secure products. Formerly of NXP Semiconductor, he was senior manager of the company’s Secure Hardware Subsystems group responsible for developing security and cryptographic solutions for an expansive range of business lines.

Dan explores the changes ahead to address post-quantum security with Graeme, who explains what these changes mean for chip designers over the next five to ten years. Graeme notes that time is of the essence: chip designers should start implementing current standards now to be ready for the requirements in 2030, and this process will continue over the next five to ten years.

Graeme describes the ways PQShield is helping chip designers prepare for the post-quantum era now. One example he cites is the PQPlatform-TrustSys, a complete PQC-focused security system that provides architects with the tools needed for the quantum age and beyond. Graeme also discusses the impact of the PQShield NIST-ready test chip. Graeme describes what chip designers should expect across the supply chain as we enter the post-quantum era.

Contact PQShield

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


ChipAgent AI at the 2025 Design Automation Conference

by Daniel Nenni on 06-05-2025 at 10:00 am


The semiconductor world is gathering at DAC 62, and ChipAgents AI is coming ready to show why agentic AI is the missing piece in modern RTL design and verification. Whether you’re drowning in terabytes of waveform data, grinding toward 100% functional coverage, or hunting for ways to accelerate time-to-market, our sessions and live demos will give you a first-hand look at how autonomous AI agents can transform your flow.

ChipAgents AI @ DAC 62: Where Agentic AI Meets Next-Gen Verification

June 23–25, 2025 • Moscone West, San Francisco

ChipAgents Sessions

  • Mon 6/23, 10:30 a.m., Exhibitor Forum (Level 1): “Taming the Waveform Tsunami: Agentic AI for Smarter Debugging.” See Waveform Agents trace failure propagation across modules and time in seconds—no manual spelunking required. Real case studies show days-long debug cycles cut to minutes.

  • Tue 6/24, 1:45 p.m., Exhibitor Forum (Level 1): “CoverAgent: How Agentic AI Is Redefining Functional Coverage Closure.” Watch CoverAgent analyze coverage reports, infer unreachable bins, and auto-generate targeted stimuli—driving up to 80% faster closure in complex SoCs.

  • Wed 6/25, 11:15 a.m., DAC Pavilion (Level 2): “Beyond Automation: How Agentic AI Is Reinventing Chip Design & Verification.” CEO Prof. William Wang reveals how multi-agent workflows tackle constraint solving, automated debug, proactive design optimization, and more.

Tip: All three talks are designed for live Q&A—bring your toughest verification pain points.

Live Demo & 1-on-1s

Exhibition Booth #1308, Level 1, 10 a.m.–6 p.m. daily

  • Waveform Agents: Natural-language root-cause analysis on multi-TB VCD/FST dumps
  • CoverAgent: Autonomous coverage gap hunting & stimulus generation
  • ChipAgents CLI & VS Code Extension: Plug-in AI agents for Verilog, SystemVerilog, UVM

Come with your own specs, traces or coverage reports and we’ll run them live.

Why Agentic AI Now?

  • Scale: LLM-powered agents reason across RTL, waveforms, testbenches, logs, and documentation simultaneously.
  • Speed: Hypothesis-driven search slashes debug and closure cycles by orders of magnitude.
  • Explainability: Results are surfaced as step-by-step causal chains, so engineers stay in control.
  • Complementary: Works alongside existing simulators, formal tools, and waveform viewers—no rip-and-replace.

Meet the Team

  • William Wang – Founder & CEO, UCSB AI faculty
  • Zackary Glazewski – Forward-Deployed Engineering Lead
  • Mehir Arora – AI Research Engineer, Functional Coverage Specialist

They’ll be joined by the engineering crew behind our SoC-scale deployments and early-access customers.

Book a Private Briefing or Join Our Private Party

Slots fill fast during DAC week. To reserve a 30-minute roadmap briefing—or to request an invitation to our private rooftop dinner for semiconductor executives and leading engineers—visit chipagents.ai or stop by Booth #1308.

See You in San Francisco! DAC Registration is Open

If your verification team is buried under data, waveforms, coverage debt, or deadline pressure, ChipAgents AI has something you’ll want to witness live. Mark your calendar for June 23–25, swing by Booth #1308, and discover how agentic AI is turning RTL understanding from an art into a science.

About us

We are reinventing semiconductor design and verification through advanced AI agent techniques. ChipAgents AI is pioneering an AI-native approach to Electronic Design Automation (EDA), transforming how chips are designed and verified. Our flagship product, ChipAgents, aims to boost RTL design and verification productivity by 10x, driving innovation across industries with smarter, more efficient chip design.

Also Read:

AlphaDesign AI Experts Wade into Design and Verification

CEO Interview with Dr. William Wang of Alpha Design AI


proteanTecs at the 2025 Design Automation Conference

by Daniel Nenni on 06-05-2025 at 8:00 am


Discover how proteanTecs is transforming health and performance monitoring across the semiconductor lifecycle to meet the growing demands of AI and Next-Gen SoCs.

Stop by DAC booth #1616 to experience our latest technologies in action, including interactive live demos, and explore our full suite of solutions — designed to boost reliability, optimize power, and enhance product quality for next-gen AI and data-driven applications.

Don’t miss our daily in-booth theater sessions, featuring expert talks from industry leaders in ASIC design, IP, EDA and cloud infrastructure, including Arm, Andes, Samsung, Advantest, Alchip, Siemens, PDF Solutions, Teradyne, Cadence, GUC, and more! Plus, hear insights from proteanTecs’ own experts.

Interested in a deeper dive? We’re now booking private meeting room sessions tailored to your company’s needs. Learn how our cutting-edge, machine learning-powered in-system monitoring delivers unprecedented visibility into device behavior — from design to field.

During the show, we will be presenting multiple solutions, including:
  1. Power and Performance
  2. Reliability, Availability, Serviceability
  3. Functional Safety & Diagnostics
  4. Chip Production
  5. System Production
  6. Advanced Packaging

Meet us at Booth #1616

See the full booth agenda here.

Book a meeting with proteanTecs at DAC 2025

proteanTecs is the leading provider of deep data analytics for advanced electronics monitoring. Trusted by global leaders in the datacenter, automotive, communications and mobile markets, the company provides system health and performance monitoring, from production to the field. By applying machine learning to novel data created by on-chip monitors, the company’s deep data analytics solutions deliver unparalleled visibility and actionable insights—leading to new levels of quality and reliability. Founded in 2017 and backed by world-leading investors, the company is headquartered in Israel and has offices in the United States, India and Taiwan.

DAC registration is open.

Also Read:

Cut Defects, Not Yield: Outlier Detection with ML Precision

2025 Outlook with Uzi Baruch of proteanTecs

Datacenter Chipmaker Achieves Power Reduction With proteanTecs AVS Pro


Arm Reveals Zena Automotive Compute Subsystem

by Bernard Murphy on 06-05-2025 at 6:00 am


Last year Arm announced their support for standards-based virtual prototyping in automotive, along with a portfolio of new AE (automotive enhanced) cores. They also suggested that in 2025 they would follow Arm’s direction in other lines of business by offering integrated compute subsystems (CSS). Now they have delivered: the Zena CSS subsystems for automotive applications round out their automotive strategy.

The Motivation

What is the point of a CSS and why is it important for automotive? In part the motivation is the same as for CSS applications in infrastructure. The customers for these subsystems see them as necessary but not core to their brand. Complete and pre-validated subsystem IPs like Zena are an obvious win, reducing effort and time to deployment without compromising opportunities for differentiation. Automotive OEMs, Tier1s, even leading automotive semi suppliers in some instances, aren’t going to differentiate in compute subsystems. Their brand builds around AI features, sensing, IVI, control, and communication (V2X and in the car). Zena provides a jump start in designing their systems.

Arm is a good company to watch in this area because electronic/AI content is now a huge part of how automotive brands are defined, and Arm completely dominates processor usage among automakers and automotive chip suppliers. As a result, Arm sees further ahead than most when it comes to trends in automotive electronics. For example, we’re already familiar with the concept of a software-defined vehicle (SDV), supporting over-the-air (OTA) updates for maintenance and feature enhancements, orchestrating sensing and control between multiple functions across the car, and emerging potential in V2X communication. Dipti Vachani (Senior VP and GM for Automotive at Arm) says that looking forward she sees the next step being a trend toward AI-defined vehicles. This concept is worth unpacking further.

A cynic might assume “AI-defined vehicles” is just buzzword inflation, but there’s more to it than that. First, AI has become central to innovation in the modern car: how automakers differentiate and defend their brands, even how they monetize what they provide. Dipti suggests a range of emerging possibilities: in ADAS, adjusting to driver behavior and environment in real time to better support safety; in IVI, providing more personalized voice-enabled control, an important step beyond the limited voice options available today; and in vehicle control, optimizing energy consumption and vehicle dynamics based on load and road conditions. I have written separately about advances like birds-eye view with depth for finer control in autonomy when cornering, for driver and occupant monitoring systems, and for more intelligent battery management.

OK, so lots of AI capabilities in the car, but what does this have to do with Arm, especially if OEMs and Tier1s are the ones differentiating in AI? We already know that to manage the cost side of all this innovation OEMs have moved to zonal architectures: a small number of hardware components around the car rather than components everywhere. Differentiating AI models can be updated OTA as needed, which is important because AI innovation is fast and furious: what is competitive this year may look dated next year. Models must operate reliably and be updated safely and securely, with regular in-flight checking and corrective action for hardware misbehavior, and robust protection against hacking in-flight or during updates. These are all critical requirements in a car, but this management is beyond the bounds of AI.

Compute subsystems and SDV in the age of AI-defined vehicles

From what I see, safety and security are out of scope today for AI. Research in AI safety is nascent at best. AI for car-quality security is a bit more advanced, primarily for attack detection, and not yet at production level. More obviously, orchestration of functions across the car, the communication through which that orchestration must operate, actuation for mechanical functions, display functions, and many other non-AI functions are all beyond the scope of AI. Such functions, still the great majority of administrative compute in a car, must continue to be handled through software running on a backbone of zonal processors, each managed by one or more standard CPU subsystems (here Zena) front-ending the AI engines. In this context, given the cloud-based virtual software development flow Arm highlighted last year, which natively models Zena, Arm's role becomes more obvious.

Zena's role in zonal processors

Further, there are likely to be many more AI models to support in any given car than there are zonal processors. Running multiple AI models on an NPU is already possible since multi-core NPUs are now common. But which models should run when must be governed by orchestration under an OS running on a CPU subsystem. This orchestration also handles feeding data into the NPU, taking results back out to the larger system, swapping models in and out, and managing updates from the cloud, together of course with comprehensive safety and security control for the complete automotive electronic system.
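To make the orchestration role concrete, here is a toy sketch of a CPU-side scheduler that round-robins NPU time among registered AI models. Everything here is hypothetical illustration (the class and method names are mine, not any Arm or SOAFEE API); a real orchestrator would also handle model loading costs, priorities, and safety checks.

```python
from collections import deque

class NpuOrchestrator:
    """Toy round-robin orchestrator: decides which AI model occupies the
    NPU, feeds an input in, and swaps models in and out between steps."""

    def __init__(self):
        self.loaded = None    # model currently resident on the NPU
        self.ready = deque()  # models waiting for an NPU time slice

    def register(self, model_name):
        self.ready.append(model_name)

    def step(self, input_frame):
        # Swap the next waiting model onto the NPU. On real hardware a
        # model swap has a significant cost; here it is just bookkeeping.
        if self.ready:
            if self.loaded is not None:
                self.ready.append(self.loaded)
            self.loaded = self.ready.popleft()
        # "Run" the resident model and return its result to the system.
        return (self.loaded, input_frame)

orch = NpuOrchestrator()
for m in ["adas_bev", "occupant_monitor", "battery_mgmt"]:
    orch.register(m)
print([orch.step(f)[0] for f in range(4)])
# → ['adas_bev', 'occupant_monitor', 'battery_mgmt', 'adas_bev']
```

The point of the sketch is simply that model multiplexing is an OS/CPU-subsystem responsibility sitting in front of the NPU, exactly the role a compute subsystem like Zena plays.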

Safety in advanced automotive electronics has already evolved to ASIL-B or ASIL-D levels, implemented through ASIL-D certified safety islands which regularly monitor other functions in the processor: isolating a function, running self-test, rebooting if necessary, then bringing that function back online. Or perhaps shutting down a broken subsystem and triggering a driver/dealer warning to be addressed in a service call. Security is even more rigorous: secure boot, state-of-the-art encryption, secure enclaves, authentication for downloads, and so on.
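The isolate/self-test/reboot/recover sequence can be sketched as a simple state machine. This is purely illustrative: real safety islands are certified hardware and firmware, and the `self_test`/`reboot` hooks below are hypothetical stand-ins.

```python
def monitor(function, max_reboots=1):
    """Toy safety-island check: self-test an isolated function, reboot it
    if unhealthy, then either bring it back online or shut it down and
    flag a service warning."""
    for attempt in range(max_reboots + 1):
        if function.self_test():
            return "online"       # healthy: bring back online
        if attempt < max_reboots:
            function.reboot()     # attempt recovery before giving up
    return "shutdown: service warning issued"

class FlakyFunction:
    """Simulated function that recovers after one reboot."""
    def __init__(self):
        self.healthy = False
    def self_test(self):
        return self.healthy
    def reboot(self):
        self.healthy = True

print(monitor(FlakyFunction()))  # → online
```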

In short, complete automotive systems depend on CPU subsystem front-ends to the NPU back-end which run the AI models. A standard to ensure interoperability is essential to making this complex environment work well, as is a trusted virtual platform/digital twin to support software development in advance of a car being ready for testing. This is why Arm kicked off the SOAFEE standard four years ago. Dipti says that Zena is the physical manifestation of SOAFEE and claims that between software virtual prototyping and time and effort saved by having a fully characterized compute subsystem in Zena, automotive systems builders can save up to a whole model year in time and 20% in engineering effort over building their own compute subsystem.

For developers, virtual prototyping platforms are already available from major EDA suppliers. Zena is currently in deployment with early adopters and is expected to become more generally available later in 2025.

Takeaway

I see Zena and the larger strategy continuing a theme that has been quite successful for Arm in their Neoverse/infrastructure directions: pre-verified/validated compute subsystems as IP, backed by cloud-native development based on open standards. The ecosystem will continue to grow around these standards; competitors are free to enter but will be expected to comply with the same standards, while Arm must continue to execute to stay ahead. Nothing wrong with that for automotive OEMs and Tier1s, though clearly Arm has a strong head start.

You can read more HERE.


High-NA Hard Sell: EUV Multi-patterning Practices Revealed, Depth of Focus Not Mentioned

by Fred Chen on 06-04-2025 at 10:00 am

In High-NA EUV lithography systems, the numerical aperture (NA) is expanded from 0.33 to 0.55. This change has been marketed as a way to avoid the multi-patterning required on 0.33 NA EUV systems. Only very recently have specific examples of this been provided [1]. In fact, it can be shown that double patterning has been implemented for EUV in cases where DUV double patterning would have sufficed.

What a Higher NA Offers

The increase in NA allows more diffraction orders or a wider range of spatial frequencies to be used for imaging. Having more diffraction orders for the same image allows brighter, narrower peaks, as shown in the example of Figure 1.

The sharper peak means the normalized image log slope (NILS) is better, so the stochastic effect of shot noise in the photon absorption won’t be as severe. Consequently, a directly printed image would be more likely to be degraded for 0.33 NA compared to 0.55 NA.

Current EUV Uses Multipatterning

To keep the shot noise low enough while staying with a single 0.33 NA exposure, the dose would have to be increased to a point where throughput or resist loss becomes a detracting issue, e.g., > 100 mJ/cm2. On the other hand, if the 0.33 NA pattern were split into two separately exposed portions (Figure 2), each one would have a denser range of spatial frequencies due to wider separations between features, which improves the NILS.
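To get a feel for why dose matters, a back-of-envelope photon count helps. The numbers below (92 eV per EUV photon at 13.5 nm, a 36 nm via) are my own for scale, not from the article, and they count incident photons: only a fraction is absorbed in the resist, so effective counts are lower and noise correspondingly higher.

```python
# Photons landing on one via area at a given incident dose, and the
# resulting relative shot noise (1/sqrt(N) for Poisson statistics).
E_PHOTON_J = 92 * 1.602e-19  # EUV photon energy, ~92 eV at 13.5 nm

def shot_noise(dose_mj_cm2, via_nm):
    area_cm2 = (via_nm * 1e-7) ** 2           # via area in cm^2
    n_photons = dose_mj_cm2 * 1e-3 * area_cm2 / E_PHOTON_J
    return n_photons, n_photons ** -0.5       # count, relative 1-sigma

for dose in (30, 100):
    n, sigma = shot_noise(dose, 36)
    print(f"{dose} mJ/cm2: ~{n:.0f} incident photons, "
          f"~{100 * sigma:.2f}% 1-sigma noise")
```

Since relative noise scales as the inverse square root of dose, pushing stochastic variation down meaningfully requires the kind of dose increases (> 100 mJ/cm2) that hurt throughput.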

Figure 2. Random 36 nm via pattern (taken from [1]) split into two portions for 0.33 NA EUV double patterning; each color represents one of two masks. DUV double patterning can follow the same split for this case.

Interestingly, in this case, the minimum 100 nm distance means DUV can also be used with double patterning for the same pattern. This is consistent with an earlier finding that the application ranges of DUV and EUV double patterning may overlap due to the impact of stochastic effects [2].

Furthermore, if the pattern of Figure 2 were scaled down by the NA ratio (0.33/0.55), so that the via size becomes 36 nm × 0.6 = 21.6 nm, the same situation would apply to the High-NA case as well, since the spatial frequency range (normalized to 0.55 NA) is reduced to the same as previously for 0.33 NA. This means we should expect double patterning for High-NA EUV, triple patterning for low-NA EUV, and quadruple patterning for DUV (Figure 3).

Figure 3. Different multipatterning scenarios for the 0.6x scaled pattern of Figure 2.

On the other hand, it can be noted that via patterns can conform to a diagonal grid [3], which would enable DUV/low-NA double patterning or High-NA EUV single patterning for location selection if the vias are fully self-aligned (Figure 4).

Figure 4. Applying via diagonal grid location selection to the pattern of Figure 3 simplifies the multipatterning (double patterning for DUV/Low-NA EUV, single patterning for High-NA EUV).

High-NA Depth of Focus Challenged by Resist Thickness

A fundamental consequence of having a wider range of spatial frequencies in a larger numerical aperture is that there is a wider range of optical paths used in forming the image. Each path corresponds to an angle with the optical axis. At the wafer, the wider range leads to the higher spatial frequencies getting more out of phase with the lower ones, causing the image to lose contrast from defocus. This is visualized in Figure 5.

Figure 5. 30 nm pitch with line breaks presents a wide range of diffraction orders with High-NA, leading to a relatively limited depth of focus.

As Figure 5 shows, this is particularly bad for line breaks, where the tip-to-tip distance needs to be controlled. Likewise, it would apply to the corresponding line cut pattern. The depth of focus reduction applies generally to patterns with wide spacings between features such as the random via pattern of Figure 2. Figure 6 shows that even 15 nm defocus is enough to significantly affect a 40 nm pitch line pattern, due to four diffraction orders being included by a 0.55 numerical aperture as opposed to two diffraction orders for a 0.33 numerical aperture.

Figure 6. A 40 nm pitch line pattern is significantly affected even with 15 nm defocus, due to more diffraction orders being included with 0.55 NA.
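The two-vs-four order count can be checked with simple grating math. This sketch assumes dipole illumination tilted by half an order, a common setup for line/space imaging at this pitch, with λ = 13.5 nm; order m then lands in the pupil at sin(θ) = (m − 1/2)·λ/pitch.

```python
WAVELENGTH = 13.5  # nm, EUV

def captured_orders(pitch_nm, na, tilt=-0.5):
    """Count grating diffraction orders falling inside the pupil.  The
    incident beam is tilted by `tilt` orders (-1/2 here, i.e. dipole
    illumination), so order m lands at sin(theta) = (m + tilt) * lam/pitch."""
    f = WAVELENGTH / pitch_nm  # spatial frequency step per order
    return sum(1 for m in range(-10, 11) if abs((m + tilt) * f) <= na)

print(captured_orders(40, 0.33))  # → 2
print(captured_orders(40, 0.55))  # → 4
```

The extra pair of orders at 0.55 NA sits near sin(θ) ≈ ±0.51, far from the axis, which is exactly why the image dephases quickly with defocus.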

To preserve the image uniformity as much as possible through the resist thickness, the resist can be at most as thick as the depth of focus. A depth of focus < 30 nm for High-NA means the resist has to be < 30 nm thick, and it may further suffer 50% resist thickness loss [4]. Such a thin retained resist layer also would have absorbed very little EUV, leading to even greater absorbed photon shot noise and greater sensitivity to electrons from the underlayer [5] as well as the EUV plasma [6].
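For scale, the classical depth-of-focus scaling DOF ≈ k2·λ/NA² gives a rough estimate consistent with the < 30 nm figure above. The k2 = 0.5 factor is my assumption for illustration; the actual depth of focus is pattern-dependent, as the figures show.

```python
WAVELENGTH = 13.5  # nm, EUV

def dof_nm(na, k2=0.5):
    # Classical scaling estimate: DOF = k2 * wavelength / NA^2
    return k2 * WAVELENGTH / na ** 2

for na in (0.33, 0.55, 0.75):
    print(f"NA {na}: DOF ~ {dof_nm(na):.0f} nm")
```

Under this scaling, 0.55 NA lands near 22 nm of depth of focus versus roughly 62 nm at 0.33 NA, and a 0.75 Hyper-NA would fall to roughly 12 nm, since DOF shrinks with the square of NA.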

Thus, though obviously not mentioned in the marketing, it is reasonable to expect that High-NA EUV exposure cannot provide enough depth of focus for a reasonable resist thickness, and any future Hyper-NA (at least 0.75 [7]) would be even worse.


References

[1] C. Zahlten et al., Proc. SPIE 13424, 134240Z (2025).

[2] F. Chen, Can LELE Multipatterning Help Against EUV Stochastics?.

[3] F. Chen, Routing and Patterning Simplification with a Diagonal Via Grid.

[4] F. Chen, Resist Loss Prohibits Elevated Doses; J. Severi et al., "Chemically amplified resist CDSEM metrology exploration for high NA EUV lithography," J. Micro/Nanopatterning, Materials, and Metrology 21, 021207 (2022).

[5] H. Im et al., Proc. SPIE 13428, 1342815 (2025).

[6] Y-H. Huang, C-J. Lin, and Y-C. King, Discover Nano 18:22 (2023).

[7] G. Bottiglieri et al., Proc. SPIE 13424, 1342404 (2025).

This article first appeared in Exposing EUV: High-NA Hard Sell: EUV Multipatterning Practices Revealed, Depth of Focus Not Mentioned

Also Read: