The semiconductor industry is rapidly moving beyond traditional 2D packaging, embracing technologies such as 3D integrated circuits (3D ICs) and 2.5D advanced packaging. These approaches combine heterogeneous chiplets, silicon interposers, and complex multi-layer routing to achieve higher performance and integration. However, this evolution introduces significant challenges in modeling, simulation, and reliability assessment due to the massive size and complexity of ECAD data.
Altair recently offered a webinar addressing this very topic, delivered by Iyad Rayane, a senior technical specialist at the company.
The Growing Complexity of Modern ECAD Models
Modern IC packages feature thousands of nets across multiple routing layers and use a variety of materials with different physical properties. This results in extremely large ECAD datasets that are difficult to manage and analyze. High-density routing and compact layouts in 3D memory cubes and stacked-die packages also lead to increased power densities and mechanical stresses. Designers face issues like thermal stress, delamination, chip warpage, and solder fatigue, which can severely impact package reliability. Traditional simulation tools struggle to handle these detailed models efficiently, often requiring prohibitively long runtimes and limiting early-stage design exploration.
Challenges in Multiphysics Simulation
Several challenges complicate multiphysics simulation of large-scale 3D IC packages. The volume and complexity of ECAD data strain the capacity of existing tools to import and process models quickly. Accurate analysis requires coupling thermal, mechanical, fatigue, and electromagnetic effects, all while managing heterogeneous materials and thin-layer geometries. Applying fine mesh detail throughout the entire model is computationally expensive, yet necessary in critical regions. Moreover, the shift to system-level floorplanning and heterogeneous integration demands new workflows that traditional EDA tools do not fully support.
Altair SimLab’s Innovative Solution
Altair SimLab addresses these challenges by providing a paradigm-shifting multiphysics environment tailored for large-scale ECAD models. It drastically reduces import times—from hours to minutes—enabling detailed simulation on common desktop hardware. Its metal-mapping technology computes equivalent material properties based on volumetric metal and dielectric content, simplifying fine routing into effective continuous materials without sacrificing accuracy. The software supports hybrid modeling, where signal layers are represented as sheet bodies, vias as wire bodies, and insulating layers as solids, allowing flexible and efficient meshing strategies.
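The metal-mapping idea can be pictured with a simple rule-of-mixtures calculation. This is only a sketch: SimLab's actual homogenization is proprietary and certainly more sophisticated, and the material values below are generic illustrations, not SimLab data.

```python
def effective_property(metal_fraction, metal_value, dielectric_value):
    """Linear rule of mixtures: volume-weighted average of the two phases."""
    if not 0.0 <= metal_fraction <= 1.0:
        raise ValueError("metal_fraction must be in [0, 1]")
    return metal_fraction * metal_value + (1.0 - metal_fraction) * dielectric_value

# Illustrative values only: copper thermal conductivity ~400 W/m-K,
# a typical organic dielectric ~0.3 W/m-K.
k_eff = effective_property(0.45, 400.0, 0.3)  # a routing layer 45% metal by volume
```

The point is that a layer with thousands of traces collapses into one effective material per region, so the mesh no longer has to resolve individual copper features.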
SimLab also incorporates submodeling, allowing designers to run fast global simulations with trace mapping to identify critical areas for detailed local analysis. Displacement and other boundary conditions are transferred from the global model to the detailed submodel, balancing speed with accuracy. Furthermore, the platform integrates thermal, thermal stress, solder fatigue, and package reliability simulations within a single, user-friendly environment. It also interfaces with third-party solvers to extend multiphysics capabilities, providing a comprehensive solution for advanced packaging analysis.
How Altair SimLab Helps Engineers
By combining scalable import, flexible modeling approaches, and multiphysics coupling, Altair SimLab enables engineers to accelerate simulation turnaround and improve prediction accuracy. Designers can quickly explore “what-if” scenarios early in the design cycle, making better-informed decisions about process nodes and package configurations. The efficient data handling allows for detailed reliability analysis of solder bumps, vias, and interconnects, helping identify potential failure points before manufacturing. This approach reduces costly redesigns, shortens development cycles, and ultimately leads to more robust semiconductor products.
Test Case Results: Significant Time Savings
The power of Altair SimLab is evident in real-world test cases. One example involves a large PCB measuring 42 cm by 34 cm with 14 routing layers and over 7,500 nets. SimLab reduced the import runtime from more than four hours on a high-performance computing system to just five minutes on a standard laptop. Another case features a 66 mm by 66 mm silicon interposer with 12 routing layers and over 3,000 nets. Import time was cut from one hour to three minutes. These results demonstrate how Altair’s efficient ECAD data handling enables complex multiphysics simulations to be performed quickly and cost-effectively on everyday hardware.
Summary
As semiconductor packaging continues to evolve toward 3D ICs and heterogeneous integration, simulation tools must keep pace with increasing complexity. Altair SimLab delivers a scalable, integrated platform that bridges the gap between massive ECAD datasets and accurate multiphysics analysis. Its innovative modeling techniques and efficient workflows empower designers to accelerate innovation, optimize reliability, and confidently address the challenges of advanced packaging technologies. By transforming how large-scale ECAD models are imported and analyzed, Altair SimLab plays a critical role in advancing the next generation of semiconductor devices.
The AI explosion has clearly been driving the semiconductor industry since 2020. GPU-based AI processing needs to be as powerful as possible, but a system reaches its optimum only if it can rely on top-tier interconnects. The various subsystems need to be linked with ever more bandwidth and lower latency, creating the need for ever more advanced protocols: DDR5 and HBM memory controllers, PCIe and CXL, 224G SerDes, and so on.
When you design a supercomputer, raw processing power is important, but it is memory access, latency, and network speed optimization that allow you to succeed. The same is true for AI, which is why interconnect protocols are becoming key.
In 2024, the interface IP segment grew by 23.5% to reach $2,365 million. Our forecast shows growth for 2024 to 2029 comparable to the roughly 20% growth seen so far in the 2020s. AI is driving the semiconductor industry, and interconnect protocol efficiency is fueling AI performance. A virtuous cycle!
The interface IP category has moved from an 18% share of all IP categories in 2017 to 28% in 2023. We think this trend will amplify during the decade, with interface IP growing to 38% of the total by 2029 (at the expense of processor IP, which falls from 47% in 2023 to 41% in 2029). We forecast the total IP market to weigh in at $15 billion in 2029, with interface IP accounting for $5.4 billion of it.
As usual, IPnest has made the five-year forecast (2024-2029) by protocol and computed the CAGR for each (picture below). As you can see in the picture, most of the growth is expected to come from three categories: PCIe, memory controller (DDR), and Ethernet, SerDes & D2D, exhibiting five-year CAGRs of 17%, 17%, and 21%, respectively. This should not be surprising, as all these protocols are linked to data-centric applications! If we consider that the Top 5 protocols weighed $2,200 million in 2024, the forecast value in 2029 is $4,900 million, a CAGR of 17%.
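The CAGR figures above follow from the standard compound-growth formula, and the article's numbers check out:

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate between two values over a span of years."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

# Top 5 protocols: $2,200M in 2024 growing to a forecast $4,900M in 2029
growth = cagr(2200, 4900, 5)  # ~0.174, i.e. roughly the 17% quoted
```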
This forecast is based on the amazing growth of data-centric applications, AI in short. Looking at TSMC's revenue split by platform in 2024, HPC is clearly the driver. This trend started in 2020, and we expect it to continue through 2029 at least.
Conclusion
Synopsys has built a strong position in every protocol, and in every application, enjoying more than 55% market share, through strategic acquisitions since the early 2000s and by offering integrated solutions combining PHY and controller. We still don't see any competitor positioned to challenge the leader. The next two are Cadence and Alphawave, each with market share around 15%, far behind the leader.
Looking at 2025 and beyond, we think a major strategy change will unfold during the decade. IP vendors focused on high-end IP architecture will try to develop a multi-product strategy, marketing ASICs, ASSPs, and chiplets derived from leading IP (PCIe, CXL, memory controllers, SerDes, and so on). Some, like Credo, Rambus, and Alphawave, have already started. Credo and Rambus already see significant ASSP revenues, but we will have to wait until 2026, at best, to see measurable results for chiplets.
AI technology was prevalent at DAC 2025, but can we really trust what Generative AI (GenAI) is producing? Vishal Moondhra, VP of Solutions Engineering from Perforce talked about this topic in the Exhibitor Forum on Monday, so I got a front row seat to learn more.
Vishal started out by introducing the four challenges and risks of using GenAI for semiconductor design:
Liability questions arise over who owns the IP in a new design, how the IP is being licensed, and whose data is being used to train an AI model. Companies need to know how AI models were trained, so that training is a traceable procedure. What if a new IP block was created by GenAI and it contains an error, lengthening the time to production and increasing development costs? When mixing external IP with internal IP, are there data quality concerns?
Establishing trust for using AI to train models requires three pillars:
Clear and auditable data provenance for all training datasets
Complete traceability of IPs and IP versions
Secure and compliant use of both internal and external IP
With the Perforce methodology, a semiconductor team will use IP lifecycle management to provide the proper provenance for training datasets. Team members define their design in terms of IPs, establishing provenance for each IP, providing complete traceability down to the file level, along with its history. Data contamination of the dataset is avoided by adding and enforcing rules about IP usage. Incremental training is supported, giving users a history of what changed since the last training set. All of this reduces the risk of using GenAI to train models for semiconductor design.
Vishal spoke about what an IP object could be: processor, memory, register, library cell, PDK, chiplet, embedded SW, or firmware. Adding metadata to an IP block enables permissions to be set, technical specs to be added, properties to be defined, usage set, versions updated, and even bugs to be tracked. IP lifecycle management enables high-level abstract IPs to have dependencies, form subsystems and use hierarchy. Even the environment configuration is stored as workspace settings, documentation, startup files and scripts, or templates.
The AI training process was presented, showing the flow of how files are created and models formed by using scripts and actions. With Perforce, a completely traceable flow of data creates new training data, and the pipeline can be run multiple times.
With an IP Lifecycle Management (IPLM) flow, all data sources can be modeled as IPs. Each derived data source becomes the parent of the source, and the preprocessed data is the parent of raw data, while training data is the parent of the preprocessed data. Both the training and final models are also modeled as IPs. Each script that runs a data transformation and model training becomes an IP.
Using IPLM in AI/ML training pipelines allows semiconductor teams to manage all of their data and dependencies, creating a series of parent-child relationships. Data management can be mixed between tools like Perforce P4 and Git based on user preferences. Teams can quickly build new workspaces with the right data, scripts, and models each time. This whole ecosystem is fully traceable, so you always record the data source, know how the derivatives were assembled, have a record of all parameters, and ensure that all models are fully traceable and reproducible. Engineers learn how to add metadata to the training projects, such as training parameters, results, and the history of runs and runtimes. You can also view the model lineage to see which projects were derived or copied, and where each came from.
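The parent-child modeling described above can be sketched as a small dependency graph. This is a hypothetical data structure for illustration, not Perforce's actual IPLM API: each derived artifact is the parent, and its sources are its children, so walking the children yields the full provenance.

```python
class IP:
    """Minimal stand-in for an IPLM object: a named, versioned node."""
    def __init__(self, name, version, children=()):
        self.name = name
        self.version = version
        self.children = list(children)  # the data/scripts this IP was derived from

    def lineage(self):
        """All ancestors reachable from this IP, depth-first: full provenance."""
        out = []
        for child in self.children:
            out.append((child.name, child.version))
            out.extend(child.lineage())
        return out

# Pipeline from the article: raw -> preprocessed -> training data -> model
raw = IP("raw_data", "1.0")
prep = IP("preprocessed_data", "1.2", [raw])
train = IP("training_data", "2.0", [prep])
model = IP("model", "3.1", [train])
```

Asking the trained model for its lineage then reproduces exactly which data versions it was built from.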
Summary
GenAI is sweeping across the EDA and IP landscape, so it’s worth questioning the integrity of this new paradigm. The software team at Perforce has thought about each trust challenge and come up with an approach to make GenAI model development and usage trustworthy. Learn more about Perforce and semiconductor design by visiting their website.
Dan is joined by Dr. Tiffany Callahan from SandboxAQ. As one of the early movers in the evolving sciences of computational biology, machine learning and artificial intelligence, Tiffany serves as the technical lead for agentic and autonomous systems at SandboxAQ. She has authored over 50 peer-reviewed publications, launched several high-impact open-source projects and holds multiple patents.
Dan explores with Tiffany the foundation of the agentic and autonomous systems SandboxAQ is developing. She describes the impact of large quantitative models, or LQMs, particularly in drug discovery and materials science research. Unlike LLMs, which are trained on broad-based Internet data for text reasoning, LQMs are trained on first principles of physics, chemistry, and engineering. This creates AI that can reason about the physical world. SandboxAQ aims to deploy this technology as an adjunct to existing research experts by simulating and predicting physical outcomes on a massive scale. This provides scientists with tools that are both grounded in physical science and generative, facilitating more targeted and efficient research.
You can learn more about this unique company and the impact it aims to have on advanced research here.
The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.
In extreme ultraviolet lithography (EUVL) systems, the collector mirror is a critical optical component that gathers and directs EUV light from the source toward the projection optics. Over time, the collector surface accumulates contaminant films — primarily tin (Sn) debris from the laser-produced plasma (LPP) source, along with other byproducts. These deposits reduce reflectivity, degrade throughput, and ultimately limit the uptime of the lithography tool. Therefore, in-situ cleaning methods that restore mirror performance without dismantling the system are essential.
Concept of Hydrogen Surface Wave Plasma (SWP) Cleaning
The hydrogen surface wave plasma cleaning method uses a low-pressure hydrogen plasma to remove contaminant layers from the collector. In this approach, a radio-frequency (RF) or microwave field sustains a surface wave along a dielectric tube or waveguide, generating a stable, large-volume plasma. Atomic hydrogen radicals (H*) formed in the plasma are the primary cleaning agents. These radicals chemically react with tin deposits to form volatile tin hydride (SnH₄) that can be pumped away.
Advantages of the SWP Method
Hydrogen SWP offers several benefits over alternative plasma cleaning approaches:
Low ion energy minimizes sputter damage to the underlying multilayer mirror coating.
Uniform radical flux can be sustained over large, complex mirror geometries.
Compatibility with in-situ operation, allowing cleaning without removing the collector from the EUV source chamber.
High selectivity for tin over the Mo/Si multilayer stack, preserving optical performance.
Cleaning Mechanism
The cleaning proceeds through two main processes:
Chemical Reduction – Atomic hydrogen reacts with oxidized tin and tin layers to form volatile hydrides.
Physical Assistance – Low-energy ions and vacuum pumping assist in dislodging reaction products and residual particles.
Temperature plays a role in reaction efficiency; elevated mirror temperatures (typically 150–250°C) improve tin hydride desorption rates and overall cleaning speed.
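The temperature sensitivity of a desorption-limited step can be pictured with simple Arrhenius scaling. The activation energy below is a placeholder assumption for illustration, not a value from the dissertation:

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_rate_ratio(t1_c, t2_c, activation_ev):
    """Ratio of a thermally activated rate at t2_c vs t1_c (Celsius inputs)."""
    t1, t2 = t1_c + 273.15, t2_c + 273.15
    return math.exp(activation_ev / K_B * (1.0 / t1 - 1.0 / t2))

# With an assumed 0.3 eV barrier, raising the mirror from 150 C to 250 C
# speeds up a desorption-limited step by roughly a factor of five.
speedup = arrhenius_rate_ratio(150.0, 250.0, 0.3)
```

This is why the 150-250°C window quoted above matters: even modest heating of the collector can shift the process from reaction-limited to transport-limited cleaning.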
Experimental Setup
The dissertation describes a system where the SWP source is positioned to generate plasma in proximity to the collector surface. Process parameters include:
Pressure: A few Pascals of H₂.
Microwave Power: Sufficient to sustain high-density plasma without damaging the optics.
Cleaning Time: Dependent on deposit thickness and operating conditions.
Diagnostic tools such as optical emission spectroscopy (OES) are used to monitor plasma species and confirm the presence of atomic hydrogen.
Results and Performance
Hydrogen SWP cleaning successfully removed tin deposits from EUV collector samples while maintaining the reflectivity of the Mo/Si multilayer. The process achieved:
High removal rates for tin films (order of nanometers per minute).
Negligible reflectivity loss in the EUV wavelength range.
Repeatability over multiple cleaning cycles, demonstrating process stability.
In some experiments, the SWP method was compared to downstream plasma cleaning and thermal hydrogen radical cleaning. SWP exhibited faster rates and better uniformity while minimizing collateral damage.
Challenges and Considerations
Despite its advantages, hydrogen SWP cleaning requires careful optimization:
Over-cleaning risk – Prolonged exposure can lead to hydrogen-induced blistering or modification of the mirror substrate.
Access geometry – The plasma source must be positioned to ensure full surface coverage.
Plasma-surface interactions – Long-term effects on multilayer coatings require continued study.
Industrial Relevance
With EUV lithography being adopted for advanced semiconductor manufacturing nodes, uptime and cost-of-ownership are directly impacted by maintenance cycles. In-situ hydrogen SWP cleaning offers a way to extend collector life, reduce tool downtime, and maintain consistent exposure dose. This is particularly important for high-volume manufacturing (HVM) environments, where even minor throughput losses translate into significant production costs.
Conclusion
Hydrogen surface wave plasma cleaning presents a practical and effective method for restoring EUV collector performance. It combines chemical selectivity, gentle cleaning action, and operational compatibility with lithography tools. Continued optimization and long-term durability studies will help integrate this technique into standard EUV tool maintenance protocols.
In AI it is easy to be distracted by hype and miss the real advances in technology and adoption that are making a difference today. Accellera hosted a panel at DAC on just this topic, moderated by Dan Nenni (Mr. SemiWiki). Panelists were: Chuck Alpert, Cadence’s AI Fellow driving cross-functional Agentic AI solutions throughout Cadence; Dr. Erik Berg, Senior Principal Engineer at Microsoft, leading generative AI strategy for end-to-end silicon development; Dr. Monica Farkash, AMD fellow, creator of ML/AI based solutions to reshape HW development flows; Harry Foster, Chief Scientist for Verification at Siemens Digital Industries Software; Badri Gopalan, R&D Scientist at Synopsys, architect and developer for coverage closure and GenAI related technology; and Syed Suhaib leading CPU Formal Verification at Nvidia.
Where are we really at with AI in EDA?
In 2023 everyone in EDA wanted to climb on the AI hype train. There was some substance behind the stories but in my view the promise outran reality. Two years later in this panel I heard more grounded views, not a reset but practical positions on what is already in production, what is imminent, and what is further out. Along with practical advice for teams eager to take advantage of AI but not sure where to start.
I like Chuck’s view, modeling AI evolution in EDA like the SAE model for automotive autonomy, progressing through a series of levels. Capabilities at level 1 we already see in production use, such as PPA optimization in implementation or regression optimization in verification. Level 2 should be coming soon, providing chat/search help for tools and flows. Level 3 introduces generation for code, assertions, SDCs, testbenches. Level 4 will support workflows and level 5 may provide full autonomy – someday. Just as in automotive autonomy, the higher you go, the more levels become aspirational but still worthy goals to drive advances.
According to Erik, executives at Microsoft see accelerating adoption in software engineering and want to know why the hardware folks aren't there yet. Part of the problem is the tiny size (~1%) of the hardware training corpus relative to the software corpus, along with a significantly more complex development flow. Execs get that but want hardware teams to come up with creative workarounds rather than keep falling further behind. An especially interesting insight: Microsoft teams are building more data awareness, learning how to curate and label data to drive AI-based optimizations.
Monica offered another interesting insight. She has been working in AI for quite a long time and is very familiar with the advances that many of us now see as revolutionary. The big change for her is that, after a long period of general disinterest from the design community, suddenly all design teams want these capabilities yesterday. This sudden demand can’t be explained by hype. Hype generates curiosity, urgency comes from results seen in other teams. I know that this is already happening in implementation optimization and in regression suite optimization. Results aren’t always compelling, but they are compelling often enough to command attention.
Harry Foster added an important point. We’ve had forms of AI in point tools for some time now and they have made a difference, but the big gains are going to come from flow/agentic optimizations (Erik suggested between 30% and 50%).
Badri echoed this point and added that progress won’t just be about technical advances, it will also be about building trust. He sees agents as a form of collaboration which should be modeled on our own collaboration. While today we are allergic to the idea of any kind of collaboration in AI, he thinks we need to find ways to make some level of collaboration more feasible. Perhaps in sharing weights or RAG data. Unclear what methods might be acceptable and when, but more will be possible if we could find a path.
Syed offered some very practical applications of AI. Auto-fixing (or at least suggesting fixes) for naming compliance violations. At first glance this application might seem trivial. What’s important about a filename or signal name? A lot, if tools use those names to guide generation or verification, or AI itself. Equivalence checking for example uses names to figure out correspondence points in a design. At Nvidia, among other applications they use AI to clean up naming sloppiness, saving engineers significant effort in cleanup and boosting productivity through improved compliance. AI is also used to bootstrap testbench generation, certainly in the formal group.
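A toy version of such a naming checker shows how fixes can be suggested mechanically. The rule here (signal names must be lower_snake_case) and the implementation are hypothetical illustrations, not Nvidia's actual tooling:

```python
import re

SNAKE = re.compile(r"^[a-z][a-z0-9_]*$")  # hypothetical compliance rule

def suggest_fix(name):
    """Return the name if compliant, else a lower_snake_case suggestion."""
    if SNAKE.match(name):
        return name
    # Insert underscores at CamelCase boundaries, then normalize the rest
    s = re.sub(r"(?<=[a-z0-9])([A-Z])", r"_\1", name)
    s = re.sub(r"[^A-Za-z0-9_]", "_", s).lower()
    return s

suggest_fix("ClkEnableN")  # -> "clk_enable_n"
```

Trivial on its own, but run across a whole design it removes exactly the sort of inconsistency that trips up downstream tools keying on names.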
Audience Q&A
There were some excellent questions from the audience. I’ll pick just a couple to highlight here. The first was essentially “how do you benchmark/decide on AI and agentic systems?” The consensus answer was to first figure out in detail what problem you want to solve and how you would solve it without AI. Then perhaps you can use an off-the-shelf chatbot augmented with some well-organized in-house RAG content. Maybe you can add some fine-tuning to get close to what you want. Maybe you can use a much simpler model. Or if you have the resources and budget, you can go all the way to a customized LLM, as some companies represented on this panel have done.
Design houses have always built their own differentiated flows around vendor tools, often a mix of tools from different vendors. They build scripting and add in-house tools for all kinds of applications: creating or extracting memory and register maps, defining package pin and IO muxing maps and so on. In-house AI and particularly agentic AI could perhaps over time supersede scripting and even drive new approaches to agents for product team-specific tasks. EDA agents will likely also play a part in this evolution around their own flows. For interoperability in such flows one proposal was increased use of standards like MCP.
Another very good question came from the leader of a formal verification team who is ramping up a few engineers on SVA, while also aiming to ramp them up on machine learning. His question was how to train his team in AI methods, a challenge that I am sure is widely shared. Erik said “ask ChatGPT” and we all laughed, but then he added (I’ll roughly quote here):
“I’m 100% serious. I’ve had people complain, where’s the help menu? I said, just ask it your question. And if you’re having trouble with your prompts, give it your prompt and say, this is the output that I want. What am I doing wrong? It will be very frank with you. Use the tool to learn.”
Now that is a refreshing perspective. A technology that isn’t just useful for individual contributors, but also for their managers!
I’m not always a fan of panels. I often find that they offer few new insights, but this panel was different. Good questions and thought-provoking responses. More of these please Accellera. Benchmarking AI and agentic systems sounds like one topic that would draw a crowd!
In the race to deliver ever-larger SoCs under shrinking schedules, simulation is becoming a bottleneck. With debug cycles constrained by long iteration times—even for minor code changes—teams are finding traditional flows too rigid and slow. The problem is further magnified in continuous integration and continuous deployment (CI/CD) environments, where each commit may trigger a full simulation cycle, consuming unnecessary time and compute resources. Siemens EDA’s SmartCompile aims to break this logjam.
SmartCompile: A Paradigm Shift in Simulation Workflows
Siemens EDA addresses this critical challenge with SmartCompile, a feature of its Questa One simulation environment. Rather than iterating on top of the traditional flow, SmartCompile introduces a fundamental redesign of the compile-optimize-simulate pipeline. It adopts a modular and highly parallel approach to managing design verification tasks, enabling faster turnaround times without compromising design integrity.
The foundation of SmartCompile’s innovation lies in its ability to break apart large, monolithic processes into discrete, manageable units. This divide-and-conquer philosophy allows each component—be it compilation, optimization, or test loading—to be performed independently and in parallel, dramatically improving simulation readiness and design iteration velocity.
Enhancing Performance through Incremental Workflows
One of the most significant advantages of SmartCompile is its incremental compilation and optimization strategy. By utilizing timestamp tracking and smart signature analysis, the system identifies precisely which parts of the design have changed and compiles only those. This targeted approach drastically reduces build times across repeated verification cycles and streamlines test and debug cycles for developers.
Furthermore, the introduction of separate test loading revolutionizes how simulation teams manage test scenarios. Instead of recompiling the entire testbench for each new test, SmartCompile allows users to reuse the base compilation and optimization while isolating and processing only the new or modified tests. This capability significantly accelerates the test development process and promotes faster feedback loops during debugging.
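The incremental idea (recompile only what changed) can be sketched with content signatures. This is a simplification for illustration: SmartCompile's actual signature analysis also tracks dependencies and elaboration state, which a hash-per-file model ignores.

```python
import hashlib

def signature(source_text):
    """Content signature: unchanged text means an unchanged signature."""
    return hashlib.sha256(source_text.encode()).hexdigest()

def units_to_rebuild(sources, cache):
    """Compare each unit's signature against the previous build's cache."""
    stale = [name for name, text in sources.items()
             if cache.get(name) != signature(text)]
    new_cache = {name: signature(text) for name, text in sources.items()}
    return stale, new_cache

# First build compiles everything; touching one unit recompiles only it.
sources = {"alu.sv": "module alu; endmodule", "fpu.sv": "module fpu; endmodule"}
stale, cache = units_to_rebuild(sources, {})          # both units
sources["alu.sv"] = "module alu; // fixed\nendmodule"
stale2, cache = units_to_rebuild(sources, cache)      # only alu.sv
```

The same comparison applied at the design-unit level is what lets per-unit recompiles run in parallel across a compute grid.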
Tackling Design Scale with Intelligent Partitioning
As designs increase in complexity, optimization becomes one of the most time-consuming stages of verification. To combat this, SmartCompile introduces the concept of AutoPDU—automatically pre-optimized design units. This feature partitions large designs into smaller, manageable units that can be independently compiled and optimized. When changes are made, only the affected units need to be processed again, leaving the rest untouched. This approach not only reduces the time required for each optimization run but also allows the process to be distributed across multiple grid computing nodes. By enabling parallelism at the design unit level, AutoPDU transforms how large SoCs are handled, dramatically decreasing overall simulation setup time.
Boosting CI/CD Efficiency with SmartCompile
Questa One’s SmartCompile is uniquely suited to enhance CI/CD (Continuous Integration and Continuous Deployment) pipelines in hardware design. By enabling rapid, incremental builds and leveraging precompiled design caches, SmartCompile allows frequent code check-ins to be verified quickly without reprocessing the entire design. Its intelligent reuse of elaboration and optimization data significantly reduces turnaround times in automated workflows. This capability ensures that regression tests, triggered automatically by CI systems, execute efficiently, allowing development teams to scale their productivity while maintaining robust quality assurance throughout the design lifecycle. This feature is particularly valuable for large teams and distributed projects, where multiple engineers may need to reproduce simulation environments on demand without losing valuable time.
Flexible Configuration for Advanced Use Cases
In many simulation environments, different abstraction levels—such as RTL, gate-level, or behavioral models—are needed for different verification tasks. Traditionally, switching between these configurations requires recompilation and re-optimization. SmartCompile’s dynamic reconfiguration capability removes this barrier by allowing blocks to be swapped in or out at simulation time. This feature lets users pre-compile various block configurations and select the appropriate one during elaboration, enabling greater flexibility and reducing redundant processing.
Additionally, debug data generation in SmartCompile is no longer tightly coupled with optimization. Engineers can generate debug files on demand, rather than each time a build is processed. This not only improves resource efficiency but also empowers teams to target their debugging efforts more precisely.
The Business Value of Smarter Simulation
The cumulative effect of these innovations is substantial. SmartCompile enables design teams to iterate faster, simulate more often, and reduce wasted compute cycles. With its support for incremental workflows, distributed optimization, configuration flexibility, and CI-friendly features, it presents a compelling solution for organizations looking to scale their design verification capabilities without scaling their costs. This means faster time-to-market, reduced operational expenses, and more reliable development pipelines. As competition in the semiconductor market intensifies, the ability to verify designs quickly and efficiently becomes a critical differentiator. By integrating SmartCompile into their verification strategy, companies can better manage complexity while maintaining agility and performance.
Summary
Simulation has always been a cornerstone of digital design verification, but as designs grow more complex and development timelines shrink, traditional flows no longer meet the needs of modern engineering teams. Siemens EDA has recognized this shift and responded with a comprehensive and intelligent approach in SmartCompile. It tackles the fundamental inefficiencies of traditional workflows, enabling faster, smarter, and more scalable verification from the ground up.
Many know Arteris as the “network-on-chip”, or NoC, company. Through acquisitions and forward-looking development, the footprint for Arteris has grown beyond smart interconnect IP. At DAC this year, Arteris highlighted its latest expansion with a new SoC integration automation product called Magillem Packaging. The announcement focused on substantial new capabilities to simplify and speed up the process of building advanced chips used in everything from AI data centers to edge devices. I had an opportunity to visit Arteris at DAC and to speak with some of the executives there. Let’s examine how Arteris simplifies design reuse with Magillem Packaging.
The Announcement
The announcement made at DAC pointed out that chip design is becoming increasingly complex, with more components, higher performance demands, and tighter timelines. There is no argument there. The release states that Magillem Packaging helps engineering teams work faster and more efficiently by automating one of the most time-consuming parts of the design process: assembling and reusing existing technology.
Going deeper, Magillem Packaging enables IP teams to quickly and reliably package and prepare hundreds or even thousands of components for integration into a single chiplet or chip design, including new, existing, or third-party IP blocks.
Some of the key capabilities of this new product from Arteris are:
IP reuse through comprehensive packaging of IPs, subsystems, and chiplets in a reusable format, covering configuration, implementation, and verification, with a proven methodology for both incremental and full packaging.
IEEE 1685-2022 generation that is correct-by-construction, without requiring any prerequisite IP-XACT expertise. Standard compliance and data consistency are ensured by construction and verified with a built-in suite of Magillem checkers.
Scalable and fully automated generation of IP packaging for reused and new IP blocks, including support for the legacy 2009 and 2014 versions of the IEEE 1685 standard, plus intuitive graphical editors for fast viewing and editing of IP block descriptions.
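To make the capabilities above concrete, the sketch below hand-builds the skeleton of an IP-XACT-style component description. This is an illustration only: real IEEE 1685-2022 documents are far richer, Magillem generates them automatically, and the namespace URI and element layout here are assumptions for the example.

```python
# Illustrative sketch: a minimal IP-XACT-style component header.
# The namespace URI and element structure are assumptions for
# illustration, not a faithful rendering of the IEEE 1685-2022 schema.
import xml.etree.ElementTree as ET

NS = "http://www.accellera.org/XMLSchema/IPXACT/1685-2022"  # assumed
ET.register_namespace("ipxact", NS)

def make_component(vendor, library, name, version):
    """Build the vendor/library/name/version (VLNV) header that
    uniquely identifies an IP block in an IP-XACT catalog."""
    comp = ET.Element(f"{{{NS}}}component")
    for tag, text in (("vendor", vendor), ("library", library),
                      ("name", name), ("version", version)):
        el = ET.SubElement(comp, f"{{{NS}}}{tag}")
        el.text = text
    return ET.tostring(comp, encoding="unicode")

xml = make_component("example.com", "ip_lib", "uart_core", "1.0")
print(xml)
```

The point of tools like Magillem Packaging is precisely that design teams never write this XML by hand: the VLNV metadata, bus interfaces, and register maps are generated and checked automatically.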
Ecosystem Support
Arteris technology is agnostic and works across the ecosystem to ensure ease of integration for end customers. Among those voicing support for the new capability are:
Andes Technology
“Andes Technology is recognized for our comprehensive family of RISC-V processor IP and customization tools that empower customers to easily differentiate their SoC designs,” said Marc Evans, director of business development & marketing at Andes Technology Corporation. “The latest IP-XACT 2022 specifications enable structured automation, optimizing IP packaging and integration. Magillem Packaging complements Andes’ commitment to streamlined workflows, enabling faster and more reliable SoC development.”
MIPS
“The MIPS Atlas portfolio is engineered for high-efficiency compute in autonomous, industrial, and embedded AI applications, where rapid integration and design reuse are critical,” said Drew Barbier, VP & GM of the IP Business Unit at MIPS. “Arteris Magillem Packaging, with its automation of IP-XACT 2022-compliant packaging and support for industry standards, aligns with customer needs to accelerate SoC development. Together, we empower customers to streamline IP integration, reduce design complexity, and bring innovative silicon to market faster.”
More From the Show Floor at DAC
While visiting Arteris at DAC, I had the opportunity to discuss this announcement with two key members of the management team in more detail.
Insaf and Andy at the SoC Integration pod in the Arteris booth
Insaf Meliane is a product management and marketing director at Arteris. Before joining the product team, she was a field application manager, supporting customers with complex SoC design integration. She holds an engineering degree in microelectronics, with a specialization in systems-on-chip, from École Nationale Supérieure d’Electronique et de Radioélectricité de Grenoble.
Andy Nightingale is the VP of product marketing at Arteris. Andy is a seasoned global business leader with a diverse engineering and product marketing background. He’s a Chartered Member of the British Computer Society and the Chartered Institute of Marketing and has over 35 years of experience in the high-tech industry.
We began by discussing the overall reaction to Magillem Packaging at DAC. Interest was high, and reactions were quite positive. There has been an increase in momentum for IP-XACT. The features of the latest IP-XACT 2022 version have helped. Arteris has been a major supporter of this standard, and the new capabilities delivered by Magillem Packaging have helped as well.
Insaf explained that Magillem Packaging leverages the Arteris Magillem Platform by integrating parts of Magillem Connectivity and Magillem Registers to create the new product. The figure below provides an overview of the platform and how the pieces fit together. Insaf described the significant benefits this new product delivers. The image at the top of this post includes a summary of the key benefits.
Arteris SoC Integration Automation with the Magillem Platform
She went on to explain the significant automation provided by Magillem Packaging. Keeping track of a complex system’s connectivity and interface requirements is a daunting challenge. With Magillem Packaging, these details are automated and verified as correct. She described how the new version of IP-XACT 2022 delivers substantial new capabilities, and Magillem Packaging leverages all these capabilities in an automated way. There is no need for the user to learn all those details.
She summarized some of the key benefits of the new tool as follows:
Effortless, scalable automation: handles both legacy and new IPs for smoother assembly and faster scaling of large designs with less risk, reducing the potential for human error and increasing efficiency.
Single source of truth specification: ensures consistency across various uses, enabling immediate collaboration across the relevant teams and catching errors before they become costly roadblocks.
Safe, easy, and rapid adaptation to change: a robust, highly iterative design environment reduces effort and rework, freeing teams to focus on their core business, leverage their technical expertise, and dream up what comes next.
She also pointed out that Arteris is working with various IP providers to ensure full support for IP-XACT 2022 so customers can fully enjoy its benefits.
I then explored the bigger development programs at Arteris with Andy. He described some of the joint efforts between the NoC and Magillem Connectivity teams. This work improves the target system’s overall connectivity management and helps with the complex verification tasks, thanks to the consistent views created across simulation, FPGA, emulation, synthesis, and fault injection.
Andy couldn’t disclose too many details about upcoming enhancements, but this is an area to observe going forward, and Arteris is leading the charge.
We concluded our discussion with a broader view of multi-die design requirements. On SemiWiki, you can learn more about how Arteris responds to these challenges. Some eye-opening statistics about Arteris technology include that over 200 customers have completed 860 design starts and shipped about 3.75 billion units.
To Learn More
Managing all the information associated with the new heterogeneous semiconductor systems under development can be a considerable challenge. One error can jeopardize the entire project. If these issues keep you up at night, you’ll want to learn more about what Arteris is doing with its Magillem technology. You can read the press release announcing Magillem Packaging here. And you can learn more about this new product here. And that’s how Arteris simplifies design reuse with Magillem Packaging.
Last month, Lj Ristic delivered an invited talk on MEMS technology as a driving force at the Laser Display and Lighting Conference 2025, held at Trinity College Dublin. His talk included a review of some major successes of the MEMS industry. We used that occasion to talk with him about those achievements and the status of MEMS technology today.
Dr. Lj Ristic is recognized for his pioneering contributions to the field of semiconductors, particularly in the development and commercialization of MEMS technology. He has been instrumental in creating innovative MEMS products, with hundreds of millions of units shipped globally. He is also credited with inventing a microprocessor with integrated sensing capabilities, widely adopted in smart sensors. Among his other notable achievements is the development of a novel method for integrating front-end antenna solutions for RF and wireless systems, which has been widely used by the mobile telecommunications industry. He has also conducted groundbreaking research in magnetic field sensors, advancing one- to three-dimensional sensing using lateral bipolar transistors. Dr. Ristic has also published ‘Sensor Technology and Devices’, the first book to introduce MEMS technology to the general public.
In addition to his technical accomplishments, he has held senior leadership roles at major corporations and startups alike, including Motorola, ON Semiconductor, Alpha Industries, Sirific, Coniun, Crocus, and SensSpree. He currently serves as Chief of Business Development and Strategy at Mirrorcle Technologies, a leader in the development of MEMS mirror technology and products.
What can you tell us about the status of MEMS Technology today?
Let us briefly look at the history of leveraging Si as a mechanical material. In the early ’80s, Kurt Petersen opened the eyes of the rest of the world by saying, look, silicon is not just a majestic material for integrated circuits; it is equally majestic for its mechanical properties at the micro scale. Why not leverage that? Of course, at the time it was considered exotic and on the fringe. Then in 1983 Roger Howe and Prof. Muller came up with surface micromachining, creating an additional toolbox for making micromechanical structures. And the race was on. Big companies jumped in, including Motorola (I was there), and they focused on leveraging this technology for automotive applications. Where there is a will and funding, there is success. Forty years later, MEMS technology is mainstream, and MEMS products are delivered in the billions per year, serving all possible markets including automotive, commercial, consumer, communications, industrial, biomedical, space, and robotics.
What was the first MEMS product to gain credibility?
It is important to point out that the acronym MEMS was coined by Jacobs and Wood in 1986 in their grant proposal to DARPA, where it described micro electro-mechanical systems consisting of micro-mechanical devices and driving electronics. Since the micromechanical devices described in the proposal were made using surface micromachining, the term MEMS was at that time often associated with surface micromachining. In the years that followed, MEMS evolved into an umbrella term covering all aspects of micromachined devices, from bulk micromachining, to wafer bonding, to surface micromachining.
Going back to the question, considering the historical context and initial meaning of MEMS, one can say the first product that gave credibility to MEMS technology was the accelerometer developed for the automotive industry. In the late 1980s and early 1990s, Motorola and Analog Devices led the development of accelerometers for airbag applications, neck and neck. While Analog Devices adopted the comb-structure approach invented by Howe and Lee, Motorola pursued its own distinct path, developing a three-polysilicon-layer surface micromachining technology and a differential capacitive device built as a vertical stack, which became the foundation for its accelerometers. Ultimately, both companies succeeded in bringing accelerometers to automotive customers in volume production, and they showed that surface micromachining can take many shapes and flavors. Thus, the MEMS accelerometer holds a distinct place for putting MEMS technology on the industrialization map. It should also be pointed out that the success of the MEMS accelerometer would have been impossible without coupling the sensing element with CMOS ICs. This is an excellent example of the fusion of two technologies, MEMS and CMOS, producing something new that neither could achieve on its own.
Since the 1990s, MEMS accelerometers have significantly expanded their range of applications. Advances in miniaturization, high-yield manufacturing, and cost-effective production have driven their continued growth in the automotive market and facilitated their widespread adoption across new markets, including consumer electronics, industrial automation, robotics, medical devices, the Internet of Things (IoT), and defense. Today, MEMS accelerometers play a crucial role in applications such as screen orientation detection, movement monitoring, step counting, gesture recognition, structural health monitoring, and vibration monitoring of machinery, among many others.
Current leaders in accelerometer products are Bosch, TDK-InvenSense, NXP, STM, ADI, Murata, and others.
Where do pressure sensors stand in acceptance of MEMS technology?
Pressure sensors are the granddaddy of silicon sensors and transducers. They typically leverage the piezoresistive effect, discovered in silicon and germanium in the mid-’50s. It took more than a decade before the first commercial pressure sensors started appearing in the late ’60s. They were made using bulk micromachining to create a thin silicon diaphragm with resistors diffused into it. Kulite was the first company to leverage the piezoresistive effect in silicon. Many others followed. I firmly believe that without pressure sensors, MEMS technology would not be where it is today. Pressure sensors were the predecessors of the broader field of MEMS as we know it, and the base on which we built toward the mainstream status MEMS technology enjoys today.
It should be mentioned that among the numerous pressure sensor products in existence, two product categories deserve a special place because of the impact they have had since their introduction: the MAP (Manifold Absolute Pressure) sensor and TPMS (Tire Pressure Monitoring System). MAP is essential for maintaining efficient fuel injection in the engine, while TPMS provides real-time tire pressure monitoring. Motorola/Freescale (today part of NXP), as a leading supplier of semiconductor products to the automotive industry, contributed significantly to the reputation of these two products. Both are crucial to improving the overall efficiency and safety of vehicles, and that is what puts them in a special category of their own.
Current leaders in pressure sensor products are Bosch, STM, NXP, Honeywell, Infineon, and others.
What are the other significant achievements that have contributed to the credibility of MEMS technology?
With the success of accelerometers, the progress in developing other products was relatively fast. The MEMS gyro followed, and then the integration of the MEMS accelerometer and gyro on a single chip. Then others. Today, the list of MEMS products in existence is practically endless, but I will focus here on the exclusive club of products that have the unique distinction of being made in the billions per year. In addition to pressure sensors, accelerometers, and gyros, the other products in the exclusive one-billion-units-per-year club are microphones, speakers, and timing devices.
Microphones have significantly benefited from ongoing advancements in MEMS technology, leading to substantial miniaturization of these devices. As a result of their small size and low power consumption, MEMS microphones are well-suited for applications in compact, battery-powered devices such as smartphones, tablets, smartwatches, earbuds, hearing aids, laptops, smart speakers, etc.
Current leaders in microphone products are Knowles, Goertek, Bosch, Cirrus Logic, Infineon, and others.
MEMS speakers are among the latest product families to leverage MEMS technology, with commercial products emerging only over the last decade. Their development has generally been more challenging than that of MEMS microphones, primarily because of the need for large diaphragm displacement. On one hand, sufficient power is required to generate enough force to move the diaphragm; on the other, the structural integrity of the diaphragm must be preserved despite the large displacement. The advantages of MEMS speakers over traditional non-MEMS technologies (such as electrodynamic and balanced armature speakers) are their small form factor, lower power consumption, and ease of integration with electronics. These benefits make them highly attractive for consumer electronics and other size- and power-constrained applications.
Current leaders in speaker products are xMEMS, Usound, Bosch, Sonitron, SonicEdge, and others.
MEMS timing devices provide the clock functions required in modern electronic products. From a functional point of view, they can be divided into three categories: resonators, oscillators, and clocks, all of which can be made as silicon MEMS devices. They are an excellent alternative to classical quartz crystal timing devices and are gaining acceptance in many market segments, including automotive, aerospace and defense, telecommunications, IT, consumer, and medical applications. They offer reliable performance, low power consumption, high stability, small form factors, and low cost. Current leaders in the MEMS timing device segment are SiTime, Microchip, Kyocera, Abracon, Rakon, and others.
In the end, a general assertion can be made for all MEMS products: they earned their reputation because they perform reliably while offering low power consumption, small form factors, and low cost, and that is a winning combination.
What is the latest in MEMS Technology?
One of the latest groups of products developed in MEMS technology is silicon MEMS mirrors, which are essential for many applications in optoelectronics. MEMS mirrors are small silicon-based devices capable of tilting along a single axis (one-dimensional mirrors) or two axes (two-dimensional mirrors). Depending on their design, they can also move along a third axis in a piston action. What makes them special is their small diameter, usually ranging from 0.5 mm to 10 mm, and their thickness of only about 40 µm, thinner than the average human hair. The technology for making MEMS mirrors has matured significantly over the last two decades, and these products are on the cusp of tremendous growth in the next decade.
A MEMS mirror’s primary function is to deflect a focused beam of light in different directions. The light beam can be steered along a single axis (one-dimensional mirror) or along two axes (two-dimensional mirror). This ability to receive and redirect a focused beam of light is fundamental to scanning technology.
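The beam-steering geometry described above can be sketched with a quick calculation: when a mirror tilts by a mechanical angle, the reflected beam deflects by twice that angle (a basic property of reflection), so a small tilt sweeps a wide arc at projection distance. The numbers below are illustrative, not taken from any particular product.

```python
import math

def spot_displacement(mech_tilt_deg, distance_m):
    """Lateral displacement of a laser spot on a screen at `distance_m`
    when the mirror tilts by `mech_tilt_deg`. The optical deflection of
    the reflected beam is twice the mechanical tilt angle."""
    optical_rad = math.radians(2 * mech_tilt_deg)
    return distance_m * math.tan(optical_rad)

# A +/-5 degree mechanical tilt gives +/-10 degrees of optical scan;
# at 1 m the spot moves roughly +/-17.6 cm from center.
print(round(spot_displacement(5, 1.0), 3))
```

This doubling is why even millimeter-scale mirrors with modest tilt ranges can scan a usefully wide field for projection and imaging.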
The scanning capability of MEMS mirrors is cleverly utilized to enable two basic functions for manipulating laser light: directed light projection and directed light acquisition (imaging). Almost all MEMS mirror applications exploit these two basic functions; the rest are custom additions tailored to specific applications. And customers tell us the applications are numerous, from automotive and transportation to AR/VR, from consumer and commercial to industrial, from biomedical to free-space optical communications, from robotics to smart cities. I firmly believe MEMS mirrors are the next big wave in MEMS technology, poised to reach annual production volumes in the billions over the next decade.
Current leaders in MEMS mirror products are Mirrorcle, Hamamatsu, Bosch, Sercalo, Ultimems, and others.
Stay tuned!
Products from pressure sensors to speakers have already reached shipments of a billion units or more annually. MEMS mirrors are the next big wave in MEMS technology, following in the footsteps of their predecessors.
As expected, security was a big topic at DAC this year. The growth of AI has demanded complex, purpose-built semiconductors to run ever-increasing workloads, and AI has in turn helped design those complex chips more efficiently and with lower power demands. There was a lot of discussion on these topics. But there is another side to this trend: while sophisticated generative AI makes it easier to design complex AI chips, it also makes it easier to attack and compromise those same chips. GenAI must also be used to harden designs against these attacks to keep innovation moving ahead.
Caspia is a company that clearly sees this challenge and has developed a comprehensive approach to reducing these risks. At DAC, Caspia’s co-founders hosted a workshop on Sunday, the company presented a SKYTalk on Tuesday, and it issued a press release detailing a collaboration to add its security technology to Siemens Questa One. Let’s take a closer look at how Caspia focuses security requirements at DAC.
The Workshop
Sunday at DAC is when various workshops and tutorials are held. One of these events was the third AI/CAD for Hardware Security Workshop, or AICAD4Sec 2025. Building on the success of the first two events, this one aimed to embrace the transformative intersection of AI, CAD, and hardware security. The stated vision of AICAD4Sec is to establish a cutting-edge platform that showcases advancements and sets the roadmap for secure, AI-enabled hardware design. Participating organizations include Google, Microsoft, Synopsys, and ARM, alongside academia and government agencies such as DARPA and AFRL.
The event was hosted by a small group of researchers, including two co-founders of Caspia Technologies.
Dr. Mark Tehranipoor, Department Chair & Intel Charles E. Young Chair in Cybersecurity at the University of Florida ECE. He is a founding director of the Florida Institute for Cybersecurity Research and a former associate chair and program director at the University of Florida. He has authored 16 books, delivered over 230 invited talks, and holds 22 patents. Mark is a fellow of the IEEE, the ACM, and the National Academy of Inventors.
Dr. Farimah Farahmandi, Wally Rhines Endowed Professor in Hardware Security and Assistant Professor at the University of Florida ECE. She is the Founding Director of the Silicon Design and Assurance Laboratory and is Associate Director of the Florida Institute of Cybersecurity and Edaptive Computing Transition Center. She has authored seven textbooks, 120+ journal/conference papers, and holds 12 patents issued/pending.
These are the folks who founded this workshop three years ago. The event on Sunday covered a wide range of topics, including:
CAD Tools for Side-Channel Vulnerability Assessment (Power, Timing, and Electromagnetic Leakage)
Security-Oriented Equivalency Checking and Property Validation
Fault Injection Analysis and Countermeasure Integration in CAD
CAD for Secure Packaging and Heterogeneous Integration
Assessment of Physical Probing and Reverse Engineering Risks
AI-Powered Tools for Pre-Silicon Vulnerability Mitigation and Countermeasure Suggestions
Large Language Models for Security-Aware Design Automation
ML-Enhanced Threat Detection Across Design Abstractions
AI-Augmented Detection of Malicious Functionality in Hardware Designs
AI-Enabled Security Verification for Emerging SoC Architectures
This workshop provided a great opportunity for researchers from many organizations to come together to develop a big-picture plan. One attendee was quoted as saying, “I was very energized by the workshop today. It was a great dialogue, and I enjoyed the time with Mark, Farimah, and the rest of the Caspia team.”
The SKYTalk
SKYTalks are keynote-style presentations delivered in the DAC Pavilion on the show floor. Mark Tehranipoor delivered a well-received presentation entitled New Innovation Frontier with Large Language Models for SoC Security.
There were two parts to his talk. In the first part, he described the problem faced by design teams today. While there is a strong focus on performance, power and functional verification, there exists a significant blind spot regarding security verification. The graphic at the top of this post was used by Mark to illustrate the perils that lurk below the water line.
He cited several examples from recent headlines that show how significant and real these security threats are becoming. He described the platform Caspia is developing to address these risks using GenAI technology. The LLM-powered security agents in this platform continually learn from real-world behaviors so designers can stay ahead of new and emerging threats. The tools are designed to complement, not replace, existing flows, essentially adding GenAI-fueled, expert-level security verification to them. The figure below summarizes the current capabilities of the Caspia security verification platform.
Caspia’s GenAI Security Verification Platform
In the second part of his talk, Mark described the details of how GenAI can be applied to SoC security verification with real examples. He began by describing the overall architecture of the GenAI security platform. The layers of this platform and how they interact are summarized in the diagram below.
GenAI Security Platform
The functions of each layer can be summarized as follows:
Application Layer
Handles user display, query submission, and UI/UX rendering
Provides chat-based interface and structured responses for ease of interaction
Supervisor and Orchestrator Layers
Performs LLM-driven user intent detection and input completion
Assigns tasks to the appropriate agent
Generates and schedules task plans for execution
Initiates and confirms execution of the tasks
Agent Layer
Verification chat agent
Security asset identification
Threat modeling and test plan
Security property generation
Vulnerability detection
Bug validation
Data Layer
Stores and provides access to datasets of text embeddings
Infrastructure Layer
Leverages cloud GPU clusters, APIs
Ensures scalable deployment of LLM & secure backend
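The layered flow above can be illustrated with a small sketch of how a supervisor might route a user query to a specialized agent and hand back a structured response. Every name here is invented for illustration; Caspia's actual implementation is not public, and a real system would dispatch to LLM-backed agents rather than keyword rules.

```python
# Hypothetical sketch of the supervisor -> agent routing described above.
# All agent names and the keyword-based intent detection are invented
# stand-ins; this is not Caspia's actual implementation.

AGENTS = {
    "asset": "Security asset identification",
    "threat": "Threat modeling and test plan",
    "property": "Security property generation",
    "vuln": "Vulnerability detection",
}

def detect_intent(query):
    """Stand-in for the LLM-driven intent detection in the supervisor
    layer: map keywords in the user's query to an agent key."""
    q = query.lower()
    if "asset" in q:
        return "asset"
    if "threat" in q or "test plan" in q:
        return "threat"
    if "property" in q:
        return "property"
    return "vuln"  # default: look for vulnerabilities

def orchestrate(query):
    """Supervisor/orchestrator: pick an agent, build a task plan, and
    return a structured response for the application layer."""
    key = detect_intent(query)
    # A real orchestrator would schedule, execute, and confirm the task
    # against an LLM-backed agent; here we just return the plan.
    return {"agent": key, "description": AGENTS[key], "query": query}

plan = orchestrate("Identify the security assets in this SoC design")
print(plan["agent"], "-", plan["description"])
```

The value of the layered design is that the application layer only ever sees structured responses, while the supervisor and orchestrator absorb the messy work of intent detection and task scheduling.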
This system provides a robust environment for analyzing the design and interacting with the designer, using highly focused security data to drive the overall process. Access to specialized security data is a key element in making the system useful for its intended purpose. Mark provided examples of results using general-purpose LLMs (e.g., ChatGPT) versus the specialized security LLMs and agents in this platform. The results from Caspia’s specialized technology were substantially more targeted, accurate, and effective.
The Agent Layer is where specific analysis of a design occurs. Mark provided several examples of how security assets can be identified, analyzed for weaknesses, and enhanced to deliver a security-hardened design. This architecture will continue to grow and become more specialized and sophisticated over time.
The Press Release
Siemens booth interview
Just before DAC began, Caspia issued a press release describing a collaboration with Siemens to integrate Caspia’s portfolio of security technologies into Siemens’ recently announced Questa™ One smart verification software portfolio, expanding its security verification features. The Caspia platform is designed to add expert security verification to existing flows, so this announcement is an example of that strategy.
At DAC, there was a follow-on event at the Siemens booth related to this announcement. Siemens had a soundproof, glass-enclosed recording booth at the show where it recorded discussions with various companies about collaborative efforts. Mark Tehranipoor was interviewed there about the work Caspia is doing with Siemens Digital Industries Software.
Mark covered the challenges design teams are facing and the technology Caspia is developing to address those challenges. The collaboration work is still in the early phase, so there will be more on this work going forward. The final slide in Mark’s presentation brought together several points of view on the work to illustrate the possibilities as shown below.
To Learn More
Security verification is a new and growing area for chip designers. I expect a lot more discussion on this topic at next year’s DAC, and Caspia appears to be positioned to lead it. You can read the entire press release announcing the collaboration with Siemens here. There is an excellent interview with Caspia’s CEO, Rich Hegberg, on SemiWiki here. And you can learn more about Caspia’s products and plans on the company’s website here. The interview in the Siemens booth will be added to the Caspia website, so check back to see this discussion. And that’s how Caspia focuses security requirements at DAC.