
Podcast EP313: How proteanTecs Optimizes Production Test
by Daniel Nenni on 10-24-2025 at 10:00 am

Daniel is joined by Alex Burlak, Vice President of Test & Analytics at proteanTecs. With combined expertise in production testing and data analytics for ICs and system products, Alex joined proteanTecs in October 2018. Before joining the company, he served as Senior Director of Interconnect and Silicon Photonics Product Engineering at Mellanox.

Dan explores the changing landscape of production testing with Alex, who explains that the current method of pass/fail testing doesn’t work well on the very large, chiplet-based designs of today. He describes a new approach from proteanTecs that utilizes detailed parametric data to increase visibility and optimize performance.

Alex explains how using proteanTecs embedded agents allows better understanding of parametric quality as well as the opportunity to optimize performance. He describes how the technology supports real-time decisions at the chip level and also allows system-wide analysis on the cloud to achieve better quality and reliability. Alex also discusses trends in this area and where he sees adoption and growth.

Contact proteanTecs

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview with Alex Demkov of La Luce Cristallina
by Daniel Nenni on 10-24-2025 at 8:00 am


Alex Demkov is co-founder and CEO of La Luce Cristallina. A distinguished figure in materials physics, he serves as a Professor at the University of Texas at Austin. He holds 10 U.S. patents, with many more applications pending, has published almost three hundred research papers and several books, and is a Fellow of the American Physical Society and a Senior Member of IEEE. Alex received his Ph.D. in Physics from Arizona State University and, prior to joining the University of Texas in 2005, worked in Motorola’s R&D organizations. His expertise positions him as an international authority and thought leader in the crucial domain of semiconductor manufacturing materials.

Can you tell us about your company?

La Luce Cristallina (LC) transforms over two decades of research into 200-mm (8-inch) barium titanate (BaTiO₃) wafers that overcome the performance, space, and power limitations of silicon-photonics modulators. Using RF magnetron sputtering to grow single-crystal, fully insulating films, we deliver foundry-compatible wafers from a secure U.S.-based supply chain at a scale of multiple 200-mm wafers per day. A typical stack includes 300–500 nm BaTiO₃ on top of 6–8 nm SrTiO₃ and 3 µm SiO₂ on a silicon carrier.

What problems are you solving?

Silicon photonics modulators have reached their limits in terms of speed, size, and power consumption. Our 200 mm BaTiO₃ wafers offer many benefits over alternative materials such as lithium niobate, which is prone to contamination issues. They are also more durable than electro-optic polymers and can withstand high temperatures, radiation, high frequencies, and other harsh conditions. Our product remains compatible with existing Si and SiN foundry flows and enables compact, low-drive electro-optic devices.

For example, a sub-500 µm MZI achieves VπL of 0.45 V·cm. BaTiO₃’s exceptionally high Pockels coefficient delivers a much stronger electro-optic response, enabling sub-volt, ultra-compact modulators that lithium niobate cannot match. Our work turns breakthrough research into practical solutions that meet the performance demands of numerous technology sectors.
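For reference, drive voltage and length trade off through the figure of merit: Vπ = VπL / L. A VπL of 0.45 V·cm corresponds to Vπ ≈ 1 V at L = 4.5 mm, whereas the 2–3 V·cm typical of published thin-film lithium niobate devices would need a several-times-longer device for the same drive voltage (simple arithmetic for illustration; the niobate figures are typical published values, not from La Luce Cristallina).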

What application areas are your strongest?

La Luce Cristallina’s solutions directly address the growing need for faster, lower-power optical interconnects in the datacom and data center markets, which currently represent the largest share of integrated silicon photonics demand. Our platform enables compact, efficient devices that support AI-driven workloads and high-density optical interconnects. Beyond data centers, our solutions extend to telecom, sensing, defense, aerospace, and advanced computing applications, including emerging quantum use cases. We empower device manufacturers and system developers to create next-generation electro-optic modulators, PICs, and integrated systems that deliver higher performance and greater reliability across these markets.

What keeps your customers up at night?

Cost is on everyone’s mind, with companies striving to reduce power, space, and OPEX without sacrificing performance. Sourcing and manufacturing through a secure supply chain is another primary consideration, with many companies evaluating the risks posed by restrictions on thin-film lithium niobate suppliers in China.

What does the competitive landscape look like, and how do you differentiate?

Our 200 mm wafers offer economies of scale at a reasonable price. We differentiate ourselves by offering customers more generations of device innovation (10+ years) than alternative materials. This helps companies secure a higher and quicker return on their investment while positioning them to support the rising bandwidth requirements of current and emerging use cases. BaTiO₃ is still a relative newcomer, but we’re confident its benefits will enable it to outpace indium phosphide, lithium niobate, polymers, and other materials over the long term.

What new features/technology are you working on?

We recently launched our new fabrication facility in Austin, TX, and announced the availability of our new 200-mm BaTiO₃ wafers. Our flagship product brings the functionality of crystalline oxides to semiconductors. Through this solution, we’re spearheading the transition from lithium niobate to high-performance BaTiO₃ as numerous sectors work to support optical interconnects, sensing, and optical computing use cases amid rising bandwidth demands.

How do customers normally engage with your company?

Through referrals, our website, our LinkedIn account, or by discovering us in leading industry publications.

Also Read:

CEO Interview with David Zhi LuoZhang of Bronco AI

CEO Interview with Dr. Bernie Malouin Founder of JetCool and VP of Flex Liquid Cooling

CEO Interview with Gary Spittle of Sonical


IPLM Today and Tomorrow from Perforce
by Daniel Nenni on 10-24-2025 at 6:00 am


Today, Perforce IPLM stands at the intersection of data management, automation, and collaboration, shaping the way companies design the next generation of chips and systems. Looking ahead, its evolution will reflect the growing convergence of hardware, software, and AI-driven engineering.

WEBINAR – Future Forward: IPLM Today and Tomorrow

At its core, Perforce IPLM provides a unified framework for managing semiconductor and embedded IP throughout its lifecycle. It allows design teams to catalog every block, core, and library in a centralized database, complete with metadata, version histories, and dependency maps. This approach replaces the fragmented spreadsheets and manual versioning methods that once plagued engineering organizations. Through its hierarchical BoM capabilities, IPLM gives users visibility into which IPs are used in which projects and how changes in one component ripple through a system. The result is greater design reuse, faster project ramp-up, and reduced risk of costly mismatches or rework.
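To illustrate the concept: a hierarchical BoM is essentially a dependency graph, and impact ("ripple") analysis is a reverse traversal from the changed IP to everything that consumes it. Here is a minimal Python sketch with hypothetical block names; it is not Perforce’s implementation:

```python
from collections import defaultdict

# Each IP block lists the blocks it instantiates (its dependencies).
# Reversing those edges tells us which consumers an IP change ripples to.
bom = {
    "soc_top":    ["cpu_subsys", "ddr_ctrl", "pcie_phy"],
    "cpu_subsys": ["riscv_core", "l2_cache"],
    "ddr_ctrl":   ["ddr_phy"],
    "riscv_core": [], "l2_cache": [], "ddr_phy": [], "pcie_phy": [],
}

def consumers_of(block: str) -> set[str]:
    """Return every block that directly or transitively instantiates `block`."""
    reverse = defaultdict(set)
    for parent, children in bom.items():
        for child in children:
            reverse[child].add(parent)
    impacted, stack = set(), [block]
    while stack:
        for parent in reverse[stack.pop()]:
            if parent not in impacted:
                impacted.add(parent)
                stack.append(parent)
    return impacted

# A new ddr_phy release ripples up through ddr_ctrl to soc_top.
print(consumers_of("ddr_phy"))  # {'ddr_ctrl', 'soc_top'}
```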

Perforce IPLM also serves as a bridge between hardware and software development workflows. It integrates with issue tracking systems such as Jira, incorporates permission controls for IP access, and provides traceability from design requirements to implementation. The recent simplification of the product’s branding—from Helix IPLM to Perforce IPLM—reflects a broader strategic vision: unifying the company’s tools under one ecosystem and highlighting the platform’s role in managing not just data, but organizational knowledge.

Perforce has also sought partnerships to expand IPLM’s role beyond traditional semiconductor design. Its collaboration with Siemens Digital Industries Software demonstrates a move toward unifying hardware and software development in an era of software-defined, silicon-enabled products. From autonomous vehicles to AI accelerators, the line between chip design and software architecture is blurring. IPLM’s ability to connect design data across domains positions it as an enabling technology for this convergence.

Looking toward the future, several forces will shape how IPLM evolves. The first is the rise of AI-assisted design, which will put a premium on the well-organized, traceable design data that IPLM manages. Second, global scalability will be essential. With design teams increasingly distributed across continents, IPLM must ensure secure and performant data access through caching, mirroring, and fine-grained permissions. Built-in compliance controls, such as geofencing and export management, will help companies navigate the complex regulatory landscape of semiconductor trade. A third trend is the growing emphasis on process maturity. Perforce’s collaboration with the Global Semiconductor Alliance on the IP Maturity Model provides a structured path for organizations to assess and improve their IP management practices. As design cycles shorten and product complexity rises, structured maturity frameworks will become essential for maintaining quality and repeatability across projects.

In parallel, the user experience of IPLM is evolving toward greater intuitiveness and interoperability. Engineers increasingly expect the same ease of use from enterprise tools that they enjoy from consumer software. Features such as advanced search filters, “shopping-cart” style IP selection, and visual dashboards are already modernizing how teams interact with data. Future iterations of IPLM will likely offer deeper integration with continuous integration/continuous deployment pipelines and EDA tools, creating a seamless bridge from specification to verification.

WEBINAR – Future Forward: IPLM Today and Tomorrow

Bottom line: The future of Perforce IPLM lies in its ability to act as both a knowledge repository and a collaborative engine. By harnessing AI, strengthening global scalability, and deepening its cross-domain integrations, Perforce is positioning IPLM to become a foundational layer of modern product development. As industries move toward AI-driven, software-defined, and silicon-enabled systems, the need for intelligent, traceable, and automated IP management will only grow. In that landscape, IPLM will not merely track design assets; it will help shape the future of how complex technology is imagined, built, and brought to market.

Also Read:

Chiplets: Powering the Next Generation of AI Systems

Chiplets: Powering the Next Generation of AI Systems
by Kalar Rajendiran on 10-23-2025 at 10:00 am


AI’s rapid expansion is reshaping semiconductor design. The compute and I/O needs of modern AI workloads have outgrown what traditional SoC scaling can deliver. As monolithic dies approach reticle limits, yields drop and costs rise, while analog and I/O circuits gain little from moving to advanced process nodes. To sustain performance growth, the industry is turning to chiplets—modular, scalable building blocks for multi-die designs that redefine how high-performance systems are built.

Why Chiplets Matter

Forecast to become a $411 billion market by 2035 (IDTechEx), multi-die designs divide large SoC functions into smaller, reusable dies (also called chiplets) that can be integrated into a single system-in-package (SiP). These chiplets may be heterogeneous or homogeneous, replicating cores for scaling. SiPs can rely on standard organic substrates or advanced interposers that enable dense interconnects and greater functionality within a compact footprint.

The vision is an open marketplace where designers can mix and match chiplets from multiple suppliers. Beyond JEDEC’s HBM memory modules, however, widespread adoption of off-the-shelf chiplets has been limited by fragmented standards and use cases. Progress continues with UCIe (Universal Chiplet Interconnect Express), Arm’s Chiplet System Architecture (CSA), and new industry collaborations aimed at breaking these barriers.

System Partitioning and Process Node Choices

The first step in chiplet design is deciding how to partition system functions. Compute, I/O, and memory blocks can each be implemented on the process node that offers the best balance of power, performance, and cost. For example, an AI compute die benefits from the latest node, while SRAM or analog functions may be built on less advanced—and less expensive—nodes.

Latency and bandwidth demands guide how these blocks connect. A 2.5D interposer may provide sufficient performance, but latency-sensitive systems sometimes require 3D stacking, as seen in AMD’s Ryzen 7000X3D processors, where compute and cache are vertically integrated for faster data access.

Designing Die-to-Die Connectivity

Interconnect performance defines chiplet success. UCIe has become the industry’s preferred die-to-die standard, offering configurations for both cost-efficient organic substrates and high-density silicon interposers. Designers must weigh data rates, lane counts, and bump pitch to achieve the right mix of bandwidth, area, and power.

AI I/O chiplets, for instance, may require UCIe links supporting 16G–64G data rates to maintain low-latency communication with compute dies. Physical layout choices for interface IPs—single-row or double-stacked PHYs—determine how much beachfront area is available for die-to-die interfaces, affecting both area efficiency and design complexity.
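As a rough feel for these trade-offs, the bandwidth available along a die edge follows from data rate, lanes per module, and how many modules fit in the beachfront. A generic back-of-envelope sketch (the module width and lane count below are illustrative assumptions, not vendor figures):

```python
# Back-of-envelope UCIe beachfront estimate. Real module widths, lane
# counts, and protocol overheads depend on the PHY, node, and package.

def edge_bandwidth_gbps(data_rate_gbps: float,
                        lanes_per_module: int,
                        module_width_mm: float,
                        beachfront_mm: float) -> float:
    """Aggregate raw one-direction bandwidth along one die edge."""
    modules = int(beachfront_mm // module_width_mm)
    return data_rate_gbps * lanes_per_module * modules

# Hypothetical advanced-package module: 64 lanes at 32 GT/s, ~0.5 mm wide.
print(edge_bandwidth_gbps(32, 64, 0.5, beachfront_mm=8.0))  # 32768 Gb/s
```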

Bridging UCIe’s streaming interface with on-chip protocols such as AXI, Arm CXS, or PXS is also key to maximizing throughput and minimizing wasted bandwidth.

Advanced Packaging and Integration

Packaging now sits at the heart of semiconductor innovation. Designers must choose between lower-cost organic substrates and denser 2.5D or 3D approaches. Silicon interposers deliver unmatched interconnect density but come with size and cost constraints. Emerging RDL (Redistribution Layer) interposers provide a balanced alternative—supporting larger system integration at reduced cost. Typical bump pitches range from 110–150 microns for substrates to 25–55 microns for interposers, shrinking further for 3D stacks.

Thermal, mechanical, and power-integrity challenges grow as multiple chiplets share one package. Early co-design across silicon and packaging domains is essential. Testability must also be planned in advance, using the IEEE 1838 protocol and multi-chiplet test strategies to ensure known-good-die (KGD) quality before assembly.

Securing and Verifying Multi-Die Designs

With multiple chiplets, the attack surface widens. Each chiplet must be authenticated and protected through attestation and secure boot mechanisms. Depending on the application, designers may integrate a root of trust to manage encryption keys or isolate sensitive workloads.

Data in transit must be secured using standards such as PCIe and CXL Integrity and Data Encryption (IDE), DDR inline memory encryption (IME), or Ethernet MACsec. Verification is equally critical: full-system simulation, emulation, and prototyping are required to validate die interactions before fabrication. Virtual development environments enable parallel software bring-up, shortening time-to-market.

Synopsys and Arm: Simplifying AI Chip Design

AI accelerators bring these challenges into sharp focus. They demand enormous compute density, massive bandwidth, and efficient integration across heterogeneous dies. To address this complexity, Synopsys and Arm—long-time collaborators—are combining their expertise to streamline AI and multi-die development.

At Chiplet Summit 2025, Synopsys VP of Engineering Abhijeet Chakraborty and Arm VP of Marketing Eddie Ramirez discussed how their companies are reducing design risk and speeding delivery. Under Arm Total Design, Arm’s Neoverse Compute Subsystems (CSS) are now pre-validated with Synopsys IP, while Synopsys has expanded its Virtualizer prototyping solution and Fusion Compiler quick-start flows for the Arm ecosystem. These integrations let customers implement Arm compute cores more efficiently, validate designs earlier, and begin software development long before silicon arrives.

“We feel we are barely scratching the surface,” Chakraborty noted. “There’s a lot more work we can do in this space.” Both leaders emphasized that interoperability, reliability, and security will remain top priorities as chiplet ecosystems evolve.

The Road Ahead

The semiconductor industry is shifting from monolithic to modular design. Continued progress will depend on collaboration, standardization, and shared innovation across companies and ecosystems. With Synopsys advancing chiplet standards, design flows, and verified IP subsystems, the path from concept to production is becoming faster and more predictable. Customers can focus on their core competencies while offloading other aspects of the design to domain experts, enabling fast and reliable time-to-market.

The next generation of AI systems won’t rely on bigger chips—they’ll be built from smarter, interconnected chiplets, delivering scalable performance, efficiency, and flexibility for the most demanding compute workloads of the future.

Also Read:

The Rise, Fall, and Rebirth of In-Circuit Emulation: Real-World Case Studies (Part 2 of 2)

Statically Verifying RTL Connectivity with Synopsys

Why Choose PCIe 5.0 for Power, Performance and Bandwidth at the Edge?


Better Automatic Generation of Documentation from RTL Code
by Tom Anderson on 10-23-2025 at 6:00 am


One technical topic I always find intriguing is the availability of links between documentation and chip design. It used to be simple: there weren’t any. Architects wrote a specification (spec) in text, in Word if they had PCs, or using “troff” or a similar format if they were limited to Unix platforms. Then the hardware designers started drawing schematics or writing RTL code, and the programmers did their thing. Verification and validation were all about making sure everything worked together.

Whenever the specification changed, manual updates to the hardware and software were required. When implementation issues caused the design to differ from the original intent, the impact was rarely reflected back in the spec. Thus, when it came time to produce documentation for the end user, it was a lot of manual work to combine bits of the spec, update to reflect the actual design, and add explanatory material for the target audience.

These days, we have links galore. There are many ways to generate hardware and software code from various specification and documentation formats, plus methods to generate documentation from source code. The former is not a new idea. I remember working on a new processor design around 1988-1989 in which the details of the instruction set changed numerous times. I wrote an “awk” script to automatically generate the Verilog RTL design for the instruction decoder based on a tabular representation of the opcodes and their meanings.
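That table-driven trick is just as easy today. Here is a minimal sketch in the same spirit, in Python rather than awk, with invented opcodes:

```python
# Generate a Verilog one-hot instruction decoder from an opcode table.
# The opcodes and mnemonics here are invented for illustration.
OPCODES = [
    ("4'b0000", "ADD"),
    ("4'b0001", "SUB"),
    ("4'b0010", "LOAD"),
    ("4'b0011", "STORE"),
]

def gen_decoder() -> str:
    lines = [
        "module decoder(input [3:0] opcode, output reg [3:0] onehot);",
        "  always @(*) begin",
        "    case (opcode)",
    ]
    for i, (code, name) in enumerate(OPCODES):
        lines.append(f"      {code}: onehot = 4'b{1 << i:04b};  // {name}")
    lines += [
        "      default: onehot = 4'b0000;",
        "    endcase",
        "  end",
        "endmodule",
    ]
    return "\n".join(lines)

print(gen_decoder())  # regenerate whenever the opcode table changes
```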

These days, there are a lot of pieces in a typical chip that can be generated from various specification formats, such as registers from IP-XACT or SystemRDL and state machines from transition tables. We’re starting to see generative AI spit out even larger chunks of the design, essentially based on natural language specifications in the form of chats. Being able to regenerate RTL code whenever a specification changes saves a great deal of time and effort over the course of a chip project.

Generating documentation from code is also not a new idea. Solutions in this space started on the software side, with tools such as Doxygen. The idea is that certain aspects of the documentation can be generated automatically from the code, with pragmas or some other in-line mechanism available for programmers to control the generation and add content. Numerous options for document generation are now available, with AI-based techniques quickly gaining acceptance. Being able to regenerate documentation every time the source code changes also saves a lot of time and eliminates a lot of manual effort.

From what I can gather, documentation generation from code is very common in the software world. However, I’ve been surprised by how few hardware projects embrace this approach in a big way. I often hear designers and verification engineers complain that languages such as SystemVerilog are not as well supported by shareware documentation tools. They also say that they don’t have the level of control needed to get the quality of results their end users demand.

AMIQ EDA has a commercial product, Specador Documentation Generator, focused on hardware design and verification code. I figured that there must be some good reasons why their users chose this solution over free utilities, so I chatted with CEO Cristian Amitroaie. The first thing he said was that Specador was created with hardware engineers in mind. It supports source code written in SystemVerilog, Verilog, VHDL, the e language, and more. It covers the RTL design plus the verification testbench, components, models, and tests. It generates both PDF and HTML output.

To me, the most impressive aspect of Specador is that it leverages all the language knowledge available in the AMIQ EDA Design and Verification Tools (DVT) suite. Their front end compiles all the design and verification code and builds a flexible internal model. Users of the integrated development environment DVT IDE can easily browse, edit, and understand the code, and even query the model with AI Assistant.

Understanding the design means, for example, that users can generate design hierarchies, schematics, and state machine diagrams. Since the DVT tools also understand the Universal Verification Methodology (UVM), users can generate class or component diagrams including TLM connections, cross-linked class inheritance trees, and other useful forms of documentation for the testbench. My choice of the word “documentation” here is deliberate, because many of the design and verification diagrams that users might generate within the IDE are also useful as part of user manuals and other chip documentation.

Cristian stressed that Specador (like all their products) uses accurate language parsers to compile the code so that it understands the project structure. Users can employ it to document design or verification environments, even when comments are not present to provide additional context. Of course, Specador also supports the ability to use comments to format documentation and to add content that can’t be inferred from the source code.

I asked Cristian what’s new in Specador, and he mentioned that AMIQ EDA keeps implementing new features and enhancements based on customer feedback. For example, they recently added the ability to quickly preview the documentation directly in the IDE, the ability to apply custom filters when generating schematic or FSM diagrams, the ability to work with Markdown and reStructuredText markup languages, and last but not least the ability to generate documentation using their AI Assistant.

Specador makes it possible for design and verification engineers to easily create and maintain proper and well-organized documentation. Users can control what documentation they create by filtering or selecting elements in the design and testbench. They can quickly embed or link to external documentation. Specador integrates easily into existing development flows, allowing design and verification groups to automate the documentation process.

Above all, Specador keeps the generated documentation in sync with the source code, saving a great deal of maintenance time and effort as the code evolves. I thank Cristian for his time, and recommend looking at the product information, exploring the documentation generated for the Ibex embedded 32-bit RISC-V CPU core, and reading a post on real-world user experience to learn more.

Also Read:

2025 Outlook with Cristian Amitroaie, Founder and CEO of AMIQ EDA

Adding an AI Assistant to a Hardware Language IDE

Writing Better Code More Quickly with an IDE and Linting


FD-SOI: A Cyber-Resilient Substrate for Secure Automotive Electronics
by Daniel Nenni on 10-22-2025 at 10:00 am


This Soitec white paper highlights how Fully Depleted Silicon-On-Insulator (FD-SOI) technology provides a robust defense against Laser Fault Injection (LFI), a precise, laboratory-grade attack method that can compromise cryptographic and safety-critical hardware. As vehicles become increasingly digital and connected, with dozens of microcontrollers and over-the-air updates, hardware-level security has become central to automotive cybersecurity standards such as ISO/SAE 21434.

The Rising Threat of Physical Fault Attacks

Physical fault injection attacks (FIA) can bypass secure boot, unlock protected debug ports, and disrupt program flow. Among these, LFI stands out for its precision, using tightly focused near-infrared laser pulses to flip bits or alter circuit timing. While voltage and electromagnetic glitches can occur in the field, LFI remains the gold standard for systematically probing silicon vulnerabilities in controlled laboratory conditions.

As front-side access becomes harder due to thicker metal layers and shielding, back-side laser access through the substrate is increasingly used. This shift makes substrate engineering—the physical foundation of a chip—a critical security factor.

Why FD-SOI Disrupts Laser Attack Mechanisms

FD-SOI differs from bulk CMOS in that its transistors are built on an ultra-thin silicon layer electrically isolated from the main wafer by a buried oxide (BOX). This structural difference eliminates the main LFI fault mechanisms found in bulk silicon.

Four dominant bulk mechanisms are neutralized by FD-SOI:

  1. Drain/body charge collection – FD-SOI’s thin silicon layer and BOX barrier dramatically reduce the photocurrent that lasers generate at PN junctions.

  2. Laser-induced IR-drop – In bulk CMOS, current loops between wells and substrate can cause transient voltage drops. FD-SOI, using isolated body-bias networks, removes this conduction path.

  3. Substrate diffusion and funneling – Charge carriers cannot spread vertically through the BOX, preventing multi-cell upsets and latch-up.

  4. Parasitic bipolar amplification – Only a weak, lateral bipolar effect remains in FD-SOI, which can be further mitigated using reverse body-bias (RBB) to raise the laser energy threshold.

By blocking substrate conduction and confining active regions, FD-SOI significantly reduces the area and energy range vulnerable to laser faults.

Experimental Validation

Experiments comparing 22FDX FD-SOI and 28 nm bulk CMOS devices—including D-flip-flops, SRAMs, and AES/ECC crypto cores—confirmed the theoretical advantages. In tests, FD-SOI required up to 150× more laser shots to produce the same fault observed in bulk devices. The time-to-first-fault rose from roughly ten minutes to ten hours, while the minimum fault energy threshold increased from 0.3 W to over 0.5 W.

Spatial and depth-sensitivity mapping showed that bulk silicon has wide fault-prone zones, while FD-SOI faults are confined to sub-micron “hotspots” with a narrow focal depth of only about ±1 µm. Attackers must therefore perform ultra-fine spatial scans, drastically increasing effort and cost.

Furthermore, excessive laser power in FD-SOI caused permanent damage or stuck bits, effectively creating a natural deterrent, since aggressive attempts could destroy the target device.

Implications for Automotive Security Compliance

In the ISO/SAE 21434 framework, reducing attack likelihood directly lowers cybersecurity risk. FD-SOI’s physical resilience therefore simplifies compliance and can help products achieve Common Criteria or SESIP assurance levels (EAL4+ or higher) without extensive additional countermeasures. Because attack duration, equipment complexity, and expertise all increase, FD-SOI provides a quantifiable uplift in assurance for automotive OEMs and tier-one suppliers.

Toward a Next-Generation Secure Substrate

The authors envision extending FD-SOI’s benefits through substrate-level innovation, transforming it from a passive platform into an active cyber-resilient layer. Two emerging techniques are highlighted:

  1. Buried optical barriers—highly doped layers under the BOX that absorb or scatter infrared light, reducing LFI energy transmission while enabling anti-counterfeit watermarking.

  2. Integrated sensors and PUFs (Physically Unclonable Functions)—substrate-embedded monitors that detect tampering or derive unique cryptographic identities from manufacturing variations.

Together, these innovations could allow the substrate to detect attacks, react in real time, and cryptographically bind the silicon identity to the vehicle platform.

Bottom line: FD-SOI represents a material-level breakthrough in hardware security. By eliminating substrate pathways exploited in bulk CMOS, it narrows the laser fault window, increases attack complexity, and provides tunable resilience through body-bias control. These benefits align directly with evolving automotive cybersecurity regulations, offering faster certification and lower system costs.

As substrate engineering continues toward integrated optical barriers and anti-tamper features, FD-SOI is poised to become the reference platform for secure automotive electronics, anchoring trust at the silicon level.

Read the full white paper here.

Also Read:

Soitec’s “Engineering the Future” Event at Semicon West 2025

How FD-SOI Powers the Future of AI in Automobiles

Powering the Future: How Engineered Substrates and Material Innovation Drive the Semiconductor Revolution


Podcast EP312: Approaches to Advance the Use of Non-Volatile Embedded Memory with Dave Eggleston
by Daniel Nenni on 10-22-2025 at 8:00 am

Daniel is joined by Dave Eggleston, Senior Business Development Manager at Microchip with a focus on licensing SST SuperFlash technology. Dave’s extensive background in Flash, MRAM, RRAM, and storage is built on 30+ years of industry experience, including serving as VP of Embedded Memory at GLOBALFOUNDRIES, CEO of RRAM pioneer start-up Unity Semiconductor (acquired by Rambus), Director of Flash Systems Engineering at Micron, NVM Product Engineering Manager at SanDisk, and NVM Engineer at AMD. Dave is frequently invited to speak at international conferences as an expert on emerging NVM technologies and their applications and has 25+ NVM-related patents granted.

Dan explores the requirements of embedded non-volatile memory (NVM) for application in 32-bit microcontrollers with Dave, who provides a broad overview of the many markets served by these technologies. He describes the challenges of integrating NVM as process nodes advance. Dave explains the benefits of Microchip’s SST SuperFlash technology and also discusses the cost and time-to-market benefits of using a chiplet approach to add NVM.

Dave also touches on the recently announced strategic collaboration between Microchip/SST and Deca Technologies to innovate a comprehensive NVM chiplet package to facilitate customer adoption of modular, multi-die systems.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Learning from In-House Datasets
by Bernard Murphy on 10-22-2025 at 6:00 am


At a DAC Accellera panel this year there was some discussion on cross-company collaboration in training. The theory is that more collaboration would mean a larger training set and therefore higher accuracy in GenAI (for example in RTL generation). But semiconductor companies are very protective of their data and reports of copyrighted text being hacked out of chatbots do nothing to allay their concerns. Also, does evidence support that more mass training leads to more effective GenAI? GPT5 is estimated to have been trained on 70 trillion tokens versus GPT4 at 13 trillion tokens, yet GPT5 is generally viewed as unimpressive, certainly not a major advance on the previous generation. Maybe we need a different approach.

More training or better focused training?

A view gathering considerable momentum is that while LLMs do an excellent job in understanding natural language, domain-specific expertise is better learned from in-house data. While this data is obviously relevant, clearly there’s a lot less of it than in the datasets used to train big GenAI models. A more thoughtful approach is necessary to learn effectively from this constrained dataset.

Most/all approaches start with a pre-trained model (the “P” in GPT) since that already provides natural language understanding and a base of general knowledge. New methods add to this base through fine-tuning. Here I’ll touch on labelling and federated learning methods.

Learning through labels

Labeling harks back to the early days of neural nets, where you provided training pictures of dogs labeled “dog” or perhaps the breed of dog. The same intent applies here, except you are training on design data examples which you want a GenAI model to recognize and classify. Since manually labeling large design datasets would not be practical, recent innovation is around semi-automated labeling assisted by LLMs.

Some large enterprises outsource this task to value-added service providers like Scale.com, which deploy large teams of experts using their internal tools to develop labeling, applying supervised fine-tuning (SFT) augmented by reinforcement learning from human feedback (RLHF). Something important to understand here is that labeling is GenAI-centric. You shouldn’t think of labels as tags on design data features but rather as fine-tuning additions to GenAI data (attention, etc.) generated from training question/answer (Q/A) pairs expressed in natural language, where answers include supporting explanations, perhaps augmented by content for RAG.

In EDA this is a very new field as far as I can tell. The topic comes up in some of the papers from the first International Conference on LLM-Aided Design (LAD) held this year at Stanford. One such paper works around the challenge of getting enough expert-generated Q/A pairs by generating synthetic pairs through LLM analysis of unlabeled but topic-appropriate documents (for example on clock domain crossings). This they augment with few-shot learning based on whatever human expert Q/A pairs they can gather.
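To make the idea concrete, a synthetic training record might look something like this (an invented illustration of the data shape, not an example from the paper):

```python
# Invented example of a synthetic Q/A fine-tuning record, of the kind an
# LLM might generate from an unlabeled clock-domain-crossing app note.
synthetic_pair = {
    "question": "Why is a two-flop synchronizer not sufficient for a "
                "multi-bit bus crossing clock domains?",
    "answer": ("Each bit can resolve metastability on a different receiving "
               "clock edge, so the destination domain may sample an "
               "inconsistent word. Use a gray-coded counter, a handshake, "
               "or an asynchronous FIFO instead."),
    "context_doc": "cdc_app_note.pdf",  # source used to synthesize the pair
    "origin": "llm_synthetic",          # vs. "human_expert" few-shot seeds
}
```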

You could imagine using similar methods for labeling around other topics in design expertise: low-power design, secure design methods, optimizing synthesis, floorplanning methods and so on. While attention in the papers I have read tends to focus on using this added training to improve RTL generation, I can see more immediate value in verification, especially in static verification and automated design reviews.

Federated Learning

Maybe beyond some threshold more training data isn’t necessarily better, but the design data found in any given design enterprise doesn’t yet suffer from that problem, and more data could still help, if we could figure out how to combine learning from multiple enterprises without jeopardizing the security of each proprietary dataset. This is a common need across many domains where webcrawling for training data is not permitted (medical and defense data are two obvious examples).

Instead of bringing data to the model for training, Federated Learning sends an initial model from a central site (the aggregator) to individual clients and develops fine-tuning training in the conventional manner within each secure environment. When training is complete, only the trained parameters are sent back to the aggregator, which harmonizes inputs from all clients and then sends the refined model back to the clients. This process iterates, terminating when the central model converges.
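A minimal sketch of the aggregation step, in its simplest “federated averaging” form (toy parameter vectors stand in for model weights; real platforms add secure transport, clipping, and much more):

```python
# Each client fine-tunes locally and returns only its parameters; the
# aggregator averages them, weighted by local training-set size.

def fed_avg(client_params: list[list[float]],
            client_sizes: list[int]) -> list[float]:
    total = sum(client_sizes)
    return [
        sum(p[i] * n for p, n in zip(client_params, client_sizes)) / total
        for i in range(len(client_params[0]))
    ]

# Three enterprises report locally fine-tuned parameters (toy values).
params = [[0.10, 0.50], [0.20, 0.40], [0.40, 0.10]]
sizes = [1000, 3000, 1000]
print(fed_avg(params, sizes))  # [0.22, 0.36]; iterate until convergence
```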

There are commercial platforms for Federated Learning, as well as open-source options from some big names: TensorFlow Federated from Google and NVIDIA FLARE are two examples. Google Cloud and IBM Cloud offer Federated Learning support, while Microsoft supports open-source Federated Learning options within Azure.

This method could be quite effective in the semiconductor space if a central AI platform or consortium could be organized to manage the process. And if a critical mass of semiconductor vendors is prepared to buy in 😀.

Perhaps the way forward for learning in industries like ours will be through a combination of these methods – federated learning as a base layer to handle undifferentiated expertise and labeled learning for continued differentiation in more challenging aspects of design expertise. Definitely an area to watch!

Also Read:

PDF Solutions Calls for a Revolution in Semiconductor Collaboration at SEMICON West

The AI PC: A New Category Poised to Reignite the PC Market

Webinar – The Path to Smaller, Denser, and Faster with CPX, Samtec’s Co-Packaged Copper and Optics


Liberty IP Excellence: Building a Robust Verification Framework for Automotive IPs
by Daniel Nenni on 10-21-2025 at 2:00 pm

As 2025 draws to a close, the semiconductor industry continues to push boundaries, particularly in automotive applications where reliability is non-negotiable. At the TSMC Open Innovation Platform forum this year, a collaborative presentation by NXP Semiconductors and Siemens EDA stood out: “Liberty IP Excellence: Building a Robust Verification Framework for Automotive IPs.” Presented by Santhosh K, Khushboo R, and Pramod G from NXP, alongside Ajay Kumar and Ray Valencia from Siemens EDA, this talk highlighted the critical role of Liberty files in SoC design and proposed innovative quality assurance (QA) methodologies to ensure flawless IP delivery.

The motivation stems from the foundational IPs (standard cells, memories, IOs) that form the backbone of System-on-Chip designs. Liberty (.lib) files serve as the industry standard for encapsulating timing, power, noise, and more, including advanced features like statistical variation and waveform data for sub-10nm nodes. NXP, a pioneer in automotive semiconductors, emphasizes uncompromising quality to meet the sector’s stringent demands. Early QA in their flow minimizes costs, time, and resources, but Liberty’s complexity—encompassing aspects like Liberty Variation Format (LVF) for statistical data and Composite Current Source (CCS) for timing, noise, and power—poses significant challenges. Interpreting these files manually is error-prone, and inaccuracies can cascade into design failures, especially in safety-critical automotive systems.

The proposed methodology embeds advanced verification tools into NXP’s QA flow, leveraging Siemens’ Solido Analytics for AI-driven error detection, analysis, and comparison. This solution enables full automation, easing adoption for Liberty users while saving engineering and compute resources. It targets advanced nodes, focusing on LVF and CCS to ensure design reliability.

Diving into NXP’s QA flow analysis, the presentation detailed several key components. First, outlier detection using AI identifies anomalies in Liberty data that deviate from neighboring values, such as slew/load points within tables or across Process-Voltage-Temperature (PVT) conditions. Siemens Solido Analytics employs machine learning models to sweep dimensions like transitions, constraints, temperature, voltage, and custom sweeps (e.g., cell drive strength). Users set tolerance thresholds, triggering alerts for outliers, which could indicate characterization issues.
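As a simplified illustration of the concept, an outlier check can flag a table entry that deviates sharply from its grid neighbors. The toy neighbor-median check below is not Solido’s actual ML approach, and the numbers are invented:

```python
from statistics import median

# Liberty-style delay table: rows = input slew, cols = output load.
TABLE = [
    [0.010, 0.015, 0.022],
    [0.014, 0.090, 0.028],  # 0.090 is an injected anomaly
    [0.019, 0.026, 0.035],
]

def outliers(table: list[list[float]],
             tol: float = 0.5) -> list[tuple[int, int]]:
    """Flag entries deviating from the median of their grid neighbors."""
    flagged, rows, cols = [], len(table), len(table[0])
    for r in range(rows):
        for c in range(cols):
            nbrs = [table[r + dr][c + dc]
                    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                    if 0 <= r + dr < rows and 0 <= c + dc < cols]
            ref = median(nbrs)
            if abs(table[r][c] - ref) > tol * ref:
                flagged.append((r, c))
    return flagged

print(outliers(TABLE))  # [(1, 1)] -> the anomalous slew/load point
```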

Version-to-version comparison addresses common scenarios like PDK revisions, design spec changes, or legacy setup recreations. The flow uses plotting for rapid visualization and interpolates tables for fair, apples-to-apples assessments, accelerating correlation and reducing manual effort.

Another pillar is CCS versus Non-Linear Delay Model (NLDM) verification. CCS captures timing in the current domain, while NLDM uses voltage; mismatches signal incorrect settings. The methodology converts CCS data to the voltage domain for direct comparison, ensuring consistency.
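For a purely capacitive load, this conversion follows the textbook relation V(t) = (1/Cload) ∫ I(τ) dτ: integrating the CCS current waveform over the load capacitance reconstructs the output voltage waveform, from which NLDM-style delay and transition values can be read at the standard threshold crossings and compared directly.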

LVF and LVF Moments checks tackle the complexity of statistical characterization. Nominal Liberty characterization takes hours, but LVF—requiring up to five additional statistical measures—can extend to weeks or months. LVF formats include 1-sigma early/late (typically 3-sigma/3, representing Gaussian distributions) and Moments (mean shift, skewness, standard deviation) for complex distributions. On-chip variation is crucial for nodes ≤20nm, where inaccurate LVF can cause 50-100% timing deviations. Automated checks detect outliers in LVF groups and recreate sigma tables from Moments for matching; discrepancies highlight inconsistencies.

Finally, Power-Performance-Area (PPA) analysis enables early library comparisons across technologies. With thousands of cells and varying formats, traditional methods delay insights until later design phases like synthesis or P&R. The proposed heuristics align data by cell/pin names, functionalities, and table indices, revealing trends like performance versus drive strength (e.g., Dataset A buffers outperforming B at drives >3) or power consumption (e.g., Dataset A flip-flops superior across drives).

Bottom line: this embedded solution delivers comprehensive QA for automotive libraries, including impact analysis across PDK revisions, automated execution, LVF validation, outlier detection, CCS-NLDM alignment, and full coverage. NXP achieved 2x efficiency gains: for trends and validation on 1000 cells across 25 PVTs, runtime dropped from 10 to 5 units; impact analysis (6 PVTs) from 5 to 3 units; PPA comparison (4 PVTs) similarly improved. This collaboration exemplifies 2025’s tech ethos—AI-augmented precision amid escalating complexity—paving the way for safer, more efficient automotive semiconductors.

Also Read:

Accelerating SRAM Design Cycles: MediaTek’s Adoption of Siemens EDA’s Additive AI Technology at TSMC OIP 2025

AI-Driven DRC Productivity Optimization: Insights from Siemens EDA’s 2025 TSMC OIP Presentation

Why chip design needs industrial-grade EDA AI


ASU Silvaco Device TCAD Workshop: From Fundamentals to Applications
by Daniel Nenni on 10-21-2025 at 10:00 am


The ASU-Silvaco Device Technology Computer-Aided Design Workshop is a pivotal educational and professional development event designed to bridge the gap between theoretical semiconductor physics and practical device engineering. Hosted by Arizona State University in collaboration with Silvaco, a leading provider of TCAD software, this workshop offers participants a comprehensive exploration of semiconductor device simulation, from foundational concepts to advanced applications. Spanning topics such as device physics, process simulation, and real-world design challenges, the workshop equips engineers, researchers, and students with the tools to innovate in the rapidly evolving field of microelectronics.

The workshop typically begins with an introduction to TCAD fundamentals, emphasizing the role of simulation in modern semiconductor design. Participants learn how TCAD tools model the electrical, thermal, and optical behavior of devices at the nanoscale. Silvaco’s suite of software, including Atlas, Victory Process, and DeckBuild, is introduced as a powerful platform for simulating semiconductor fabrication and performance. These tools allow users to predict device behavior under various conditions, optimize designs, and reduce the need for costly physical prototyping. The foundational sessions cover key concepts like carrier transport, quantum effects, and material properties, ensuring attendees grasp the physics underpinning TCAD simulations.

As the workshop progresses, it delves into practical applications, demonstrating how TCAD is used in industries such as integrated circuits, power electronics, and photovoltaics. Participants engage in hands-on sessions, guided by ASU faculty and Silvaco engineers, to simulate processes like doping, oxidation, and lithography. These exercises highlight how TCAD can optimize fabrication steps, improve yield, and enhance device reliability. For instance, attendees might simulate a MOSFET’s performance to analyze parameters like threshold voltage or leakage current, gaining insights into design trade-offs. The workshop also covers advanced topics, such as modeling FinFETs, tunnel FETs, or emerging 2D materials like graphene, reflecting the cutting-edge needs of the semiconductor industry.

A key strength of the ASU-Silvaco workshop is its emphasis on bridging academia and industry. ASU’s expertise in semiconductor research, combined with Silvaco’s industry-standard tools, creates a unique learning environment. Participants, ranging from graduate students to seasoned engineers, benefit from real-world case studies, such as optimizing power devices for electric vehicles or designing low-power chips for IoT applications. The collaborative setting fosters networking, enabling attendees to connect with peers and experts, potentially sparking future research or career opportunities.

By the workshop’s conclusion, participants gain a robust understanding of TCAD’s role in accelerating innovation. They leave equipped with practical skills to simulate and analyze semiconductor devices, as well as an appreciation for how these tools address challenges like scaling, power efficiency, and thermal management. The ASU-Silvaco Device TCAD Workshop stands out as a vital platform for advancing semiconductor expertise, empowering attendees to contribute to the next generation of electronic devices in a world increasingly driven by technology.

Register Here

About Silvaco
Silvaco is a provider of TCAD, EDA software, and SIP solutions that enable semiconductor design and digital twin modeling through AI software and innovation. Silvaco’s solutions are used for semiconductor and photonics processes, devices, and systems development across display, power devices, automotive, memory, high performance compute, foundries, photonics, internet of things, and 5G/6G mobile markets for complex SoC design. Silvaco is headquartered in Santa Clara, California, and has a global presence with offices located in North America, Europe, Egypt, Brazil, China, Japan, Korea, Singapore, Vietnam, and Taiwan. Learn more at silvaco.com.

Also Read:

GaN Device Design and Optimization with TCAD

Simulating Gate-All-Around (GAA) Devices at the Atomic Level

Silvaco: Navigating Growth and Transitions in Semiconductor Design