

How Channel Operating Margin (COM) Came to Be and Why It Endures
by Admin on 07-30-2025 at 10:00 am


According to a recent whitepaper by Samtec, Channel Operating Margin (COM) didn’t start as an algorithm; it started as a truce. In the late 2000s and early 2010s, interconnect designers and SerDes architects were speaking past each other. The former optimized insertion loss, return loss, and crosstalk against frequency-domain masks; the latter wrestled with real receivers, equalization limits, and power budgets. As data rates jumped from 10 Gb/s per line to 25 Gb/s—and soon after to 50 Gb/s and beyond—the old “mask” paradigm broke. Guard-banding everything to make frequency masks work would have over-constrained designs and under-informed transceivers. The industry needed a shared language that tied physical channel features to receiver behavior. That shared language became COM.

Figure 1. The use of eye diagrams fundamentally changed how compliance testing was done.

Two technical insights catalyzed the shift. First, insertion loss alone was not predictive once vias, connectors, and packages crept into electrically long territory. Ripples in the insertion-loss curve, codified as insertion-loss deviation (ILD), were the visible fingerprints of reflections that eroded eye openings. Second, crosstalk budgeting matured from ad hoc limits to constructs like integrated crosstalk noise (ICN) and insertion-loss-to-crosstalk ratio (ICR), recognizing that noise must be weighed against the channel’s fundamental attenuation. These realizations coincided with the industry’s pivot from NRZ at 25 Gb/s to PAM4 at 50 Gb/s per line, raising complexity and the appetite for a more realistic, end-to-end figure of merit.
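
To make the ILD idea concrete, here is a minimal sketch, assuming a toy loss curve and a simple polynomial fit; the actual IEEE 802.3 definition uses a specific fitting basis and weighting that this sketch does not reproduce.

```python
import numpy as np

# Toy illustration of insertion-loss deviation (ILD): fit a smooth curve to
# an insertion-loss trace and treat the residual ripple as the deviation.
# The loss curve below is invented for demonstration purposes only.

f = np.linspace(0.05, 26.56, 500)             # frequency, GHz
il_smooth = -0.8 * f                          # toy smooth loss, dB
ripple = 0.4 * np.sin(2 * np.pi * f / 2.0)    # toy reflection ripple, dB
il_db = il_smooth + ripple                    # "measured" insertion loss

coeffs = np.polyfit(np.sqrt(f), il_db, 3)     # smooth fit versus sqrt(f)
ild = il_db - np.polyval(coeffs, np.sqrt(f))  # deviation = measured - fit
print(f"peak |ILD| = {np.abs(ild).max():.2f} dB")
```

The ripple left over after the fit is what correlates with reflection-driven eye closure.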

Enter the time domain. The breakthrough was to treat the channel’s pulse response as the “Rosetta Stone” between interconnect physics and SerDes design. Because a random data waveform is just a symbol sequence convolved with the pulse response, you can sample that pulse at unit intervals to quantify intersymbol interference (ISI), then superimpose sampled crosstalk responses as noise. That reframed compliance from static frequency limits to a statistical signal-quality calculation anchored in what receivers actually see. Early debates over assuming Gaussian noise led to a pragmatic conclusion: copper channel noise is not perfectly IID Gaussian; using “real” distributions derived from pulse-response sampling avoids chronic over-design.
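
As a minimal sketch of that sampling idea (the waveform and numbers below are invented for illustration, not the standard’s reference code):

```python
import numpy as np

# Sample a toy pulse response once per unit interval (UI): the largest
# sample is the main cursor; the remaining samples quantify ISI.

dt = 0.01                                        # time step, in UI
t = np.arange(0.0, 20.0, dt)                     # 20 UI of time
pulse = 0.8 * np.exp(-((t - 5.0) / 1.2) ** 2)    # toy dispersed pulse

samples_per_ui = int(round(1.0 / dt))
peak = int(np.argmax(pulse))                     # sampling phase at the peak
cursors = pulse[peak % samples_per_ui::samples_per_ui]  # one sample per UI

main = cursors.max()                             # main cursor h(0)
isi = np.delete(cursors, np.argmax(cursors))     # pre/post cursors = ISI
print(f"main cursor: {main:.3f}, RMS ISI: {np.sqrt(np.mean(isi ** 2)):.3f}")
```

Crosstalk aggressors are handled the same way: sample their responses at the victim’s sampling phase and fold them in as additional noise terms.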

COM formalized this flow. Practically, you feed measured or simulated S-parameters into a filter chain (including transmitter/receiver shaping), generate a pulse response via iDFT, and compute ISI and crosstalk contributions as RMS noise vectors relative to the main cursor. Equalization (CTLE/FFE/DFE) and bandwidth limits are captured explicitly. Crucially, minimum transceiver capabilities are not left to inference; they’re parameterized in tables maintained inside IEEE 802.3 projects. The result is a single operating-margin number that reflects both the channel’s impairments and a realistic, baseline SerDes. MATLAB example code and configuration spreadsheets, iterated alongside each project, made the method transparent, debuggable, and rapidly adoptable across companies.
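
A heavily simplified sketch of the resulting figure of merit is shown below; real COM (IEEE 802.3 Annex 93A) also sweeps equalizer settings and adds jitter and device noise terms, and the cursor values here are invented:

```python
import numpy as np

# Toy COM-style figure of merit: 20*log10(signal amplitude / total RMS
# noise at the sampling point), combining independent noise terms in RMS.

def toy_com(main_cursor, isi_cursors, xtalk_cursors, sigma_rx=0.005):
    sigma_isi = np.sqrt(np.sum(np.square(isi_cursors)))    # residual ISI
    sigma_xt = np.sqrt(np.sum(np.square(xtalk_cursors)))   # crosstalk
    sigma_total = np.sqrt(sigma_isi ** 2 + sigma_xt ** 2 + sigma_rx ** 2)
    return 20.0 * np.log10(main_cursor / sigma_total)

margin_db = toy_com(0.80, np.array([0.05, -0.03, 0.02]),
                    np.array([0.010, 0.008]))
print(f"toy COM = {margin_db:.1f} dB")  # compare against a pass threshold
```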

Process also mattered. Publishing representative channel models into IEEE working groups was an industry first. Instead of “ouch tests” where SerDes vendors reacted to whatever channels interconnect teams produced, standards bodies curated public S-parameter libraries that mirrored real backplanes and cables at 25, 50, and 100+ Gb/s per lane. That transparency let COM evolve collaboratively, tuning assumptions, refining parameter sets, and aligning equalization budgets, with evidence that mapped to shipping hardware. Over time, COM was adopted and extended across many Ethernet projects (802.3bj, bm, by, bs, cd, ck, df, and the ongoing dj effort), and it influenced parallel work in OIF and InfiniBand.

Why has COM endured? Three reasons. It aligns incentives by giving interconnect and SerDes designers a single scoreboard. It scales: the same pulse-response/statistical framework accommodates NRZ and PAM-N, evolving to higher baud rates with updated parameter tables and annexes (e.g., Annex 93A and 178A). And it’s verifiable: open example code and published channels shrink the gap between compliance and system bring-up. Looking ahead, the pressures are familiar—denser packages, rougher loss at higher Nyquist, more aggressive equalization, and tighter power. COM’s core idea—evaluate channels in the space where receivers actually operate—remains the right abstraction. It turns negotiation into engineering, replacing guesswork with a metric both sides can build to.

See the full Samtec whitepaper here.

Also Read:

Visualizing System Design with Samtec’s Picture Search

Webinar – Achieving Seamless 1.6 Tbps Interoperability with Samtec and Synopsys

Samtec Advances Multi-Channel SerDes Technology with Broadcom at DesignCon



Podcast EP300: Next Generation Metalization Innovations with Lam’s Kaihan Ashtiani
by Daniel Nenni on 07-30-2025 at 10:00 am

Dan is joined by Kaihan Ashtiani, Corporate Vice President and General Manager of atomic layer deposition and chemical vapor deposition metals in Lam’s Deposition Business Unit. Kaihan has more than 30 years of experience in technical and management roles, working on a variety of semiconductor tools and processes.

Dan explores the challenges of metallization for advanced semiconductor devices with Kaihan, where billions of connections must be patterned reliably while managing heat and signal integrity problems. Kaihan describes the move from chemical vapor deposition to the atomic layer deposition approach used for advanced nodes. He also discusses the motivations for the move from tungsten to molybdenum for metallization.

He explains that thin film resistivity challenges make molybdenum a superior choice, but working with this material requires process innovations that Lam has been leading. Kaihan describes the ALTUS Halo tool developed by Lam and the ways this technology addresses the challenges of metallization patterning for molybdenum, both in terms of quality of results and speed of processing.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.



Prompt Engineering for Security: Innovation in Verification
by Bernard Murphy on 07-30-2025 at 6:00 am


We have a shortage of reference designs to test detection of security vulnerabilities. An LLM-based method demonstrates how to fix that problem with structured prompt engineering. Paul Cunningham (GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO and lecturer at Stanford, EE292A) and I continue our series on research ideas. As always, feedback welcome.

The Innovation

This month’s pick, Empowering Hardware Security with LLM: The Development of a Vulnerable Hardware Database, was published at the 2024 IEEE International Symposium on Hardware-Oriented Security and Trust (HOST) and has 12 citations. The authors are from the University of Florida, Gainesville.

The authors use LLMs to create a large database (Vul-FSM) of FSM designs vulnerable to a set of 16 weaknesses, documented either in MITRE’s CWE database or in separate guidelines, by inserting these weaknesses into base designs. The intent is to use this dataset as a reference for security analysis tools or security mitigations; the dataset is available on GitHub. They also provide an LLM-based mechanism to detect such vulnerabilities.

The core of the method revolves around a structured approach to prompt engineering to generate (they claim) high-integrity test cases and methods for detection. Their prompt engineering methods, such as in-context learning, appear relevant to a broader set of verification problems.

Paul’s view

Hardware security verification is still a somewhat niche market today, but it is clearly on the rise. Open databases to check for known vulnerabilities are making good progress – for example, CWE (cwe.mitre.org) is often used by our customers. However, availability of good benchmark suites of labeled testcases with known vulnerabilities is limited, which in turn limits our ability to develop good EDA tools to check for them.

This month’s paper uses LLM prompt engineering with GPT-3.5 via OpenAI’s APIs to create a labeled benchmark suite of 10k Verilog designs for simple control circuit state machines with 3 to 10 states. Each of these designs contains at least one of 16 different known vulnerabilities and has been created from a base set of 400 control circuits that do not contain any vulnerabilities. The paper also describes an LLM-based vulnerability detection system for these same 16 vulnerabilities using prompt engineering, which is surprisingly effective: 80% likely on average to detect the vulnerability.

One of the best parts of the paper is Figure 6, which shows an example of an actual complete LLM prompt clearly divided into sections showing chain-of-thought (giving the LLM step-by-step instructions on how to solve the problem), reflexive verification (giving the LLM instructions on how to check that its response is correct), and exemplary demonstration (giving the LLM an example of a solution to the problem for another circuit). There are some decent charts elsewhere in the paper that show how much these prompt engineering techniques improve the quality of response from the LLM: about 10-20% depending on the vulnerability.
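
As a rough sketch of that prompt anatomy (the section names follow the paper’s terminology, but the wording and the helper function below are invented, not the authors’ actual Figure 6 prompt):

```python
# Hypothetical illustration of a structured detection prompt; the section
# contents are invented and are not copied from the paper.

def build_detection_prompt(rtl_source: str, weakness: str, example: str) -> str:
    chain_of_thought = (
        "Step 1: List all states and transitions in the FSM.\n"
        "Step 2: Identify the reset state and any unreachable states.\n"
        f"Step 3: Check each state and transition for the '{weakness}' weakness.\n"
    )
    reflexive_verification = (
        "Before answering, verify that every state you cite exists in the "
        "source and that each finding points to a specific line.\n"
    )
    exemplary_demonstration = f"Example of a correct analysis:\n{example}\n"
    return (
        f"{chain_of_thought}\n{reflexive_verification}\n"
        f"{exemplary_demonstration}\nDesign under analysis:\n{rtl_source}"
    )
```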

I’m grateful to the authors for their contribution to the security verification community here!

Raúl’s view

This paper introduces SecRT-LLM, a novel framework that leverages large language models (LLMs) to generate and detect security vulnerabilities in hardware designs, specifically finite state machines (FSMs). SecRT-LLM uses vulnerability insertion to create a benchmark of 10,000 small RTL FSM designs with 16 types of embedded vulnerabilities (Table II), many based on CWE (Common Weakness Enumeration) classes. It also does vulnerability detection, identifying security issues in RTL on this benchmark.

One of the key contributions is the integration of prompt engineering, LLM inference, and fidelity checking. The prompting strategies in particular are quite elaborate, aimed at guiding the LLM to perform the target task. Six tailored prompt strategies greatly improve LLM performance:

  • Reflexive Verification Prompting (self-scrutiny, e.g., indicate where and how the instructions in the prompt have been followed)
  • Sequential Integration Prompting (chain-of-thought, dividing a task into sub-tasks)
  • Exemplary Demonstration Prompting (example designs)
  • Contextual Security Prompting (inserting and identifying security vulnerabilities and weaknesses)
  • Focused Assessment Prompting (emphasize detailed examination of a specific design element such as a deadlock)
  • Structured Data Prompting (systematic arrangement of extensive data, for example as a table)

A prompt example is given in Fig. 6.

Experimental validation shows high accuracy in both insertion (~82% pass@1 and ~97% pass@5) and detection (~80% pass@1 and ~99% pass@5) of vulnerabilities. Automating this process drastically reduces time and cost compared to manual efforts.
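
For readers unfamiliar with pass@k, the commonly used unbiased estimator is sketched below (this is the standard definition from LLM code-generation work; the paper’s exact computation may differ):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Probability that at least one of k samples drawn from n attempts
    (of which c are correct) is correct."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g., 5 attempts with 4 correct: pass@1 = 0.8, pass@5 = 1.0
print(pass_at_k(5, 4, 1), pass_at_k(5, 4, 5))
```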

The paper applies AI capabilities to hardware security needs. Two major contributions are the benchmark of FSMs with embedded vulnerabilities, which serves as a resource for training and evaluating vulnerability detection tools, and the use of prompt engineering to guide LLMs in security-centric tasks. Most commercial tools today focus on verification, threat modeling, and formal methods, but do not yet deeply leverage LLMs for RTL vulnerability tasks. Research such as SecRT-LLM addresses this gap and may influence future commercialization of AI in this field.

Also Read:

New Cooling Strategies for Future Computing

Reachability in Analog and AMS. Innovation in Verification

A Novel Approach to Future Proofing AI Hardware



Calibre Vision AI at #62DAC
by Daniel Payne on 07-29-2025 at 10:00 am


Calibre is a well-known EDA tool from Siemens that is used for physical verification, but I didn’t really know how AI technology was being used, so I attended a Tuesday session at #62DAC to get up to speed. Priyank Jain of Calibre Product Management presented slides and finished with a Q&A session.

In the semiconductor world we’ve seen a hardware-centric viewpoint starting with PCs in the 80s and 90s, where software ran on general-purpose hardware. Today, it’s more of a software-defined world, where the software architecture drives the hardware implementation.

The vision with Calibre is to shift-left and reduce Turn Around Time (TAT), accomplished by running the tools earlier in the design and implementation flows and using AI techniques. A huge challenge of trying to run full-chip integration earlier is that it produces billions of DRC errors, making the tool load slowly and increasing debug time, all with little collaboration between engineering team members on what to fix first.

This challenge led to a new product called Calibre Vision AI, which enables full-chip analysis earlier in the implementation process by adding intelligent debug and user collaboration. With this new tool, engineers can quickly make sense of a DRC run that has billions of errors, as the AI feature clusters similar errors together, making it easier to identify systematic issues such as block overlap, bad via, fill overlap and more, and lets you prioritize which errors should be fixed first.

Calibre Vision AI has a modern, multi-threaded foundation for fast operation, a GUI with dynamic panels for quick debug, and navigation features to pinpoint the source of errors.

The GUI helps visualize a heat map showing the density of DRC errors. AI is used to cluster similar errors, and the AI works across all IC layout technologies with no model training required for tool users. Common failure causes are easily identified so that you will be more productive in fixing DRC errors. As engineers use the tool, they can place dynamic bookmarks on the layout to capture issues, assign work, and write notes for other team members to collaborate on the fixes.
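
As a conceptual illustration of location-based grouping (Siemens has not published its algorithm, so this generic density-based clustering sketch on synthetic data is only a stand-in):

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Group synthetic DRC violation locations by proximity: two dense hotspots
# plus scattered noise. This is NOT Calibre Vision AI's actual method.

rng = np.random.default_rng(0)
hotspot_a = rng.normal([100, 200], 2, size=(500, 2))   # (x, y) locations
hotspot_b = rng.normal([800, 650], 2, size=(300, 2))
scattered = rng.uniform(0, 1000, size=(50, 2))
errors = np.vstack([hotspot_a, hotspot_b, scattered])

labels = DBSCAN(eps=10, min_samples=20).fit_predict(errors)
n_groups = len(set(labels)) - (1 if -1 in labels else 0)
print(f"{len(errors)} violations collapsed into {n_groups} groups "
      f"({int(np.sum(labels == -1))} outliers)")
```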

It’s recommended that you run the Calibre RVE tool at the block level and for tapeouts, and run Calibre Vision AI for chip-level analysis at early stages, as the two tools complement each other. Using Calibre Vision AI for full-chip analysis accelerates full-chip debug through the high capacity and multi-threaded technology. Heat maps of errors show the entire die, so that you can pinpoint areas of highest interest. Results are visualized instantly, even when there are millions of errors. One comparison showed that for 790 million DRC errors a traditional ASCII flow would load in 15 minutes, while a Vision AI flow using OASIS loaded in just 45 seconds.

Early users of Vision AI reported that it was faster to identify systematic issues and that DRC debug iterations were cut in half. For example, one run had 600M errors from 3,400 checks, which were reduced to just 381 signal groups, or clusters.

Siemens has many EDA tools using AI techniques.

There are three places where AI is used in Calibre Vision AI:

  • Chatbot – EDA knowledge using prompts
  • Reasoning – Data analysis and summarization
  • Tool Operations – Performing complex tool functions from prompts

Summary

DRC analysis and debug tasks that once required hours can now be completed in just minutes by using AI-based clusters. Teams doing physical design can collaborate and communicate more efficiently by using bookmarks, block debug, and attaching reports.

Q&A

Q: Is there any plan to auto-fix DRC errors?

A: AI quickly groups similar DRC violations for easier root-cause analysis, but we still need a human in the loop to fix the violations.

Q: Can I create new Signals?

A: Vision AI comes with a set of Signals out of the box, and users can also create their own custom signals from their own checks (e.g., M1 checks first).

Q: What’s the difference between RVE classifier and AI?

A: AI elevates the classifier by 100X, analyzing results, locations, proximity, and root causes, and clustering by groups. RVE is good for fewer errors, but AI works on billions of errors and earlier in the process.

Q: Can you aggregate AI across multiple designs, trends, library cells in common, broad trends?

A: It’s under development, stay tuned for a future release.

Q: Are signal groups an AI classification?

A: We use unsupervised learning to create the groups by location and proximity.




Musk’s new job as Samsung Fab Manager – Can he disrupt chip making? Intel outside
by Robert Maire on 07-29-2025 at 6:00 am


– Musk chip lifeline to Samsung comes with interesting strings attached
– Musk chose Samsung over Intel – What does that say about Intel?
– Musk will hold sway over Samsung much as Apple/NVDA over TSMC
– Will Musk do a “DOGE” on chip tool makers? How much influence?

Tesla/Samsung $16.5B deal has many, many ramifications to rile chips

Samsung has been flailing for quite some time in the foundry business. TSMC is running away with the foundry industry leaving both Samsung and Intel far behind eating dust. Samsung just got a huge lifeline in the form of an endorsement from none other than Elon himself.

The Taylor, Texas fab, which had been on hold due to a lack of customers, now has a customer big enough to fill the whole fab and then some. It puts the fab back on track overnight and puts Samsung back in the foundry business.

We are 100% certain that Musk got a super sweetheart deal that exacted a few pounds of flesh from Samsung which was backed up against a wall.

Certainly a way better deal than he could have gotten from TSMC which is up to its eyeballs in demand especially from the likes of Apple and Nvidia.

This also clearly puts Samsung in the middle of the AI business in a way they never could have by themselves.

“Intel Outside” – What does Musk’s choice say about Intel?

We are sure that Intel would have given away the farm to get this deal from Musk. It would have been the deal they needed to justify 14A and beyond. It would have been the deal of the century to rescue Intel……but it wasn’t……

The real question is why not Intel? Did they not offer Musk enough? I doubt it. Maybe there is not enough faith in Intel’s ability to execute. Maybe concern about viability.

Maybe Musk just wanted to thumb his nose at the US chip company (Intel) and the current Trump administration trying to come up with a post CHIPS Act strategy that works.

Maybe Musk just liked the short commute to the Taylor fab in Texas….

Maybe it’s all of the above……

But what it clearly is, is very bad for Intel to be the last person standing at the chip industry dance without a partner…..

Musk’s new role: “Samsung Fab Manager”

A few Musk words that should strike fear into every semiconductor equipment maker:

“Samsung agreed to allow Tesla to assist in maximizing manufacturing efficiency. This is a critical point, as I will walk the line personally to accelerate the pace of progress. And the fab is conveniently located not far from my house”

Read that line again; ….. “I will walk the line (fab production line) personally to accelerate the pace of progress”

Imagine seeing Musk in a bunny suit inside the fab talking to tool operators…..It just blows my mind….and the fact that Samsung agreed to this shows just how desperate they are.

Is Musk going to personally negotiate with tool makers about the price/performance of their tools? Don’t be surprised, as he has very clearly and completely disrupted other industries. He is certainly smart enough and rich enough to turn the chip industry on its head….

  • Electrify autos
  • Reusable spaceships
  • Global high speed internet
  • Tunnels
  • Robots
  • AI
  • Drive Ins
  • Flamethrowers
  • DOGE
  • A third political Party
  • The semiconductor industry is a piece of cake

Tesla over other auto makers and other AI suppliers

Tesla now has guaranteed bleeding-edge, US-sourced, tariff-resistant chip capacity for its cars, versus GM stuck with ancient, outdated GlobalFoundries and unreliable, tariffed foreign fabs……

Tesla/Musk now can get critical AI chip capacity for its robots, cars, etc., and not be beholden to Nvidia/TSMC

Quite a stroke of genius……

The stocks

Obviously a big positive for both Samsung and Tesla.

Obviously a negative for Intel and TSMC

A negative for GM, Ford, BMW, Mercedes, Toyota, etc., left in the analog dust….

Gotta love Texas……

Positive for chip tool makers in that Samsung’s Texas fab is back on but negative given the potential involvement of Musk in running the fab and impacting decisions.

Positive for all those former DOGE Musk minions (including “Big Balls”) who will now have jobs “accelerating the pace of progress” in Samsung’s Taylor fab.

About Semiconductor Advisors LLC
Semiconductor Advisors is an RIA (a Registered Investment Advisor), specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies.
We have been covering the space longer and been involved with more transactions than any other financial professional in the space.
We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors.
We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.
Also Read:

Elon Musk Given CHIPS Act & AI Oversight – Mulls Relocation of Taiwanese Fabs

CHIPS Act dies because employees are fired – NIST CHIPS people are probationary

Trump whacking CHIPS Act? When you hold the checkbook, you make up the new rules



Enabling the Ecosystem for True Heterogeneous 3D IC Designs
by Kalar Rajendiran on 07-28-2025 at 10:00 am


The demand for higher performance, greater configurability, and more cost-effective solutions is pushing the industry toward heterogeneous integration and 3D integrated circuits (3D ICs). These solutions are no longer reserved for niche applications—they are rapidly becoming essential to mainstream semiconductor design. However, their success hinges on the development of a robust ecosystem that brings together chiplet developers, foundries, OSATs, substrate suppliers, EDA vendors, and test providers. This ecosystem must support standardized workflows, interoperable tools, and reusable components to ensure seamless design, integration, and manufacturing across the entire 3D IC value chain.

Siemens EDA is leading this shift by enabling such an ecosystem through structured workflows, collaborative standards, interoperable tools and a new class of design enablement kits.

The Shift to System Technology Co-Optimization

To meet modern design requirements, the industry is embracing a system-level methodology known as System Technology Co-Optimization (STCO). Rather than designing a single monolithic SoC, STCO breaks down functionality into modular chiplets, each optimized for specific tasks and potentially manufactured using different process nodes or by different vendors. These chiplets are then integrated into a unified 3D IC package. This approach offers several advantages. Designers can achieve higher performance by using specialized chiplets for different functions, improve yields by isolating defects to individual modules, and reduce costs by combining mature and leading-edge technologies in a single package.

However, these benefits come with significant challenges. Coordinating the design, integration, and testing of multiple chiplets within a complex 3D package requires new tools, workflows, and standards that go beyond traditional IC design.

Enabling Design with 3D IC Design Kits

Recognizing the above-mentioned challenges, Siemens EDA has introduced a comprehensive framework of 3D IC Design Kits (3DKs) to support every phase of the design process. These kits were developed in collaboration with the Chiplet Design Exchange (CDX), a working group within the Open Compute Project that includes EDA vendors, foundries, OSATs, and system designers.

The first of these kits, the Chiplet Design Kit (CDK), provides standardized models for defining the electrical and physical characteristics of a chiplet. Built on the JEDEC JEP30 part model and enhanced with the CDXML schema, CDKs make chiplet attributes machine-readable and easily integrable into design workflows. The Package Assembly Design Kit (ADK) defines the mechanical and electrical rules for assembling chiplets, interposers, and substrates into a complete 3D stack. This includes specifications for spacing, pitch, and orientation, and may soon incorporate IEEE’s 3Dblox standard for describing 3D structures.
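
To illustrate what “machine-readable chiplet attributes” could look like in practice, here is a purely hypothetical descriptor and a few lines that parse it; the element names are invented for illustration and do not reflect the actual JEP30 or CDXML schemas:

```python
import xml.etree.ElementTree as ET

# Purely hypothetical chiplet descriptor; element and attribute names are
# invented and do NOT reflect the real JEDEC JEP30 or CDX CDXML schemas.
cdk_entry = """
<chiplet name="example_serdes_die">
  <process node="5nm" foundry="example_foundry"/>
  <interface protocol="UCIe" lanes="16" data_rate_gbps="32"/>
  <bump_map pitch_um="45" count="4096"/>
  <power max_w="3.5" supply_v="0.75"/>
</chiplet>
"""

root = ET.fromstring(cdk_entry)
iface = root.find("interface")
print(root.get("name"), "-", iface.get("protocol"),
      iface.get("lanes"), "lanes @", iface.get("data_rate_gbps"), "Gb/s")
```

The point is that a design tool, or a marketplace catalog as discussed below, can query such attributes directly instead of re-reading datasheets.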

The Material Design Kit (MDK) addresses a previously unmet need: standardizing the material properties of components for use in thermal, stress, and electrical analyses. Instead of relying on manually input data from vendors, MDKs make this information readily available in formats that can be directly imported into EDA and MCAD analysis tools. Finally, the Package Test Design Kit (TDK) defines how embedded chiplets are tested at various stages, from wafer sort to final system-level validation. These kits include physical test pin locations, test modes, and interface specifications essential for planning and executing test strategies.

Building a Connected Chiplet Marketplace

Beyond enabling the technical workflows, Siemens EDA envisions a broader transformation of the supply chain through a standardized chiplet marketplace. CDKs, encoded in the JEDEC part model format, can serve as entries in an electronic catalog of chiplets. This allows system designers to discover, evaluate, and select components based on standardized attributes. In the future, this marketplace could also support inventory management, procurement, and real-time supply chain visibility, thereby streamlining business transactions between chiplet suppliers and customers. This open marketplace model has the potential to democratize 3D IC design by lowering barriers to entry and fostering innovation beyond the realm of Tier 1 hyperscalers.

Authoring Tools and Open Access Initiatives

To support the widespread adoption of 3DKs, Siemens EDA is also investing in the development of authoring tools that simplify the creation and use of machine-readable models. While many of the underlying formats are based on XML for compatibility with automated tools, XML is not user-friendly for manual editing. Siemens EDA proposes the creation of open, EDA-neutral authoring tools for ADKs and MDKs that would ensure consistency across different workflows and enable a diverse set of vendors to contribute to the ecosystem.

These tools would allow design and manufacturing stakeholders to align on a shared set of rules and material properties, ultimately enabling a more efficient and collaborative supply chain. By enabling consistent and reproducible design parameters, these tools can help generate EDA-specific PDKs that are tailored for individual design environments while maintaining a common data backbone.

Toward an Open and Scalable 3D IC Ecosystem

To date, early adoption of 3D IC technologies has been concentrated among large cloud providers and HPC-focused companies. These organizations have largely operated within closed ecosystems, building custom chiplets for high-performance compute and AI processors. While effective for their specific needs, these proprietary environments limit broader participation and reuse.

Siemens EDA is working to expand the reach of 3D IC technology by promoting open standards, reusable chiplet components, and accessible design tools. The adoption of 3DKs by foundries, OSATs, material providers, EDA vendors, and system integrators is an essential step toward realizing a scalable, heterogeneous 3D IC design ecosystem. This vision supports not only high-end computing but also emerging applications in consumer electronics, automotive systems, IoT devices, and beyond.

Summary

Heterogeneous 3D IC design represents a profound shift in how semiconductor systems are conceived, developed, and manufactured. Siemens EDA is playing a pivotal role in enabling this transition by offering a comprehensive suite of tools, standards, and workflows that make 3D ICs more accessible and scalable. Through collaborative initiatives like the CDX and the development of open, interoperable 3DKs, Siemens EDA is helping to pave the way for a future where innovative semiconductor designs, addressing not only HPC but other emerging applications too, can thrive across a truly global and inclusive ecosystem.

This topic is discussed in detail in a whitepaper from Siemens EDA and can be downloaded from here.

Also Read:

Scaling 3D IC Technologies – Siemens Hosts a Meeting of the Minds at DAC

Siemens Proposes Unified Static and Formal Verification with AI

Protecting Sensitive Analog and RF Signals with Net Shielding



Why I Think Intel 3.0 Will Succeed
by Daniel Nenni on 07-28-2025 at 6:00 am


Probably one of the most anticipated semiconductor investor calls was held last week and it did not disappoint. It was Lip-Bu Tan’s first full quarter since he took over as CEO. In the resulting discussions on the SemiWiki Forum I am viewed as overly optimistic about Intel’s recent pivot. That is true, I am optimistic, but my observations and opinions are based on 40 years of semiconductor experience, 30 of which included foundries such as TSMC, UMC, SMIC, Samsung, Chartered, GlobalFoundries, etc… I also co-authored a book, “Fabless,” on the subject, so this is not an armchair quarterback piece.

As it stands today, TSMC is the dominant force in the foundry industry which is the result of 30+ years of hard work. I experienced this first hand. One of the most important parts of TSMC’s success is that they do not compete with customers. Another is that TSMC is laser focused on yield which not only builds a strong financial base but also results in customer/partner trust and loyalty.

Bottom line: When TSMC says they are going to deliver something they over-deliver, absolutely.

TSMC’s leadership should also be recognized. Dr. Morris Chang and Dr. CC Wei should be in the semiconductor CEO hall of fame along with Dr. Andy Grove, Jensen Huang, Dr. Lisa Su, and Hock Tan. These CEOs have made semiconductors what they are today, a critical part of modern life.

Intel is one of the most important, if not THE most important, companies in the history of semiconductors. The innovation and technology that have spawned from Intel are too numerous to count, but I could easily say that the semiconductor industry would not be where we are today without Intel.

Unfortunately, being a dominant semiconductor company for so many years is a blessing and a curse. Intel lost focus, and let’s just say that the massive Intel ego was no longer serviceable.

An example of that is when Intel decided to be a foundry in 2010. I had direct experience with this and the first thing that struck me was that Intel had no idea what the foundry business really was. Career Intel executives took charge and without practical foundry experience they failed. Intel decided to give the foundry business another try when Pat Gelsinger took charge in 2021, which again failed even though Intel brought in outside expertise.

Let’s just say, in a nutshell, the Intel culture was not foundry friendly.  The foundry business is customer centric, thanks to TSMC, and that was not the Intel way, in my opinion. An example of that is building PDKs and fabs without direct customer involvement. We call it the Field of Dreams approach where you do something and expect customers to come running. That takes a very big ego and rarely does it succeed in my experience.

In 2025 Intel landed a foundry-experienced CEO. Lip-Bu Tan is a famed venture capitalist who joined the board of Cadence Design Systems in 2004 and accepted the CEO position in 2008. As a side note, Lip-Bu replaced Mike Fister, a career Intel executive, as Cadence CEO. Let’s just say that Mike left Cadence in much worse shape than when he joined. Mike’s ego was legendary. I have known Cadence since before they were Cadence so I experienced this first hand as well.

Lip-Bu led a significant culture change at Cadence that brought them back to a leadership position in EDA. Cadence was in decline with $1 billion in revenue in 2008. Today Cadence is a $5 billion company with double-digit growth.

Why am I optimistic for Intel 3.0?

First and foremost, Lip-Bu Tan. Lip-Bu knows the foundry business. He is an important part of the semiconductor ecosystem and is very customer centric. I saw Lip-Bu in Taiwan many times as TSMC was not only a Cadence partner but also a big customer. In fact, all of the top semiconductor companies are Cadence customers and I can assure you Lip-Bu knows the CEOs on a first name basis.

Second, Lip-Bu is transparent, he will tell it like it is, he will not tell you what you want to hear, he will set your expectations knowing full well he will beat them. Lip-Bu may have learned this from TSMC because that is a key part of the TSMC corporate culture.

Third, Lip-Bu would not have taken this job without a plan in mind. He is not in this for the money; he took this job to guarantee himself a spot in the semiconductor CEO Hall of Fame. There is no other explanation, this is all about his legacy and his respect for Intel. While I do believe that Lip-Bu uncovered more problems inside Intel than he was aware of as a board member, I have complete confidence in his abilities to facilitate change.

Yes, the investor call did not sound optimistic on the foundry side, but remember, Lip-Bu Tan sets expectations so he can beat them.

What does Intel Foundry need to do to succeed? I will write about that next but Lip-Bu already knows this so it will not be a surprise to him or his consolidated executive staff.

Also Read:

Making Intel Great Again!

Should Intel be Split in Half?

Should the US Government Invest in Intel?



CEO Interview with Jutta Meier of IQE
by Daniel Nenni on 07-25-2025 at 10:00 am


Jutta Meier is an experienced executive who has held senior positions at global semiconductor companies for over 25 years. She joined IQE in January 2024 as CFO and was announced as IQE’s CEO in May of 2025. She joined IQE after serving at Intel Corporation as a Senior Finance Director at Intel Foundry Services, supporting Intel’s foundry business transformation. Prior to joining Intel, Jutta served as Vice President of Finance at GlobalFoundries Inc., a global leader in semiconductor manufacturing, and she also held various positions at AMD.

Tell us about your company.

IQE enables the technologies that power our everyday lives, from smartphones and data centers to electric vehicles and advanced communications systems – by engineering the compound semiconductor materials at their core. We don’t make chips; we make the epitaxial wafers that make high-performance chips possible.

For over 30 years, IQE has led in compound semiconductors. Today, we remain focused on advancing smarter, faster, more efficient technologies – responsibly and sustainably, while enabling a more connected, inclusive world.

What problems are you solving?

The world is demanding more from technology — more performance, more efficiency, more reliability — and materials are where that progress starts.

Our customers come to us when they need differentiated performance. That means improving power conversion, enabling higher data throughput, supporting ultra-small displays and pushing the boundaries of speed and miniaturization. We also help customers manage complex supply chains and cost challenges, especially as demand grows for domestic sourcing and regional resilience.

What application areas are your strongest?

We focus where performance truly matters. Today, that includes:

  • Power electronics, where GaN on silicon is improving efficiency and reducing energy loss in everything from electric vehicles to AI infrastructure.
  • RF and 5G systems, where GaN on Silicon and GaN on Silicon Carbide support ultra-fast, low-latency, reliable wireless performance.
  • Optical communications, where our InP-based materials are helping move data faster and more efficiently.
  • MicroLED displays, especially in RGB applications, where precision and uniformity determine display viability.
  • Photonics, where our materials support sensing, imaging, and emerging applications like LiDAR and quantum.

What unites these is our deep expertise in epitaxy. We don’t just achieve performance, we scale it reliably.

What keeps your customers up at night?

A few things, depending on who you ask. From a technology standpoint, it’s about staying ahead of the curve while managing risk. Reliability, scalability and supply security are always top concerns.

But more broadly, I hear a lot of concerns around readiness. Is the ecosystem ready for the next wave of demand? Is the supply chain robust enough? Are the right partners in place? That’s where we come in, not just as a materials supplier, but as a long-term strategic partner who helps solve upstream challenges before they become downstream problems.

What does the competitive landscape look like and how do you differentiate?

The landscape is evolving quickly, especially as compound semiconductors gain mainstream traction.

What differentiates IQE is our ability to combine world-class epitaxial technology with deep customer alignment. We’re not trying to be everything to everyone. We focus where we know we can make a difference: in high-performance applications where material quality, consistency, and scale really matter.

We also bring a global footprint and a proven track record, which is important for customers navigating geopolitical uncertainty. That’s been especially critical in aerospace and security, where our work supports governments’ goals of building secure and resilient domestic supply chains.

What new features or technology are you working on?

As a pioneer in GaN with 20+ years of experience, we’re doubling down across power and RF to meet demand from AI to energy infrastructure. That includes scaling GaN on Silicon and improving performance and manufacturability at the epitaxy level.

We’re pushing forward in microLED, especially RGB, where our materials can enable brighter, more efficient and immersive displays. And our work in InP-based photonics is opening up exciting possibilities in data centers, telecom, and sensing applications.

Critically, we’re innovating in process, not just materials, ensuring these breakthroughs scale from lab to fab.

How do customers normally engage with your company?

In compound semiconductors, device performance is set by the epitaxy. Therefore, our engagement starts early, often at the design or feasibility stage, because decisions made at the epitaxy level ripple through the entire device stack.

That means a lot of collaboration, problem-solving and trust which is underpinned by long-term partnerships that span decades with some customers. We’re also engaging with new customers and players across key growth markets, helping them ramp faster by sharing our experience and technical depth.

It’s a high-touch model, but we believe it’s the best way to deliver value, especially in a space where precision, reliability and innovation are non-negotiable.

Also Read:

Executive Interview with Ryan W. Parker of Phononic Inc.

CEO Interview with Jon Kemp of Qnity

Executive Interview with Matthew Addley

CEO Interview with Jonathan Reeves of CSignum



Podcast EP299: The Current and Future Capabilities of Static Verification at Synopsys with Rimpy Chugh
by Daniel Nenni on 07-25-2025 at 10:00 am

Dan is joined by Rimpy Chugh, a Principal Product Manager at Synopsys with 14 years of varied experience in EDA and functional verification. Prior to joining Synopsys, Rimpy held field applications and verification engineering positions at Mentor Graphics, Cadence and HCL Technologies.

Dan explores the expanding role of static verification with Rimpy. She describes significant improvements in static verification driven by increasing design complexity. These include scaling the technology to process much larger designs, efficiently analyzing many more violations, and identifying a larger class of bugs earlier in the design cycle (shift left). She describes the increasing usage of AI in tools such as Synopsys VC SpyGlass to identify coding practices and constraints that can cause issues during physical implementation.

She discusses how a CDC-aware synthesis flow can avoid over- and under-constrained designs. She explains how the issues associated with implementation design checks (IDC) can result in either unreliable designs or designs with sub-optimal PPA. She describes the current and future work going on at Synopsys to avoid these issues early in the design flow with lower designer effort.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.



Executive Interview with Ryan W. Parker of Phononic Inc.
by Daniel Nenni on 07-25-2025 at 9:00 am

Ryan W. Parker is a seasoned executive and product leader at Phononic Inc., where he oversees high-tech product incubation and drives P&L strategy. With a robust background at Intel’s IoT Group, Ryan has successfully led multi-disciplinary teams transforming cutting-edge semiconductor and IoT technologies into scalable, market-ready products.

An alum of both Arizona State University (W. P. Carey School of Business) and Intel’s internal leadership programs, Ryan combines business acumen with technical depth. His expertise lies in guiding innovation through rigorous operational discipline, streamlining development, optimizing supply chains, and achieving measurable commercial outcomes.

Known for his collaborative leadership style, Ryan cultivates cross-functional alignment between engineering, marketing, and manufacturing. At Phononic, he’s instrumental in scaling solid-state cooling solutions that address emerging edge-compute, telecom, and medical applications with zero-refrigerant, energy-efficient hardware.

Tell us about your company.

Phononic is changing the way datacenters cool.  We’re bringing solid-state technology to an industry that’s long overdue for a smarter approach.  Our first focus is transceivers that fit in tight spaces, with highly localized power, and no room for error.  That’s where we’re making an immediate difference.  Long-term, we’re building the thermal layer that AI infrastructure needs to scale.

What problems are you solving?

AI is pushing more power through the datacenter than ever before, and it’s generating heat that traditional cooling can’t handle.  We started with transceivers because they’re compact, high-density, and performance sensitive.  Our platform helps customers avoid throttling, cut energy use, and get more out of what they already have.

Where are you focused?

Right now, transceivers.  It’s a clear bottleneck with a fast ROI.  However, the same challenges exist across the datacenter, and our platform is built to meet these needs.

What keeps your customers up at night?

They’re trying to scale AI deployments but running into thermal and power walls.  They don’t want to rip and replace, they want a way to unlock performance with the systems they already trust.  That’s exactly what we’re doing, while also building for the future.

How do you differentiate?

Most cooling is fixed and passive.  Ours adapts.  It’s responsive, localized, and tuned to actual system loads.  That means better performance and less waste.  Also, since we own the full IP stack, we can work directly with customers to deliver solutions, not just parts.

What’s next?

While we’re focused on transceivers today, we’re already seeing strong interest in applying our platform more broadly.  The demand is growing, and we’re actively building toward that expansion.

How do customers engage?

We don’t just sell parts, we solve real problems.  Our customers know where the system constraints are, and we bring the thermal tech to break through them.  It’s a hands-on partnership built around performance and shared expertise.

Also Read:

CEO Interview with Jon Kemp of Qnity

Executive Interview with Matthew Addley

CEO Interview with Jonathan Reeves of CSignum

CEO Interview with Shelly Henry of MooresLabAI